Monday, November 01, 2010

International Commons Conference Kicks Off

The International Commons Conference starts in Berlin today -- with the aim of "Constructing a Commons-Based Policy Platform".

Essentially the aim is to bring together representatives from all the open, free and "commons" movements to discuss what they share in common -- in the hope that a more unified approach will emerge, and that the necessary networks will be created to enable the larger movement to become more politically effective.

I discussed the background to the conference in a recent interview with the event organiser Silke Helfrich here.

As Helfrich put it, "The global movement of commoners today is eclectic and growing, but fragmented ... Taken together all these movements are actually part of a big civic movement that is about to discover its own identity, just as the environmental movement did some 30 or 40 years ago."

She added: "Co-operation is the best way for them to grow and become politically relevant. So the goal should be to persuade the various advocates that they have much to gain from working together."

The conference can be followed via video stream here.

Monday, October 18, 2010

Interview with Jordan Hatcher

Over the past twenty years or so we have seen a rising tide of alternative copyright licences emerge — for software, music and most types of content. These include the Berkeley Software Distribution (BSD) licence, the General Public Licence (GPL), and the range of licences devised by Creative Commons (CC). More recently a number of open licences and “dedications” have also been developed to help people make data more freely available.

The various new licences have given rise to terms like “copyleft” and “libre” licensing, and to a growing social and political movement whose ultimate end-point remains to be established.

Why have these licences been developed? How do they differ from traditional copyright licences? And can we expect them to help or hinder reform of the traditional copyright system — which many now believe has got out of control? I discussed these and other questions in a recent email interview with Jordan Hatcher.

A UK-based Texas lawyer specialising in IT and intellectual property law, Jordan Hatcher is co-founder of OpenDataCommons.org, a board member of the Open Knowledge Foundation (OKF), and blogs under the name opencontentlawyer.

Jordan Hatcher

Big question

RP: Can you begin by saying something about yourself and your experience in the IP/copyright field?

JH: I’m a Texas lawyer living in the UK and focusing on IP and IT law. I concentrate on practical solutions and legal issues centred on the intersection of law and technology. While I like the entire field of IP, international IP and copyright are my favourite areas.

As to more formal qualifications, I have a BA in Radio/TV/Film, a JD in Law, and an LLM in Innovation, Technology and the Law. I’ve been on the team that helped bring Creative Commons licences to Scotland and have led, or been a team member on, a number of studies looking at open content licences and their use within universities and the cultural heritage sector.

I was formerly a researcher at the University of Edinburgh in IP/IT, and for the past 2.5 years have been providing IP strategy and IP due diligence services with a leading IP strategy consultancy in London.

I’m also the co-founder and principal legal drafter behind Open Data Commons, a project to provide legal tools for open data, and the Chair of the Advisory Council for the Open Definition. I sit on the board for the Open Knowledge Foundation.

More detail than you could ask for is available on my web site here, and on my LinkedIn page here.

RP: It might also help if you reminded us what role copyright is supposed to play in society, how that role has changed over time (assuming you feel it has), and whether you think it successfully plays the role society has assigned to it today.

JH: Wow, that’s a big question, and one whose answer has changed quite a bit since the origin of copyright. As with most law, I take a utilitarian / legal realist view that the law is there to encourage a set of behaviours.

Copyright law is often described as being created to encourage more production and dissemination of works, and like any law, it’s imperfect in its execution.

I think what’s most interesting about copyright history is the technology side (without trying to sound like a technological determinist!). As new and potentially disruptive technologies have come along and changed the balance — from the printing press all the way to digital technology — the way we have reacted has been fairly consistent: some try to hang on to the old model as others eagerly adopt the new model.

For those interested in learning more about copyright’s history, I highly recommend the work of Ronan Deazley, and suggest people look at the first sections in Patry on Copyright. They could also usefully read Patry’s Moral Panics and the Copyright Wars. Additionally, there are many historical materials on copyright available at the homepage for a specific research project on the topic here.

Three tranches

RP: In the past twenty years or so we have seen a number of alternative approaches to licensing content develop — most notably through the General Public Licence and the set of licences developed by the Creative Commons. Why do you think these licences have emerged, and what are the implications of their emergence in your view?

JH: I see free and open licence development as happening within three tranches, all related to a specific area of use.

1. FOSS for software. Alongside the GPL, there have been a number of licences developed since the birth of the movement (and continuing to today), all aimed at software. These licences work best for software and tend to fall over when applied to other areas.

2. Open licences and public licences for content. These are aimed at content, such as video, images, music, and so on. Creative Commons is certainly the most popular, but definitely not the first. The birth of CC does, however, represent a watershed moment in thinking about open licensing for content.

I distinguish open licences from public licences here, mostly because Creative Commons is so popular. “Open” has so many meanings to people (as does “free”) that it is critical to define from a legal perspective what is meant when one says “open”. The Open Knowledge Definition does this, and states that “open” means users have the right to use, reuse, and redistribute the content with very few restrictions — only attribution and share-alike are allowed restrictions, and commercial use must specifically be allowed.

The Open Definition means that only two of the main six CC licences are open content licences — CC-BY and CC-BY-SA. The other four involve the No Derivatives (ND) restriction (thus prohibiting reuse) or the Non Commercial (NC) restriction; those four are what I refer to as “public licences”, in other words licences provided for use by the general public. [A small worked example of this open/public test appears after this list.]

Of course CC’s public domain tools, such as CC0, all meet the Open Definition as well because they have no restrictions on use, reuse, and redistribution.

I wrote about this in a bit more detail recently on my blog.

3. Open data licences. Databases are a little like both content and software in what users want to do with them and in how licensors want to protect them, but they differ from both in the legal rights that apply and in how database creators want to use open data licences.

As a result, there’s a need for specific open data licences, which is why we founded Open Data Commons. Today we have three tools available. It’s a new area of open licensing and we’re all still trying to work out all the questions and implications.
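To make the open/public distinction above concrete, here is a minimal sketch (an editorial illustration, not a tool of Hatcher's) of the Open Definition test he describes: attribution (BY) and share-alike (SA) are the only permitted restrictions, so any licence carrying NC or ND fails it. The licence codes are the standard Creative Commons abbreviations.

```python
# A minimal, illustrative sketch of the Open Definition test described above:
# attribution (BY) and share-alike (SA) are the only permitted restrictions,
# so any licence carrying NC (Non Commercial) or ND (No Derivatives) fails.

OPEN_RESTRICTIONS = {"BY", "SA"}

def is_open(licence: str) -> bool:
    """Return True if a CC-style licence code (e.g. 'CC-BY-NC') meets the test."""
    restrictions = set(licence.upper().split("-")[1:])  # drop the 'CC'/'CC0' prefix
    return restrictions <= OPEN_RESTRICTIONS

for code in ["CC-BY", "CC-BY-SA", "CC-BY-NC", "CC-BY-ND",
             "CC-BY-NC-SA", "CC-BY-NC-ND", "CC0"]:
    print(code, "->", "open" if is_open(code) else "public (not open)")
# Only CC-BY, CC-BY-SA and CC0 (which carries no restrictions) come out as open.
```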

Open data

RP: As you say, data needs to be treated differently from other types of content, and for this reason a number of specific licences have been developed — including the Public Domain Dedication and Licence (PDDL), the Public Domain Dedication and Certification (PDDC), and Creative Commons Zero. Can you explain how these licences approach the issue of licensing data in an open way?

JH: The three you’ve mentioned are all aimed at placing work into the public domain. The public domain has a very specific meaning in a legal context: It means that there are no copyright or other IP rights over the work. This is the most open/free approach as the aim is to eliminate any restrictions from an IP perspective.

There are some rights that can be hard to eliminate, and of course patents may still be an issue depending on the context (but perhaps that’s a conversation for another time).

In addition to these tools, we’ve created two additional specific tools for openly licensing databases — the ODbL and the ODC-Attribution licences.

RP: Can you say something about these tools, and what they bring to the party?

JH: All three are tools to help increase the public domain and make it more known and accessible.

There’s some really exciting stuff going on with the public domain right now, including with PD calculators — tools to automatically determine whether a work is in the public domain. The great thing about work in the public domain is that it is completely legally interoperable, as it eliminates copyright restrictions.
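By way of illustration (an editorial sketch, not one of the calculators Hatcher mentions), the core of a PD calculator is a set of encoded term rules. The version below assumes a bare "life of the author plus 70 years" rule, with the term running to the end of the calendar year; real calculators must handle many more jurisdiction-specific rules.

```python
from datetime import date

# A deliberately naive public domain calculator. It encodes a single rule --
# copyright lasts until 70 years after the author's death, expiring at the
# end of that calendar year -- and ignores the many jurisdiction-specific
# complications (anonymous works, government works, wartime extensions...).

TERM_YEARS = 70

def in_public_domain(author_death_year, year=None):
    """True if the life+70 term has expired by the given (or current) year."""
    if year is None:
        year = date.today().year
    # The term runs to the end of the 70th year after death, so the work
    # enters the public domain on 1 January of the 71st year.
    return year > author_death_year + TERM_YEARS

print(in_public_domain(1900, year=2010))  # True: in the public domain since 1971
print(in_public_domain(1950, year=2010))  # False: not until 1 January 2021
```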

RP: Are there now open, free or public licences for every type of content?

JH: There’s at least something out there for everything that I know of, though there are edge cases in openly licensing trademarks or in some patent communities. Who knows though what we’ll be talking about in 10 years?!

RP: You said that non-commercial restrictions do not conform to the Open Knowledge Definition of open. In fact, many people argue that NC makes no sense at all, not least because it is practically impossible to define what non-commercial means. What are your thoughts on this?

JH: The arguments against Non Commercial restrictions tend to centre on the fact that NC breaks compatibility with other, open licences (since “open” means allowing commercial use). While NC restrictions aren’t open, that doesn’t mean they aren’t useful. Many successful publishers and authors in fact incorporate NC restrictions into their online strategy.

Creative Commons did a study to try to understand more about what people mean with NC, and found that many licensors and licensees generally agree on the broad activities covered by the non-commercial restriction. While there are challenges with defining some of the edge cases around non-commercial use, there’s a definite norm built into using it as a licensing term.

Distributed production

RP: How important do you think the rise of digital media and the Internet has been in the emergence of free and open licensing (and of the free and open source software movements that have accompanied it)?

JH: Digital technology and the internet have been absolutely critical to the emergence and role of free and open licensing. Free and open licensing is a tool to harness and encourage distributed production — lots of people working at different times and in different places. That’s the great thing about open source — giving access to the human-readable code allows the “many eyes” method of production that Eric S. Raymond talks about.

One of my favourite examples of the power of distributed production (though not open licensing) is anime fansubs. An anime show can go on the air in Japan and in less than 24 hours be translated into English, subtitles inserted, format shifted, and then distributed out on the web via a worldwide network of unpaid people. Now of course whether that activity is legal is a whole different question, which I’ve written about a bit here.

RP: Most people seem to think that open and free licences provide a new type of copyright. That is not strictly accurate is it? Do they not rather simply separate out all the different rights that come with copyright today, and allow rights owners to assert those rights they wish to assert, and waive the others — and in a standardised way?

JH: You’re right — open licensing is not a new type of copyright — it’s the exact same copyright bundle of rights that the RIAA or the MPAA uses to enforce their rights. Open licensing just structures the relationship differently by giving broad permissions up front for the work with few restrictions, while the typical licensing approach is often to have broad restrictions and limited permissions.

Using public licences helps standardise, and form a community around, the various open licences. That increases their adoption and lowers the barrier to using openly licensed material: understand a single licence once and you know your obligations across a broad range of works.

One has to be careful when using the term “waiver” however — waiver means giving up your rights, i.e. you no longer have the right. A licence means that you still have the right, but give permission for certain types of use.

Open licences don’t normally waive other rights — they license them. By contrast, public domain dedications (the PDDL or CC0, for example) are primarily waivers — because they try to help people totally give up their rights in copyright (and database rights).

Licence proliferation

RP: We mentioned the GPL and the CC licences, but there are also open source licences like the BSD, the Artistic Licence, the Apache Licence, the Mozilla Public Licence, and the Microsoft Public and Microsoft Reciprocal Licences. Some argue that there are now simply too many alternative licences. What are the issues associated with licence proliferation, and what is the solution (if there is one)?

JH: The main issue with licence proliferation is interoperability. Some open licences aren’t legally interoperable with others, and so licence silos can be created.

There’s no easy solution to this, though using a licence that plays well with lots of others (such as the BSD family of licences, CC-BY, ODC-Attribution, and of course the public domain tools) helps ensure broad interoperability.
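To illustrate the silo effect, here is a toy model (an editorial simplification; the entries are indicative only, not a statement of the actual legal compatibility rules of these licences). Each licence maps to the set of licences under which derived material may be re-released; two works can be combined only if those sets overlap.

```python
# A toy, indicative model of licence silos (NOT real legal compatibility
# rules). COMPAT[x] is the set of licences under which material received
# under x may be re-released. Copyleft licences map mostly to themselves,
# which is how silos arise; permissive licences flow into many others.

COMPAT = {
    "CC-BY":    {"CC-BY", "CC-BY-SA"},  # attribution-only: plays well with others
    "CC-BY-SA": {"CC-BY-SA"},           # share-alike: must stay share-alike
    "GPL":      {"GPL"},                # software copyleft: its own silo
    "BSD":      {"BSD", "GPL"},         # permissive code can flow into GPL projects
}

def can_combine(lic_a: str, lic_b: str) -> bool:
    """Can works under lic_a and lic_b be merged under some common licence?"""
    return bool(COMPAT[lic_a] & COMPAT[lic_b])

print(can_combine("BSD", "GPL"))       # True: permissive feeds into copyleft
print(can_combine("CC-BY-SA", "GPL"))  # False: two copyleft silos, no overlap
```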

RP: How can people find their way through the jungle of alternative licences now available? How can they know what licence is appropriate for them?

JH: There are lots of resources available online about the various free and open licences. When considering a licence for something new, the best place to start is not with a licence but with a question. Ask yourself, or ask the business: what is the goal you are trying to accomplish?

Building on those answers and the type of material involved (data, software, content), you can then pick the open licence that best fits those goals.

RP: If one were trying to sketch out a rough guide to the key characteristics of the different types of alternative licence, how would one go about it? For instance, people use terms like free, open, and gratis vs. libre, and they talk about “share alike”. Is this not overly confusing?

JH: I see very little difference from a legal perspective when people talk about free vs. open vs. libre. They all use copyright to accomplish broadly the same goals (attribution, copyleft/share alike) and so it’s more a social/political aspect rather than a straight legal distinction. The incorporation of libre into the debate of course produces a great acronym to discuss it all — FLOSS licensing!

RP: The GPL is often referred to as being “viral”. What does that mean in practice?

JH: “Viral” is a poor word for it, as it implies that, like a real virus, you have no choice about being “infected”. The term is used for what has been variously called copyleft, share-alike, or reciprocal licensing.

It’s a pretty simple concept really — if you build on someone else’s work, you have to release your contributions under the same licence they used. It’s kind of like the golden rule, but for software licensing.

RP: So people opt to embrace copyleft; it is not something foisted on them involuntarily?

JH: Right. And it is voluntary because there is nothing that forces you to use the work of other people — it’s a choice. Just like if I choose to use software from a proprietary software vendor they will typically have all sorts of restrictions on what I can and can’t do with the code; and if I don’t like it I can use an alternative, or not use it at all.

I think the key problem for people who describe copyleft as “viral” is simply one of control — compared with other IT contracting they (often) don’t have the option to negotiate different terms to the licence and so see it as forcing them to do something that they’d prefer not to do.

Social and political issues

RP: As you said, there are also social and political issues at work here. As a result, there have been a number of ideological disputes about the new-style licences. Free software advocates, for instance, have criticised some of the Creative Commons licences, and indeed some have criticised the entire political logic of CC. Can you explain the background to this, and whether the issue has been settled?

JH: I don’t want to put words into anyone’s mouth, and trying to sum up the number of disputes out there quickly wouldn’t do them justice. Like anything, CC has its flaws and its benefits, and in any free (as in libre) society — and especially as part of an overall open movement — these should be discussed.

RP: Some argue that the problem with Creative Commons is that it seeks to work around the copyright system rather than reform it, and so could end up bolstering an IP system that many feel has got out of control. Is this a valid criticism? Could it perhaps also be said of free software licences like the GPL?

JH: To some, CC is an escape valve letting off just enough steam to prevent the copyright boiler from exploding, when they would rather the whole thing exploded and so made it necessary to rewrite copyright law.

In this view, CC prevents some critical legal reform by giving solutions to people who otherwise would be doing all the things necessary to get legislators moving faster. It’s certainly a valid criticism but I think it’s safe to say they’ve lost that fight. We have CC licences; they’ve been ported worldwide; and so they are in use globally in a wide variety of contexts.

I think that CC might actually work in the opposite direction to this argument — by making copyright law more accessible to people, and so helping them understand the sometimes negative impact that copyright can have on their daily lives, it may make more people politically active in this area. Who knows? It could be an interesting research topic, and either way I’m sure we’ll find out in the next couple of years who was right.

RP: I wonder if perhaps one of the biggest problems posed by the copyright system today is the so-called orphan works phenomenon — which flows from the fact that in most, if not all, jurisdictions copyright now comes into effect the moment a work is created or expressed. Is this a serious problem? If so, what is the solution?

JH: I see the main cause of the orphan works problem to be that automatic copyright (the default baked into international treaties) lasts so long, not so much that it is automatic in the first place. Many jurisdictions have terms of life + 70 years, and the real orphan works problem starts way down the line when no one remembers, or has any records of, who the actual author or rights holder is. So while we may know roughly when a work was created, and so whether or not it is in copyright, we just don’t know who holds the rights.

Orphan works are a serious problem mainly because they represent such an unknown risk: if you don’t know who the rights holder is, there is no chance of acquiring a licence. Moreover, the seemingly ever-increasing penalties for copyright infringement constantly raise the stakes. This poses a really serious risk for cultural heritage institutions — our collective memory — which hold lots of interesting material that they’re not sure what to do with because of its unknown copyright status.

As to a solution, there are many options being discussed, from more radical suggestions like having a very short copyright term, to proposals that would work within the current framework such as compulsory licensing. Who knows where we’ll end up, but I think it’s becoming clearer to legislators throughout the world that something must be done.

Still very much up for debate

RP: One problem that surely won’t go away any time soon for individuals creating copyrighted works is that if someone infringes their copyright there is little they can do about it unless they have access to a lot of money — because access to the law is usually very expensive. Would you agree?

JH: I don’t think I’d agree with that at all. Of course a lot depends on the specific facts and jurisdiction. But just sending people an email asking them to comply can often get results, and that is free! And in many jurisdictions, legal counsel may take a copyright case on contingency if it needs to proceed further.

Access to justice is an issue across many areas in the law of course, and not an issue exclusive to copyright law.

RP: Critics argue that some courts may not recognise licences like the GPL and the CC licences. There have now been a number of cases involving these licences. What do we learn from them so far as enforceability is concerned?

JH: While there are a few cases now, I think the focus for enforcement of FOSS licences is and always has been on the community of practice built up around the licences by both the FOSS community and business users.

Eben Moglen and Richard Stallman often describe the GPL as the constitution of the free software movement. I think that’s a very apt description, as like a constitution and a society, there’s lots of enforcement through social norms and norms of interpretation.

The same is true for the CC licences — simply having such a large community of users, including businesses, helps enforcement. Naming and shaming those who don’t meet these norms gets lots of people to come into compliance.

RP: You said earlier that when technology changes some people try to hold on to the old model, and others embrace the new model. Since IP law has become deeply embroiled in the often-heated discussions about appropriate models, and since — like any law — it determines what people can and cannot do, and so what models are possible, some believe that it has become part of a much larger struggle between open and closed models. If that is right, then presumably one of those models will eventually win. Do you agree? If so, how will the struggle end? And what model do you think will win?

JH: I don’t see open and closed models as a (huge) battle that one side will win or lose. Open licensing is a legal tool that is also associated, to various degrees, with various social and political movements. Open licensing models, and approaches like open innovation, are one option among many, and I think it will stay that way for a long time.

Copyright law will likely change, and technology will change and impact the need and rationale both for copyright law specifically and many other areas of law generally. In the end, however, I fundamentally believe in a role for copyright and IP law in society. What that role is and how it’s played is still very much up for debate.

Monday, October 04, 2010

Silke Helfrich on the commons and the upcoming International Commons Conference

As more and more of the world’s population has gained access to the Internet so a growing number of free and open movements have appeared — including the free and open source software movements, free culture, creative commons, open access and open data.

Once these movements became widely visible — and successful — people were keen to understand their significance, and establish what, if anything, they have in common. Today many observers maintain that they share very similar goals and aspirations, and that they represent a renaissance of “the commons”.

There is also an emerging consensus that, contrary to what was initially assumed, this renaissance is not confined to the Internet, and digital phenomena, but can also be observed in the way that some physical products are now manufactured (e.g. by advocates of the open source hardware movement) and in the way that many are now recommending the natural world be managed.

For instance, argue self-styled “commoners”, when local farmers establish seed banks in order to preserve regional plant diversity, and to prevent large biotechnology companies from foisting patent-protected GMO crops on them, their objectives are essentially the same as those of free software developers when they release their software under the General Public Licence: Both are attempting to prevent things that rightfully belong in common ownership from being privatised — usually by multinational companies who, in their restless pursuit of profits, are happy to appropriate for their own ends resources that rightfully belong to everyone.

Once understood in this broader context, commoners add, it becomes evident that the free and open movements have the potential to catalyse radical social, cultural and political change; change that, in the light of the now evident failures of state capitalism (demonstrated, for instance, by the global financial crisis), is urgently required.

Larger movement

In order to facilitate this change, however, commoners argue that the free and open movements have to be viewed as component parts of the larger commons movement. In addition, it is necessary to embrace and encompass the other major political and civil society groups focused on challenging the dominance of what could loosely be termed the post-Cold War settlement — including environmentalism, Green politics, and the many organisations and initiatives trying to address both developing world issues and climate change.

But to create this larger movement, says Jena-based commons activist Silke Helfrich, it will first be necessary to convince advocates of the different movements that they share mutual objectives. As they are currently fragmented, their common goals are not immediately obvious, and so it will be necessary to make this transparent. Achieving this is important, adds Helfrich, since only by co-operating can the different movements hope to become politically effective.

To this end Helfrich is currently organising an International Commons Conference that will bring together over 170 practitioners and observers of the commons from 34 different countries.

To be held at the beginning of November, the conference will be hosted by the Heinrich Böll Foundation in Berlin.

The aim of the conference, says Helfrich, is to spark “a breakthrough in the international political debate on the commons, and the convergence of the scholars studying the commons and the commoners defending them in the field.” Helfrich hopes this will lead to agreement on a “commons-based policy platform”.

What is the end game? Nothing less, it would appear, than a new social and political order. That is, a world “beyond market and state” — where communities are able to wrest back control of their lives from faceless, distant government and from rootless, heartless corporations.

As Helfrich puts it, “the essential ideals of state capitalism — top-down government enforcement and the so called ‘invisible hand’ of the market — have to be marginalised by co-governance principles and self-organised co-production of the commons by people in localities across the world.”

Well qualified

Helfrich is well qualified to organise such a conference. She has already run three conferences on the commons, and she has a deep understanding of development politics. Between 1999 and 2007 she was in charge of the regional office of the Heinrich Böll Foundation for Central America, Mexico and the Caribbean — where she focused on globalisation, gender issues, and human rights.

Since her return to Germany in 2007, Helfrich has developed an international reputation for commons advocacy through her German-language CommonsBlog, and she moderates an interdisciplinary political salon called “Time for the Commons” at the Heinrich Böll Foundation.

She has also written many articles and reports on the commons for civil society organisations, and recently edited an anthology of essays on the commons called To Whom Does the World Belong? The Rediscovery of the Commons.

Helfrich explains the background and purpose to the International Commons Conference in more detail below.

The interview begins …

Silke Helfrich

RP: Why did you become interested in the commons?

SH: I was born in East Germany, and when the wall came down in 1989 I was 22 and had just finished my studies. Then I lived for more than eight years in El Salvador and Mexico, both of which are extremely polarised countries so far as the distribution of wealth is concerned.

So I've experienced two very different types of society: one in which the state is the arbiter of social conditions and of the ways in which citizens can participate in their society; and, after 1989, one in which access to money determines one's ability to participate in society.

It has also always been my belief that democracy should involve much more than simply having free elections and then delegating all responsibility to professional politicians. We need to radically democratise the political, social and economic sphere — and we need a framework for doing so which is beyond both the market and the state. That, in my view, is precisely what the commons is all about.

RP: Can you expand on your definition of the commons, and on its potential?

SH: The commons is not a thing or a resource. It’s not just land or water, a forest or the atmosphere. For me, the commons is first and foremost constant social innovation. It implies a self-determined decision making process (within a great variety of contexts, rules and legal settings) that allows all of us to use and reproduce our collective resources.

The commons approach assumes that the right way to use water, forests, knowledge, code, seeds, information, and much more, is to ensure that my use of those resources does not harm anybody else’s use of them, or deplete the resources themselves. And that implies fair-use of everything that does not belong to only one person.

It's about respect for the principle "one person — one share", especially when we talk about the global commons. To achieve this we need to build trust, and strengthen social relationships, within communities.

Our premise is that we are not simply "homo economicus" pursuing only our own selfish interests. The core belief underlying the commons movement is: I need the others and the others need me.

There is no alternative today.

Convergence

RP: Would it be accurate to say that the commons encompasses components of a number of different movements that have emerged in recent years, including free and open source software (FOSS), Creative Commons, Green politics, and all the initiatives focused on helping the developing world etc.?

SH: That's right.

RP: Has it been a natural process of convergence?

SH: From a commoner’s perspective it is a natural process, but it is not immediately obvious that the different movements and their concerns have a lot in common.

RP: How do you mean?

SH: Let me give you an example: When we started to work on the commons in Latin America about six years ago we were working mainly with the eco- and social movements, who were critical of the impact that globalisation and the free trade paradigm were having. A colleague suggested that we should invite people from the free software movement to take part in our discussions.

While we did invite them, our first thought was: What does proprietary software have in common with genetically modified organisms (GMOs)? Or, to put it the other way round, what does the free software movement stand for, and what could it possibly have in common with organisations fighting for GMO free regions? Likewise, what could it have in common with community supported agriculture (CSA), and with movements devoted to defending access to water and social control over their biotic resources?

But we quickly realised that they are all doing the same thing: defending their commons! So since then we have become committed to (and advocate for) the "convergence of movements".

RP: For those who have been following the development of the Internet, much of the debate about the commons has emerged from the way in which people — particularly large multinational companies — have sought to enforce intellectual property rights in the digital environment. In parallel there has been a huge debate about the impact of patents on the developing world — patents on life-saving drugs, for instance, and patents on food crops. But seen from a historical perspective these debates are far from new — they have recurred throughout history, and the commons as a concept goes back to well before the infamous enclosures that took place in England in the 15th and 16th centuries.

SH: That’s right. So to some extent we are talking about the renaissance of the commons.

And the reason why free software developers are engaged in the same struggle as, say, small farmers, is simple: when people defend the free use of digital code, as the free software movement does, they are defending our entitlement to control our communication tools. (Which is essential when you are talking about democracy).

And when people organise local seed-banks to preserve and share the enormous seed variety in their region, they too are simply defending their entitlement to use and reproduce the commons.

In doing so, by the way, they are making use of a cornucopia — because in the commons there is abundance.

RP: Nowadays we are usually told to think of the natural world in terms of scarcity rather than abundance.

SH: Well, even natural resources are not scarce in themselves. They are finite, but that is not the same thing as scarce. The point is that if we are not able to use natural collective resources (our common pool resources) sustainably, then they are made scarce. By us!

The commons, I insist, is above all a rich and diverse resource pool that has been developed collectively. What is important is the community, or the people’s control of that resource pool, rather than top-down control. Herein lies the future!

That is precisely what awarding the Nobel Prize in Economics to Elinor Ostrom in 2009 was all about [On awarding the Prize, The Royal Swedish Academy of Sciences commented: “Elinor Ostrom has challenged the conventional wisdom that common property is poorly managed and should be either regulated by central authorities or privatised”].

It is also what the Right Livelihood Award [the so-called Alternative Nobel Prize] is all about.

Make transparent

RP: Ok, so we are saying that a lot of different movements have emerged with similar goals, but those similarities are not immediately obvious?

SH: Correct. So it is important to make them transparent. The global movement of commoners today is eclectic and growing, but fragmented.

For instance, we can see a number of flourishing transnational commons movements (e.g. free software, Wikipedia, open access to scholarly journals etc.) — all of which are from the cultural and digital realm, and all of which are based on community collaboration and sharing.

Many other commons projects, however, are modest in size, locally based, and focused on natural resources. There are thousands of them, and they provide solutions that confirm the point ETC’s Pat Mooney frequently makes: “the solution comes from the edges”.

Right now these different groups barely know each other, but what they all have in common is that they are struggling to take control of their own lives.

Taken together all these movements are actually part of a big civic movement that is about to discover its own identity, just as the environmental movement did some 30 or 40 years ago.

Co-operation is the best way for them to grow and become politically relevant. So the goal should be to persuade the various advocates that they have much to gain from working together.

RP: Would you agree that the Internet has played an important role in the emergence of these movements?

SH: I would. The Internet has been key in the development of global commons projects like free software and Wikipedia, and it greatly facilitates the sharing of ideas — which is key for becoming politically effective.

So the Internet allows us to cooperate beyond the traditional boundaries; and it allows us to take one of the most productive resources of our age — "knowledge and information management" — into our own hands.

Look at the AVAAZ campaigns, for instance. The number of people they are able to connect with and mobilise is amazing. [In three years, Avaaz has grown to 5.5 million members from every country on earth, becoming the largest global web movement in history].

One problem, however, is that many communities who are heavily reliant on web-based technologies are not really attuned to the fact that the more we access these kinds of technologies the more we tend to overuse our natural common pool resources. So I think we need to understand that "openness" in the digital realm and "sustainability" in the natural realm need to be addressed together.

RP: Can you expand on that?

SH: We need more than just free software and free hardware. We need free software and free hardware designed to make us independent of the need to acquire a constant stream of ever more resource-devouring gadgets.

So instead of going out every three years to buy a new laptop packed with software that requires paying large license fees to corporations, who then have control over our communication, we should aim to have just one open-hardware-modular-recyclable-computer that runs community-based free software and can last a lifetime.

This is quite a challenge, and it is one of the many challenges we will be discussing at the International Commons Conference. One of the key questions here is this: Is the idea of openness really compatible with the boundaries of (natural) common-pool resources?

Overall objective

RP: What is the overall objective of the International Commons Conference?

SH: To put it modestly (SMILE), the aim is to achieve a breakthrough in the international political debate on the commons, and a convergence of the scholars who are studying the commons and the commoners who are defending them in the field.

We believe that the conference will foster the planning and development of commons-based organisations and policy, as well as their networking capacity. And we hope that by the end of the conference a set of principles and long-term goals will have emerged.

The whole endeavour (or should I say adventure? SMILE) will surely contribute to what my colleague Michel Bauwens — co-organiser of the conference — calls “A Grand Coalition of the Commons”.

RP: I note that there is no dedicated web site or pre-publicity for the conference. And it is by invitation only. Is that because there is not yet a fully articulated consensus on the commons and its potential?

SH: No, we have a much better reason: There has been no need for pre-publicity for the conference. On the contrary, as I frequently find myself having to explain to people, the response to our first "save-the-date-call" for the conference was so overwhelmingly positive that we quickly realised we would be fully booked without any publicity. And in fact we are now more than fully booked.

The conference is by invitation only because we designed the conference programme for those who are already very familiar with the commons, be it through analysing the commons or through producing the commons. Consequently all our participants are specialists. Indeed, each one of them would be qualified to deliver a keynote at the conference.

In other words, what we have designed is a networking conference for commoners from all over the world — and over 170 people from 34 countries have registered. That is quite an achievement, and has only been limited by the availability of space and resources.

I hope, however, that we’ll have a real World Commons Forum within a year or so (SMILE).

Window of opportunity

RP: Do you think the current global financial crisis has opened a window of opportunity for "commoners", as they refer to themselves?

SH: I think so. The current crisis (which is not just a financial crisis, by the way, but multiple crises) graphically demonstrates that we cannot leave policy issues to the politicians, money-related issues to the bankers, or our commons to the market or the state. It's ours!

It also showed quite clearly that the game is over. What is required is not simply a few new rules to allow a further round of the same old game, but a totally new framework; one that forges a new relationship between commons, state and market.

RP: What would this new relationship look like? Is the commons in competition with the state and the market, or do you see it working alongside these two key power brokers?

SH: For me the phrase "a commons beyond market and state" does not necessarily mean without market and state: commons conceived of as complex systems of resources, communities and rules need very different governance structures. Indeed, some of them will be so complex that a certain governmental institutional structure will be needed — what one might call a Partner State.

One thing, however, is key: the people who depend on these commons for their livelihood and well-being have to have the major stake in all decisions taken about their commons.

Clearly, corporations, companies and co-ops will always meddle with the commons. And whatever they produce they will need our common pool resources as raw material. So the question we need to ask is: what do these players give back to the commons? We cannot allow them to just draw from the commons. The basic principle should be: Whoever takes from the commons has to add to them as well.

In other words, these external agents must not be able to do whatever they want with collective resources. Exclusive, exclusionary private property rights in the commons cannot exist — as outlined in the Commons Manifesto published on the Heinrich Böll Foundation web site.

RP: Would it be accurate to say that the commons is not just a new political and social movement, but a new intellectual framework for understanding the world, and perhaps a catalyst for a new post-industrial social order?

SH: We are not necessarily talking about a post-industrial order, but it is my conviction that a commons paradigm has to be based on the vision of a post-fossil fuel order.

Nor is it even new — as we agreed earlier. I would say it is an old intellectual framework, but one that has to be constantly re-appropriated from below and “modernised”.

But yes, it’s a framework for understanding the world. And it opens minds for finding creative, collective, practical, and institutional solutions to two pressing problems at the same time. That is, the environmental challenge we face and the social problems we face.

RP: There is a school of thought that says the environmental challenge can be solved by the market.

SH: Yes, but I don’t agree. For example, we cannot simply resolve the ecological crisis by charging more and more for energy (i.e. introducing a market-based incentive in order to lower consumption) — because that is not a solution for the poor.

This reminds us that the essential ideals of state capitalism — top-down government enforcement and the so-called "invisible hand" of the market — have to be marginalised by co-governance principles and self-organised co-production of the commons by people in localities across the world.

FURTHER READING

Silke Helfrich’s CommonsBlog

The Commons Manifesto in English, and in German

A German language news website on the commons

A German-language downloadable copy of To Whom Does the World Belong?

English articles from To whom does the world belong?

A review of To Whom Does the World Belong? By Alain Lipietz

Silke Helfrich articles, interviews and reports

The commons as a common paradigm for social movements and beyond (English)

Web of life (English)

Telepathology: A true medical commons approach (English)

Commons: The network of life and creativity (English)

With Jörg Haas: The commons a new narrative for our times (English)

Interview in taz: Gebrauch Ja, Missbrauch Nein (German)

Wovon wir alle leben (German)

Report with Rainer Kuhlen, Wolfgang Sachs and Christian Siefkes: Gemeingüter – Wohlstand durch Teilen (German)

Wednesday, September 01, 2010

Open Notebook Science: Interview with Jean-Claude Bradley

Jean-Claude Bradley is an organic chemist at Drexel University in Philadelphia. Like most scientists, Bradley used to be very secretive. He kept his research under wraps until publication and frequently applied for patents on his work in nanotechnology and gene therapy.

Five years ago, however, he asked himself a difficult question: was his research having the kind of impact he would like? He had to conclude that the answer was "no", and that this was partly a consequence of the culture of secrecy that permeates research today.

So Bradley determined to be more open. Since his collaborators were not of the same mind, he severed his ties with them and, in 2005, he launched a web-based initiative called UsefulChem.

As the name implies, the aim of the initiative was also to do useful science and, today, Bradley makes new anti-malarial compounds. This is potentially very useful: malaria kills millions of people each year and, since most of those people live in the developing world, large pharmaceutical companies are disinclined to devote much time to developing new treatments.

And in the interests of openness, Bradley makes the details of every experiment done in his lab freely available on the web. He doesn't limit this to a description: he also includes all the data these experiments generate, even for the failed ones.

He named his new technique Open Notebook Science (ONS).

What exactly is ONS?

How does it differ from Open Access (OA)?

What does ONS mean for researchers?

What does ONS mean for publishers?

What does ONS mean for librarians?

What role do institutional repositories have to play in ONS?

Jean-Claude Bradley answers all these questions and more in an interview published in the September issue of Information Today. The interview is freely available here.

Thursday, August 26, 2010

Open Data: The Panton Discussions

If you are interested in the Open Data (OD) movement but unclear about the issues, or what scientists can do to support the movement, what better way of finding out than by talking to leading OD advocates Peter Murray-Rust of the University of Cambridge and Jordan Hatcher of Open Data Commons.

That was what I did last Tuesday as part of a new initiative called the Panton Discussions. The first in a planned series, the event lasted around two hours and took place in the Panton Arms in Cambridge.

Below is a sample of the kind of questions discussed:

* What is Open Data and why is the OD movement important? What is the problem it aims to fix?

* Amongst the OD tools available there is the Public Domain Dedication and Licence (PDDL), a process of Public Domain Dedication and Certification (PDDC), and Creative Commons Zero (CC0). What are these tools, how do they work, and how do they differ?

* Likewise, there is the Science Commons Protocol for Implementing Open Access Data and the Open Knowledge/Data Definition. How do these differ? Why do we need two similar initiatives?

* More recently we have also seen the introduction of the Panton Principles. What does this initiative provide that was not available before?

* Where does Open Data fit with Open Access (OA)?

* Where does Open Science fit in?

* What about Open Notebook Science (ONS)? Where does OD fit with ONS?

* How should scientists go about making their data open? What pitfalls do they need to avoid?

Help sought

Peter hopes to crowdsource the creation of a transcript of the discussion. Jamaica Jones and Graham Steel have both kindly offered to help, but more volunteers would make the task easier, and quicker. Peter can be contacted by email here.

Thursday, August 12, 2010

Preserving the Scholarly Record: Interview with digital preservation specialist Neil Beagrie

One of the many challenges of our increasingly digital world is that of establishing effective ways of preserving digital information — which is far more fragile than printed material. What are the implications of this for the scholarly record, and where does Open Access (OA) fit into the picture?

In a 1999 report for the Council on Library and Information Resources, Jeff Rothenberg, a senior research scientist at the RAND Corporation, pointed out that while we were generating more and more digital content each year, no one really knew how to preserve it effectively. If we didn't find a way of doing so soon, he warned, "our increasingly digital heritage is in grave risk of being lost."

In launching the UK Web Archive earlier this year British Library chief executive Dame Lynne Brindley estimated that the Library would only be able to archive about one per cent of the 8.8 million .co.uk domains expected to exist by 2011. The remaining 99 per cent, she said, was in danger of falling into a "digital black hole".

In the context of Rothenberg's earlier warning Brindley's comment might seem to suggest that very little has changed in the past eleven years so far as digital preservation is concerned. But that would be the wrong conclusion to reach. Rather, it draws attention to the fact that digital preservation is not just a technical issue.

As it happens, many of the technical issues associated with digital preservation have now been resolved. In their place, however, a bunch of other issues have emerged — including legal, organisational, social, and financial issues.

What concerns Brindley, for instance, are not the technical issues associated with archiving the Web, but the barrier that today's copyright laws impose on anyone trying to do so. Since copyright requires obtaining permission from the owner of every web site before archiving it, the task is time-consuming, expensive, and quite often impossible.

Clearly there are implications here for the research community.

State of play

So what is the current state of play so far as preserving the scholarly record is concerned?

First we need to distinguish between two different categories of digital information. There is retro-digitised material, which in the research context consists mainly of data created as a result of research libraries digitising their print holdings — journals, books, theses, special collections etc. Then there is born-digital material — which includes ejournals, eBooks and raw data produced during the research process.

It is worth noting that the quantities of raw data generated by Big Science can be mind-boggling. In the case of the Large Hadron Collider, for instance, CERN expects that it will generate 27 terabytes of raw data every day when it is running at full throttle — plus 10 terabytes of "event summary data".
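A quick back-of-the-envelope calculation (assuming the 10 terabytes of event summary data is, like the raw data, a daily figure) shows the scale those numbers imply over a year of running:

```python
# Rough scale check using the figures quoted above (assuming the 10 TB of
# event summary data is, like the raw data, a daily figure).
raw_tb_per_day = 27
summary_tb_per_day = 10

yearly_tb = (raw_tb_per_day + summary_tb_per_day) * 365
print(f"{yearly_tb:,} TB/year  (~{yearly_tb / 1000:.1f} petabytes)")
# -> 13,505 TB/year  (~13.5 petabytes)
```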

To cater for this deluge CERN has created a bespoke computing grid called the WLCG. While the costs associated with the WLCG will be shared amongst 130 computing centres around the world, the personnel and materials costs to CERN alone reached 100 million Euros in 2008, and CERN's budget for the grid going forward is 14 million Euros per annum.

Of course, these figures by no means represent preservation costs alone, and they are not typical — but they provide some perspective on the kind of challenges the science community faces.

So how is the research community coping with these challenges? With the aim of finding out, the Alliance of German Science Organisations recently commissioned a report (which was published in February).

What were the main findings?

So far as retro-digitisation is concerned, the Report points out that funding is limited and "the quantity of non-digitised material is huge". Even so, it adds, there is general concern about "the sustainability of hosting" the data that has been generated from digitisation. This is a particular concern for small and medium-sized institutions.

With regard to born-digital material the Report found that the largest gaps are currently in the "provision for perpetual access for e-journals".

The situation with regard to eBooks and databases is less clear since, as the Report points out, "experience in digital preservation with these content types is currently more limited."

While the Report focused on the situation in Germany the international nature of today's research environment suggests the situation will be similar in all developed nations (Although Germany does have two unique mass digitisation centres).

We should not be surprised that the German Report found the largest gap to be in the preservation of journal content. As we shall see, the migration from a print to a digital environment has disrupted traditional practices and responsibilities, and led to some uncertainty about who is ultimately responsible for preserving the scholarly record.

We should also point out that one important area that the German Report did not look at is the growing trend for scholars to make use of blogs, wikis, open notebooks and other Web 2.0 applications. Should this data not be preserved? If it should, whose responsibility is it to do it, and what peculiar challenges does it raise? As we have seen, for instance, preserving web content is not a technical issue alone. Amongst other things there are copyright issues. (Although as the research community starts to use more liberal copyright licences these difficulties should ease somewhat).

Another recently published report did look at the issue of web-created scholarly content, but reached no firm conclusion. Produced by the Blue Ribbon Task Force, this Report concluded: "[I]n scholarly discourse there is a clear community consensus about the value of e-journals over time. There is much less clarity about the long-term value of emerging forms of scholarly communication such as blogs, products of collaborative workspaces, digital lab books, and grey literature (at least in those fields that do not use preprints). Demand may be hypothesised — social networking sites should be preserved for future generations — but that does not tell us what to do or why."

Open Access

One issue likely to be of interest to OA advocates is whether institutional repositories should be expected to play a part in preserving research output.

Evidence cited by the German Report suggests that repositories are not generally viewed as preservation tools. It pointed out, for instance, that the Dutch National Library's KB e-Depot currently archives the content hosted in 13 institutional repositories in the Netherlands.

The Blue Ribbon Report, by contrast, appears to believe that repositories do have a long-term archiving role. It suggests, for instance, that self-archiving mandates should always be accompanied by a "preservation mandate".

The Report goes on to suggest that the inevitable additional costs associated with repository preservation should be taken out of the institution's Gold OA fund (where such a fund exists).

##

If you wish to read the rest of this introduction, and the interview with preservation specialist Neil Beagrie, please click on the link below. I am publishing it under a Creative Commons licence, so you are free to copy and distribute it as you wish, so long as you credit me as the author, do not alter or transform the text, and do not use it for any commercial purpose.

If you would like to republish the interview on a commercial basis, or have any comments on it, please email me at richard.poynder@btinternet.com.

To read the rest of the introduction and the interview with Neil Beagrie (as a PDF file) click here.

Monday, August 02, 2010

University of Ottawa Press launches OA book initiative

Last week the University of Ottawa Press (UOP) announced a new open access (OA) book initiative. This, it says, will provide "free and unrestricted access to scholarly research". But what does it mean in practice? And what issues arise?

UOP's new initiative is part of a wider open access strategy first unveiled last December. Initially it will consist of making 36 French-language and English-language in-print titles in the arts, humanities and social sciences freely available online via the University of Ottawa's institutional repository (IR), uO Research.

The UOP news is of interest for a couple of reasons.

First, until relatively recently open access was seen as an issue of relevance only to scholarly journals, not books, and to the sciences rather than the humanities.

It is only in the last few years, for instance, that new OA publishers like Bloomsbury Academic, Open Humanities Press (OHP), and re.press have appeared on the scene; and only recently that traditional publishers and university presses have started to introduce OA book initiatives — e.g. The University of Michigan Press' digitalculturebooks project and Penn State University Press' Romance Studies.

Second, unlike Bloomsbury Academic, OHP, re.press, and the University of Michigan, UOP has not released its OA books under Creative Commons licences, but has simply placed the text in a PDF file with the original "all rights reserved" notice still attached to it. (E.g. in this 24 MB file).

UOP's move suggests that traditional presses can no longer afford to ignore the rising OA tide — despite the fact that there is still no tried and trusted business model for OA books. It also demonstrates that there is as yet no consensus on how best to go about it, or what to do about copyright.

The latter issue could prove a source of some confusion for readers of UOP's books.

Libre/Gratis

For instance, anyone who read UOP's announcement that it is providing its books on a "free and unrestricted access" basis and then downloaded one of the books would surely scratch their head on seeing the all rights reserved notice attached to it.

While they could be confident that they were free to read the book, they might wonder whether they were permitted to forward it to a colleague. They might also wonder whether they were free to print it, whether they could cut and paste text from it, or whether they were permitted to create derivative versions.

Free and unrestricted access would seem to imply they could do all those things. All rights reserved suggests quite the opposite — indeed, a copyright lawyer might argue that even downloading a book infringes an all-rights-reserved licence.

It does not help that there appears to be no terms and conditions notice on the UOP web site clarifying what readers can and cannot do with the books — as there is, for instance, on PSU's Romance Studies site.

In fact, UOP is only granting permission for people to read, download and print the books.

But it need not be that confusing. OA comes in different flavours, and what UOP is offering is what OA advocates call Gratis OA (that is, it has removed the price barriers); it is not offering Libre OA (which would require removing permission barriers too — i.e. relaxing the copyright restrictions).

Gratis OA is a perfectly legitimate way of providing OA, so long as you make it clear that that is what you are offering. Some, however, might argue that there is a contradiction between what UOP says it is offering and the true state of affairs — that the publisher is claiming to offer something that it is not.

"While there's nothing deceptive in using the term 'OA' for work that is Gratis OA, there is something deceptive in using language suggesting Libre OA for work that is Gratis OA," the de facto leader of the OA movement Peter Suber commented when I asked for his views. Stressing that he has not yet looked at the details of the UOP initiative, Suber added: "The phrase 'unrestricted access' suggests Libre OA."

There is no reason to doubt UOP's motives: it believes that using the term free and unrestricted is accurate given that the OA books do not come with DRM, and "any user with a computer can access the books, download them and read them freely".

Nevertheless, it does seem to be sending out a confusing message. And when putting content online publishers should really aim to be as precise as possible in the terms they use, and the claims they make — particularly in light of the many copyright controversies that have arisen in connection with digital content. UOP has surely failed to do this.

Suber would perhaps agree. "I realise that most people aren't familiar with the Gratis/Libre distinction", he emailed me. "But at the same time, people who do understand the distinction should use it, and could help everyone by describing the Ottawa position accurately. If it's Gratis and not Libre (which I haven't had time to check), then it should be described as Gratis."

We might however add that some OA advocates believe Gratis OA to be an inadequate way of making research available online. And it is noteworthy here that in his definition of OA, Suber assumes Libre OA to be the default. Open access literature, he states, is "digital, online, free of charge, and free of most copyright and licensing restrictions."

Some would doubtless claim that worrying about such matters is a non-issue. After all, they might say, aside from reading it, what more could you possibly want to do with a book? So why does it matter whether you make it available online as Gratis OA or Libre OA?

But a few years ago exactly this issue led to some heated debates in connection with making scholarly papers OA, with many insisting that worrying about such matters was a complete irrelevancy — until Peter Murray-Rust pointed out that in a Web 2.0 environment there are very good reasons for providing re-use rights to scholarly work.

Indeed, it was as a result of that long-running debate that the movement eventually hammered out the Gratis/Libre distinction.

It is, of course, early days for book publishers, who are still in experimental mode vis-à-vis OA. But they would surely benefit from reviewing some of the debates that have taken place in connection with providing OA to refereed papers.

In order to get UOP's views on these matters I contacted the publisher's eBook Coordinator Rebecca Ross, who kindly agreed to an email interview. Below are her answers.

The good news is that UOP does hope to adopt Creative Commons licences in the future!


Rebecca Ross, UOP eBook Coordinator

RP: I understand that UOP has made 36 of its books available on an OA basis and these can be accessed via the University's institutional repository. Is there a list of these books you can point me to?

RR: You can browse the books by title here.

RP: Where can people obtain more information about UOP, and its activities?

RR: Unfortunately the UOP website is in a state of transition with a new website launching very soon. To give you a bit of background about UOP, we are Canada's oldest French-language university press and the only fully bilingual (English-French) university press in North America.

RP: How many books does UOP publish each year, and what kinds of books does it publish?

RR: UOP was founded in 1936 and has published over 800 titles. We currently publish 25-30 books annually in four main subject areas: social and cultural studies, translation and interpretation, literature and the arts, and political and international affairs.

RP: How did you choose which books to make OA?

RR: The books were chosen based on input from Michael O'Hearn (UOP Director), Eric Nelson (Acquisitions Editor), Marie Clausén (Managing Editor), Jessica Clark (Marketing Manager) and myself, in a collaborative process to determine a collection of books that are diverse in terms of language, date published, and subject matter.

This will help UOP best determine the books that work effectively as open access. For example, we want to test questions like: does an 800-page collected work about social policy work better as OA than a monograph about Canadian literature?

We also wanted to test if an electronic open access version gives a second life to the print edition or generates interest in a second edition. In this sense, open access is also a marketing tool for us to reach a wider audience than traditional marketing.

In our decision process we also made sure to include books whose authors would be amenable to licensing their work open access (we have several authors who are very excited by having their work as open access!), and to include topics that are relevant, timely and even timeless (for example a reappraisal of Stephen Leacock's work).

Free and unrestricted access

RP: The UOP press release says that the books are being made available on a "free and unrestricted access" basis. What does that mean?

RR: All of the books included in the open access collection are protected by copyright. UOP does not support DRM or restrictive access to our eBooks, whether they are part of the open access collection or for sale.

RP: The books are not being made available under Creative Commons licences are they?

RR: The books are not under Creative Commons. The authors granted a non-exclusive distribution license to the Press for providing access via uO Research.

RP: Many OA advocates might argue that OA implies using creative commons licensing. You don't agree?

RR: Where possible, UOP is very interested in moving forward with Creative Commons licensing. We're learning from our colleagues at Athabasca University Press, who publish, where possible, using a Creative Commons license (Attribution-Noncommercial-No Derivative Works 2.5 Canada).

The decision to use the current licensing model was made to best align UOP with the University of Ottawa Library and the University’s institutional repository uO Research.

RP: I do not think it says anywhere on your site exactly what users can do with the books. Anyone downloading the files will see a traditional "all rights reserved" notice attached. The UOP announcement, however, says that the works are available on a free and unrestricted access basis. Readers might therefore wonder what exactly they are permitted to do with the text — whether, for instance, they can print the books out, whether they can freely copy and distribute them, whether they can cut and paste text from them, and whether they can create derivative versions. What exactly can they do?

RR: So far as the open access collection is concerned, users can read the books, download them and print them.

Without DRM we are unable to control what exactly users do with the books, but as I said, they are protected by copyright.

In the end, we are pleased that users are accessing our content and our authors are pleased that their research is reaching a wider audience.

RP: Would you agree that you are offering the books Gratis OA rather than Libre OA? That is, you have removed the price barriers, but not the permission barriers?

RR: Describing UOP's open access collection (as it is now) as Gratis OA rather than Libre OA is both fair and accurate. We've removed the price barrier as a first step; the next step will be working with our authors and editors to remove the permission barriers.

RP: Do you not think that saying the books are being offered on a free and unrestricted access basis might be a slight overstatement? Does not "unrestricted access" imply the removal of both price and permission barriers?

RR: When compared to print books offered at sometimes very high and restrictive prices and made available only in certain parts of the world, I don't think the description of "free and unrestricted access" is an overstatement.

UOP's open access books are free, and their access is unrestricted: any user with a computer can access the books, download them and read them freely. At this stage UOP open access books are protected by copyright: this is partly for us and partly for our authors.

Once UOP's open access program has been fully defined and the level of support we will receive from our host institution is determined, we will be in a better position to remove permission barriers.

In conceptualising UOP's open access program the first objective was to provide a wider reach for our authors and books. Most of our authors write, not to make a living, but to further scholarship and research in their fields; allowing their work to be distributed for free is an excellent way to do so.

As I said, the next step for UOP's open access program will be working in collaboration with our authors and the University of Ottawa Library to remove the remaining permission barriers. We are looking into Creative Commons and defining what it means to offer UOP books as open access.

Right now we are very excited to be involved with open access and looking forward to the next steps of the project.

RP: Would you say you were offering the books as Green OA or Gold OA, or do such distinctions only make sense in the context of journals?

RR: As it stands right now, these distinctions seem appropriate only in the context of journals.

If I had to make the distinction I would say we fall into Green OA because of our participation in uO Research, the University of Ottawa's institutional repository.

When preparing and researching for the open access program we found that much of the literature is about open access for journals and many university presses, both in Canada and the United States, are just starting to think about how open access can work for books.

Still at an early stage

RP: Does UOP believe that OA is an inevitable development for scholarly monographs?

RR: The University of Ottawa announced its open access program in late 2009. This includes support to UOP in publishing a collection of OA books. Although there is much research surrounding open access in academic journals, open access book publishing is still at an early stage.

UOP launched this open access collection to determine the effects of open access on our publishing program, to eventually determine what kind of support we require to become an open access press.

It would be difficult to say with certainty that OA is now an inevitable way for scholarly monographs to be published in the future, but it does appear that way, and UOP is interested in testing and researching this notion.

At this stage, it is UOP's assumption that open access will only suit certain books; for example, we are not including any textbooks in the open access collection. However, this assumption is based on previously published books, and going forward open access will be an important aspect of UOP's acquisition procedure and publishing program.

RP: When I interviewed Northwestern University Dean Sarah Pritchard about Northwestern University Press earlier this year I suggested that the model many advocates see for OA books is that of making the text freely available online but selling the print version. Pritchard replied that she saw that as a very logical model, and one that she envisages NUP adopting before it moves to a totally OA environment. Is that your view too?

RR: Absolutely. We are in the business of publishing books both in print and electronically. At the moment we are borrowing models and ideas from many of our colleagues within Canada and the US, including Athabasca University Press and the International Development Research Centre.

Sarah Pritchard brings forward many important issues that are relevant to us at UOP. I do believe that electronic versions of print books will drive print sales — that assumption is the backbone of our open access program.

The wider the distribution an author or a publisher has, the better the chance for course adoptions, sales and even translation rights. The model UOP has adopted is a hybrid model: we are doing a bit of everything right now, and we will continue experimenting to see what fits best.

RP: Does UOP pay its way today, or is it subsidised by the University? Can you see OA affecting the current state of affairs?

RR: UOP is subsidised by the University. Our publications are too specialised to make the best-seller list; alas, we will never become a cash cow for our home institution! This is a bit of an experiment.

We don't know if OA will have a negative effect on the sales of the print version or if it will encourage people to buy the print version, especially in the case of single-authored volumes containing long and complex arguments — books like these are likely easier to read in the traditional paper format than on a computer screen.

In either case the level of support the University provides its Press will change accordingly.

Monday, June 28, 2010

Free our data: For democracy's sake

The open data movement is growing apace. What better demonstration of this than news that the UK coalition government is making its Combined Online Information System (COINS) freely available on the Internet, inviting people not only to access the data but to re-use it too?

COINS, The Guardian newspaper points out, is one of the world's biggest government databases and provides "the most detailed record of public spending imaginable. Some 24m individual spending items in a CSV file of 120GB presents a unique picture of how the [UK] government does its business."
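To give a sense of what "re-use" might mean in practice, here is a minimal sketch in Python of how a developer might stream a file of that size and total spending by department, without ever loading all 120GB into memory. The file name and column names are hypothetical; the real COINS release has its own schema and would need unpacking first.

```python
import csv
from collections import defaultdict

# A minimal sketch: stream a very large spending CSV row by row,
# so the whole file never has to sit in memory at once.
# "coins.csv" and the column names are hypothetical; the real
# COINS release uses its own schema and field names.
totals = defaultdict(float)

with open("coins.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        try:
            amount = float(row["Amount"])
        except (KeyError, ValueError):
            continue  # skip malformed or non-numeric rows
        totals[row.get("Department", "Unknown")] += amount

# Print the ten biggest spenders
for dept, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{dept}: {total:,.2f}")
```

Because each row is processed and discarded in turn, the same approach works on a laptop or behind a public-facing web service.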

For The Guardian the release of COINS marks a high point in a crusade it began in March 2006, when it published an article called "Give us back our crown jewels" and launched the Free Our Data campaign. Much has happened since. "What would have been unbelievable a few years ago is now commonplace," The Guardian boasted when reporting on the release of COINS.

Why did The Guardian start the Free Our Data campaign? Because it wanted to draw attention to the fact that governments and government agencies have been using taxpayers' money to create vast databases containing highly valuable information, and yet have made very little of this information publicly available.

Where the data has been made available access to it has generally been charged for. Moreover, it has usually been released under restrictive copyright licences prohibiting redistribution, and so preventing third parties from using it to create useful new services.

The end result, The Guardian believes, is that the number and variety of organisations able to make use of the data has been severely curtailed and innovation stifled. As the paper explains, "Making that data available for use for free — rather as commercial companies such as Amazon and Google do with their catalog and maps data — would vastly expand the range of services available."

And it is this argument that has become the rallying cry of the burgeoning open data movement.

But is it true? Is there evidence to demonstrate that by keeping publicly-funded data under wraps governments have been stifling innovation? And does the free availability of government data inevitably lead to a flowering of new information products and services?

The economic argument

In the hope of answering these questions Italian open data advocate Marco Fioretti will shortly be undertaking a research project in conjunction with the Laboratory of Economics and Management (LEM) at the Sant'Anna School of Advanced Studies in Pisa.

The project will form part of an EU-funded work package entitled "Open Data for an Open Society", itself part of a larger initiative called Dynamics of Institutions and Markets in Europe (DIME). DIME is sponsored by the 6th Framework Programme of the European Union.

Explaining the background to Fioretti's project the manager of DIME's open data package, and Associate Professor of Economics at the Sant'Anna School, Dr Giulio Bottazzi says: "We ran an open call for the assignment of part of the activity, specifically the design and realisation of a survey to help assess the present situation concerning the 'openness' of data management in public institutions. Dr Fioretti applied for and obtained the contract — essentially because of his CV and his publication record in the area of open source software and open document standards."

Importantly, Fioretti has a special interest in how better use can be made of public sector information (PSI). And he believes that simply making public data freely available is insufficient: it also needs to be made available using open standards and open licences. In other words, making digital information freely available is only half the task. Unless it is released in non-proprietary formats, and licensed in a way that allows adaptation and re-use, its usefulness is significantly curtailed.
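To illustrate why format matters as much as availability, here is a small sketch, again with hypothetical file and field names: records published as plain CSV can be repurposed, for example into JSON for a civic web application, using nothing more than standard-library Python. Data locked in a proprietary format or a scanned PDF would first require specialised extraction tools.

```python
import csv
import json

# Hypothetical example: a municipality publishes its tenders as an
# open CSV file. Because the format is open, converting the records
# to JSON for re-use in a web application takes only a few lines.
with open("tenders.csv", newline="", encoding="utf-8") as f:
    tenders = list(csv.DictReader(f))

with open("tenders.json", "w", encoding="utf-8") as out:
    json.dump(tenders, out, ensure_ascii=False, indent=2)

print(f"Converted {len(tenders)} tender records to JSON.")
```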

Fioretti plans to restrict his research to local government data, since he believes it is more likely to be consistent. As he explained recently to O'Reilly Media editor Andy Oram, "the structure of government projects and costs are more similar from one city to another — even across national EU borders — than from one national government to another."

The project will consist of three main phases. First, Fioretti will produce a report discussing the role of fully accessible and reusable digital raw data in an open society. This will be based on examples taken both from the European Union and the rest of the world.

Second, during the summer he will conduct an online survey — which will be hosted on the LEM website. The survey will aim to establish how many EU municipalities and regions are already making their raw data and procedures available by means of open standards and open licences.

Finally, he will write a further report analysing the results of the survey, and providing some guidelines and best practices for improving full access to digital data.

The grant made available to Fioretti is a modest one — around €12,000 ($14,767), plus an additional €6,000 discretionary fund — but it would seem to be further evidence that the open data movement is gaining mindshare.

In the meantime Fioretti is very keen to hear about real-world examples involving local businesses — both those who have built a successful business (anywhere in the world) by exploiting the free availability of local government data, and those (in Europe) who have struggled to create a viable business as a result of the inaccessibility of this data.

Fioretti can be contacted on mfioretti@nexaima.net.

As noted, Fioretti's research will test the thesis of The Guardian's Free Our Data campaign — i.e. if governments make their data freely available will it encourage businesses to develop new value-added products? And will this in turn spur innovation and create new jobs? Essentially it is an economic argument.

Transparency

But we should not overlook the fact that there are other reasons for governments to open their databases to citizens. Indeed, some might argue that the economic argument is of secondary importance. A far more compelling reason for "freeing our data", they might add, is that it increases government transparency, and so is an inherently democratic step to take.

To do him justice, while he appears to have prioritised the economic case in his project, Fioretti is not blind to the transparency argument. As he points out on his web site, in addition to providing access to valuable government-created information and encouraging innovation, open data makes it easier for citizens to monitor their government's activities and how their tax dollars are being spent. "Modern software technologies and data networks make it both possible and relatively inexpensive to publish online tenders, regulations, documents, procedures and many other raw public data, from digital maps to pollution measurements," he says. "Making this information really accessible, that is online, in open formats and under open licenses, can both improve transparency in government and foster local economical and cultural activities."

In putting the transparency case to O'Reilly's Andy Oram, Fioretti cited the planned construction of the Strait of Messina Bridge (at an estimated cost of €6.1 billion). When the government announces how many tax dollars it plans to spend on a project like this, Fioretti asked, how can the public know that the costs are reasonable if it does not have access to all the data?

Evidently in agreement with Fioretti, Oram expanded on the argument: "[C]ontracts must be very specific about the delivery of data that [a government] commissions — and not just the data, but the formulas and software used to calculate results. For instance, if a spreadsheet was used in calculating the cost of a project, the government should release the spreadsheet data and formulas to the public in an open format so that experts can check the calculations."
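In code terms, "checking the calculations" can be as simple as the following sketch: given the released line items and the published formula (assumed here, purely for illustration, to be a sum plus a fixed contingency percentage), anyone can recompute the headline figure and flag a mismatch. The file name, column name and figures are all hypothetical.

```python
import csv

# Hypothetical check: recompute a project's announced cost from
# released line items and compare it with the published total.
PUBLISHED_TOTAL = 6_100_000_000  # e.g. an announced 6.1bn euro estimate
CONTINGENCY = 0.10               # assumed 10% contingency in the formula

line_items = 0.0
with open("bridge_costs.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        line_items += float(row["estimated_cost"])

recomputed = line_items * (1 + CONTINGENCY)
print(f"Recomputed total: {recomputed:,.0f}")
print(f"Difference from published figure: {recomputed - PUBLISHED_TOTAL:,.0f}")
```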

Certainly it was in the interests of transparency that the UK government released COINS. As the HM Treasury web site puts it, "The coalition agreement made clear that this Government believes in removing the cloak of secrecy from government and throwing open the doors of public bodies, enabling the public to hold politicians and public bodies to account. Nowhere is this truer than in being transparent about the way in which the Government spends your money. The release of COINS data is just the first step in the Government's commitment to data transparency on Government spending."

Coming in the wake of the so-called "MPs' expenses scandal" in Britain, the UK government's decision is far from surprising. What that scandal clearly demonstrated was that Freedom of Information (FoI) legislation alone cannot provide adequate transparency about the way in which governments spend taxpayers' money.

Indeed, it was only after several legal challenges that the UK government eventually conceded the need to be more transparent about MPs' expenses. Even then it continued to prevaricate, and it was only after an insider leaked data to the Daily Telegraph that the public became fully apprised of how irresponsibly British politicians had been spending taxpayers' money for their own personal gain — including using public money to help them fund unnecessary second homes, and to buy toilet seats, duck houses and the like.

This suggests that it would be a serious mistake to limit the case for open data to the economic argument alone. Governments also need to open their databases so that the public can know exactly how its money is being spent, and exactly how well the government is managing the country on its behalf.

Participation

But there is a third — and arguably even more important — reason why governments should open their databases: Doing so will help citizens participate more directly in the government of their country, and at an important moment in history.

As the publisher Tim O'Reilly points out, if governments were to exploit the digital network effectively they could engage the public in the political process in ways never previously possible in a modern society. Citizens could, for instance, "crowdsource" social, political and economic problems, take over responsibility for tasks traditionally seen as the prerogative of the state (by means of what O'Reilly calls "citizen self-organisation"), and even play a direct role in political decision-making.

This would be a radical change. The traditional top-down model of government assumes that public participation in the political process is necessarily circumscribed, since in large analogue societies collective decision making does not scale. This has seen citizens consigned to simply casting their votes and then sitting back while politicians govern in their name. One consequence has been the emergence of a professional political class that now views itself as standing over and above citizens, who tend to be treated as inferiors, or simply as children, rather than as fellow citizens. Essentially the electorate has been infantilised.

Yet as O'Reilly points out, "government is, at bottom, a mechanism for collective action." And in the age of the Web it is possible to make the political process more horizontal and egalitarian. Rather than viewing government as a hierarchical system in which only elected officials (with the help of professional civil servants) come up with solutions, make decisions, and set the rules, government should now be viewed as a platform to facilitate collective action — much as the Web itself has become a platform across which multiple and diverse applications can run without fear or favour (as advocates of net neutrality like to put it, "all bits are equal").

For this reason, says O'Reilly, the key question for governments today should be, "How do you design a system in which all of the outcomes aren't specified beforehand, but instead evolve through interactions between government and its citizens, as a service provider enabling its user community?"

Clearly this would not be possible unless everyone had access to all the relevant data, much as those building web-based services need to know the protocols of the Web in order to participate in the network economy. This suggests that open data should not be viewed as the endgame, but a necessary precondition for something far more radical.

The necessary transition, however, is unlikely to occur naturally: just as the Web is only open because its designers created an open model, so creating an open platform for democratic governance would need to be a deliberate decision. And evidence suggests that politicians will resist greater openness and transparency, since it would require them to give up some of their traditional power and authority.

In this regard perhaps open government advocates have been somewhat naïve. When Obama won the US presidential election in 2008, for instance, they assumed the battle had been won. Arguing that Obama owed his success to his campaign's adoption of the open source approach to electioneering pioneered by Howard Dean in 2004 (under the guiding hand of Joe Trippi), they anticipated that Obama would introduce the same openness in the White House, and so revolutionise the way in which presidents govern in the US.

In fact, Obama appeared to promise as much. As he put it during the election campaign: "We must use all available technologies and methods to open up the federal government, creating a new level of transparency to change the way business is conducted in Washington, and giving Americans the chance to participate in government deliberations and decision making in ways that were not possible only a few years ago."

With that end in mind, Obama promised to post pending legislation online for comment. And on entering the White House he put his weekly addresses up on YouTube, and oversaw the creation of a White House blog and the launch of several federal web sites — including Data.gov — where the public could access "high value, machine readable datasets generated by the Executive Branch of the Federal Government".

Clearly open data was viewed as a given in a Government 2.0 environment.

Caught up in the excitement, Newsweek called it Government 2.0: "Instead of a one-way system in which government hands down laws and provides services to citizens, why not use the Internet to let citizens, corporations and civil organisation work together with elected officials to develop solutions?" it asked.

Within weeks of Obama's arrival in the White House, however, the promised transparency began to give way to traditional top-down government, secrecy, and back-room deals.

Earlier this year, for instance, Obama was accused of reneging on a promise to make his health care negotiations freely available on C-SPAN (The Cable-Satellite Public Affairs Network, also freely available on the Internet).

And a year after the launch of Data.gov the site was being derided for offering little more than "thoughtless data dumps".

In reality, concluded cynics, Obama had simply paid lip service to openness and transparency in order to gain power.

However, it is surely more complicated than that. As O'Reilly points out, "we can be misled by the notion of participation to think that it's limited to having government decision-makers 'get input' from citizens … It's a trap for outsiders to think that Government 2.0 is a way to use new technology to amplify the voices of citizens to influence those in power, and by insiders as a way to harness and channel those voices to advance their causes."

In the meantime, the gap between promise and delivery in the Obama administration appears to be widening. Earlier this month, for instance, the New York Times ran an article suggesting that the Obama administration is actually proving more, not less, secretive than previous ones — and cracking down on government leaks in an unprecedented manner. A recent case [in which an intelligence bureaucrat faces 10 felony charges for handing over classified documents to a blogger], the paper suggested, "epitomizes the politically charged debate over secrecy and democracy."

It added, "In 17 months in office, President Obama has already outdone every previous president in pursuing leak prosecutions. His administration has taken actions that might have provoked sharp political criticism for his predecessor, George W. Bush."

Dangerous times

What has gone wrong? In pondering this question earlier this year the online magazine Slate listed a number of possible reasons — including the White House's use of poor technology; an inability to let go of the traditional top-down approach; and, citing an article by Micah Sifry, co-founder of the annual tech/politics conference Personal Democracy Forum, the possibility that much of the hype surrounding the Obama campaign's web savvy was hype and nothing else.

The disappointment of open government advocates was palpable at the 2010 Personal Democracy Forum earlier this month. Indeed, some have become sufficiently disenchanted to conclude that, rather than facilitating greater democracy, the Web is acting against it. For this reason, suggested Electronic Frontier Foundation (EFF) co-founder John Perry Barlow, those disappointed in Obama should blame the Internet, not the president.

"The political system has partly broken because of the Internet," he argued. "It's made it impossible to govern anything the size of the nation-state". As a consequence, he predicted, "We're going back to the city-state. The nation-state is ungovernably information-rich."

Those not steeped in the culture of American libertarianism will doubtless view Barlow's conclusion as little more than the dystopian fantasy of a cyber-guru who had expected so much more of the Internet. Certainly it runs counter to everything we have learned about the ability of the Web to cope with scale.

So what do we conclude? Is open data really only ever about job creation and innovation? Should we give up on the democratic potential of the Internet? Can we afford to?

Undoubtedly it is possible that governments could release more of their data without making a significant change to the traditional political system. Certainly we cannot rely on them to act in the best interests of their electorate; and so we cannot rely on them to become more transparent, or at least not without a great deal of pressure — much as British politicians had to be forced to change the way in which they managed their expenses system.

But as public cynicism grows, the need for citizens to be brought into the political process in a more meaningful way could become pressing. Today's growing disenchantment with governments and politicians is a dangerous development — and it comes at a critical historical moment.

For as governments around the world are forced to cut spending and raise taxes in response to the global financial crisis, citizens could become sufficiently alienated that political unrest begins to destabilise democratic governments.

Already we have seen pressure placed on the political stability of Greece, and signs of incipient civil unrest in a number of other European countries (e.g. Germany, Spain, Portugal and Italy, with the UK doubtless to follow in the wake of the new government's emergency budget). And the now routine protests at international forums like the G-20 are a further reminder of how more and more citizens are becoming alienated from the political process.

The threat that the financial crisis poses to European democracies was highlighted recently by the President of the European Commission José Manuel Durão Barroso — who warned that crisis-hit countries in southern Europe could fall victim to military coups or popular uprisings as interest rates soar and public services collapse because their governments run out of money.

In short, these are dangerous times. So we cannot afford to retreat into Barlowesque libertarian fantasies. Rather we should be pointing out that opening up government databases is just the first necessary step for engaging citizens more effectively in the political decision-making process. After all, when people are involved in decision making they are more likely to buy into the proposed solutions — and that surely is the raison d'être for collective decision making.

Moreover, any solution emerging from citizens themselves is more likely to be a robust one. As Trippi points out in his book The Revolution Will Not Be Televised (describing the Dean campaign), some of the best ideas adopted by Dean came not from campaign insiders, but from ordinary citizens. "It became pretty obvious quickly that a couple of dozen sleep-deprived political junkies couldn't possibly match the brainpower and resourcefulness of six hundred thousand Americans," explains Trippi. "We couldn't see every hole and every flaw that they could see."

Likewise, describing Dean's radical decision to post details of the campaign's fund-raising on the Web, Trippi explains that the logic was to "[i]nvite people in and open up the books. Give them the knowledge and information — how much money we wanted to raise — and they'd take the responsibility for it." And, he adds, they did take responsibility — as people do when you stop infantilising them.

Yes, governments should make their databases freely available to citizens. Yes, new businesses and services will doubtless be created as a result. Yes, that will surely benefit society at large. But open data should be viewed as merely a preliminary to a more far-reaching change in the way democracies are governed. The more important task is to re-engage citizens in the political process, and empower them to take part in collective decision making. That is not possible without open data, but open data alone will not make it happen.

In short, open data is a good cause to advocate, and Fioretti's initiative is both important and timely. But we need to broaden the discussion. It is not a topic that should be viewed exclusively through the lens of economics and entrepreneurship.

As such, the message to governments should be "Free our data: For democracy's sake".