Monday, March 28, 2005

The Open Wars

With the launch of Yahoo Search for Creative Commons we are reminded once again that in the digital networked world two different publishing models continue to battle for mindshare: one open, the other closed. Which is the more sustainable in the long term? And what relevance does the wider movement for open content have for the debate about Open Access?

Richard Poynder

Last week web search engine Yahoo launched the beta version of a new web search service for locating content made available under Creative Commons (CC) licences. The new service — Yahoo Search for Creative Commons — enables users to limit their web searches to CC content, including text, photographs, songs, web pages, articles etc.

The aim of the new service (and indeed of Creative Commons licences) is to help people distinguish between content that they cannot reuse or adapt without first obtaining the creator's permission, and content for which the creator has given prior permission for reuse.

Although it was already possible to conduct searches on CC content utilising Yahoo's advanced search functions, the search company has created a new interface designed specifically for doing so. In addition, users of the service can further refine their searches to locate CC material designated for specific types of reuse, including content that can be used for commercial purposes, and content that can be modified, adapted, or built upon.

The new service, explains Yahoo on its web site, is designed to help searchers find the works of authors that "have [been] marked as free to use with only 'some rights reserved.'" It adds: "If you respect the rights [these authors] have reserved (which will be clearly marked, as you'll see) then you can use the work without having to contact them and ask. In some cases, you may even find work in the public domain -- that is, free for any use with 'no rights reserved.'"

Significant impact

Launched in 2002, Creative Commons has had a significant impact on the Internet, and today there are an estimated 14 million web pages containing CC-licensed content.

The appeal of Creative Commons lies in the way it separates out the various rights associated with copyright and allows creators to specify those rights they want to keep and those they are happy to waive, but within a set of parameters that they themselves define.

This "some rights reserved" approach — coupled with the ability for creators to give blanket prior permission for certain uses — encourages the sharing and reuse of content, say CC advocates, making it far better suited to the open and co-operative ethos of the Web than the traditional copyright notion of "all rights reserved", and providing a greater stimulus to creative endeavour.

Creators can stipulate, for instance, that anyone can make whatever use they want of a work, so long as the author is credited; that anyone can make whatever use they want, provided it is not done for commercial purposes; that anyone can use it, but only make verbatim copies, not derivative works; or that the work can be offered on a "share-alike" basis — thereby allowing others to make derivative works, but only on condition that the resulting work is then distributed under the same share-alike terms. In total there are 11 CC licences.

Once they have chosen a licence that meets their needs, creators place a CC logo on their site. The HTML snippet that displays the logo also embeds machine-readable metadata in the webpage, linking it to a commons deed on Creative Commons' web site that sets out the usage conditions associated with the licence. As a result, both passing surfers and search engines can quickly establish what permissions apply to the content on a CC-licensed page.
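
The machine-readable hook is simple: the CC snippet marks the licence link with a `rel="license"` attribute. As a rough illustration (not Yahoo's actual code — the function names here are mine), a crawler might pick out a page's CC licence like this:

```python
from html.parser import HTMLParser

class LicenseLinkFinder(HTMLParser):
    """Collects the href of any <a> or <link> tag marked rel="license"."""
    def __init__(self):
        super().__init__()
        self.licences = []

    def handle_starttag(self, tag, attrs):
        if tag in ("a", "link"):
            attrs = dict(attrs)
            # rel can hold several space-separated values
            rels = (attrs.get("rel") or "").split()
            if "license" in rels and attrs.get("href"):
                self.licences.append(attrs["href"])

def find_cc_licences(page_html):
    """Return every licence URL declared in the page's markup."""
    parser = LicenseLinkFinder()
    parser.feed(page_html)
    return parser.licences
```

Feeding it a page containing `<a rel="license" href="http://creativecommons.org/licenses/by/2.0/">` would return that URL, telling the indexer the content is offered under the CC Attribution licence.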

While the Creative Commons web site already offers its own search engine, this is restricted to searching on CC metadata. What Yahoo brings to the party, explains Creative Commons assistant director Neeru Paharia, is that it also searches on backlinks in order to "figure out what license is associated with what page." In addition, she adds, Yahoo is "indexing a ton more pages than we are."

The expectation is that in bringing its formidable web presence to bear on CC-licensed content, Yahoo will increase both the credibility and visibility of Creative Commons. As the chair of Creative Commons Lawrence Lessig commented to ZDNet when the service was launched: "By giving users an easy way to find content based on the freedoms the author intends, Yahoo is encouraging the use and spread of technology that enables creators to build upon the creativity of others, legally."

Open Access

What relevance, if any, does Yahoo Search for Creative Commons have for the Open Access (OA) movement? For OA publishers like BioMed Central (BMC) and the Public Library of Science (PLoS) the service is of immediate interest, since both publish under CC licences.

This means, for instance, that, as all BMC's OA journals have CC rights information embedded in their articles, BMC papers will be immediately visible to the new Yahoo service.

This, says BMC's technical director Matthew Cockerill, is good news for OA. "Scientists make increasing use of Internet search engines such as Yahoo to search the literature. The collaboration between Creative Commons and Yahoo is important as it means that scientists can now easily identify articles (such as those published by BioMed Central) that can be freely downloaded, redistributed and used to create derivative works. Possible applications of this include text mining, specialised subject-specific databases, and digital archiving/preservation systems."

The new service will also be of great interest to scientists who publish with OA journals, adds Cockerill, since it will raise their visibility on the Web. "Scientists generally want the articles that they publish to be redistributed as widely as possible," he explains. "We hope that other search engines will follow Yahoo's lead, and perhaps go even further by highlighting Creative Commons content on their main search results listings."

For researchers who provide OA to their papers by self-archiving them the new service will have less immediate appeal, since self-archiving usually means continuing to publish in traditional subscription-based journals and then depositing the papers in an institutional repository. Since publishers routinely acquire the copyright in the papers they publish, self-archiving authors will not be able to archive them using a CC licence. As such, the papers will not be visible to the new Yahoo service.

However, some believe that the new service has little to offer anyone in the OA movement (or indeed content providers generally), certainly in the long term. Joe Esposito, a management consultant specialising in the intersection of publishing and digital media, for instance, suggests that any benefits will be short-lived. "It temporarily raises something in the search rankings. But search is an arms race. The lead is soon lost."

Decrying the way in which Creative Commons has become "caught up in a quasi-religious fervour that makes it hard to talk about without voices rising to a shout", Esposito characterises the Yahoo move as little more than a cunning marketing ploy, since it was already possible to search on CC content using the advanced search features of the main Yahoo search engine. "Yahoo wades into this religious controversy with all the cunning of a shrewd marketer, for which advocates of free enterprise will be pleased."

Two models

But whatever the impact of Yahoo's service its launch reminds us once again that in the digital networked world two very different content models continue to battle for mindshare: one open, the other closed. On one side of the barrier are those, like OA publishers, who believe that content must be made freely available. On the other sit those who continue to believe that readers must be made to pay the freight.

In the news business, for instance, a fierce debate continues to rage as to whether content can be charged for on the Web. Thus while an abundance of free news services are now available, powerful print brands like The Wall Street Journal and New York Times remain adamant that a closed "pay-to-read" model is the only effective way to maintain their "brand value" and ensure their survival in the long-term (although the NYT doesn’t lock its articles up until they are a week old).

However, critics point out (here and here for instance) that this "walled garden" approach means that publishers practising it are all but invisible on the open oceans of the Web, since search engines are unable to index closed sites. In the age of the Web — where openness and visibility are essential — they argue, this is a risky long-term strategy.

Similarly, in the world of scholarly publishing, Reed Elsevier continues to maintain that pay-to-read, subscription-based publishing remains the only stable model for scholarly communication. OA publishers, by contrast, believe that authors (more accurately, their funders) should pay to publish, thereby allowing the content to be made freely available on the Web.

Interestingly, Esposito does not dispute the need for more openness. He also believes that the greater willingness of OA publishers to adapt to the Web could see them outflank proprietary publishers. He does not, however, believe that alternative copyright licenses have any meaningful role to play here.

As he puts it: "I advise commercial clients to make content freely available, to syndicate content, to work through networks of resellers. Most of them refuse. If OA publishers are more aggressive about taking advantage of the inherent properties of the Web, then the OA publishers will outstrip the proprietary publishers. But this has nothing or very little to do with CC. It has to do with McLuhan: understanding the properties of a particular medium."

Maybe. But since digital content can so easily be copied it is no surprise that the struggle between open and closed publishing models is now firmly centred on issues of copyright. After all, on the Web walled gardens are so easily and so frequently breached that recourse to the law is hard to avoid in the proprietary model.

Refuge of last resort

Indeed, the boundary between acceptable and unacceptable use of content is often so fuzzy now that it is becoming difficult to avoid litigation, and copyright has become the refuge of last resort for proprietary content providers when shipwrecked on the Internet.

The launch of Yahoo Search for Creative Commons, after all, comes hot on the heels of news that fellow search engine Google has been sued by Agence France-Presse (AFP) for allegedly publishing copyrighted content without permission. Google News gathers photos and news stories from around the Web and posts them on its news site, which is free to users.

AFP is seeking more than $17 million in damages and an injunction barring Google from further publishing its photos, news headlines or story leads on the Google News service.

Here is clear proof — were it needed — that copyright has become one of the key areas of conflict on the Internet. And in providing a tool set that better enables content providers to delineate their permissions, Creative Commons is a practical response to the current situation.

But where do we go from here? Commenting on the new Yahoo service on his blog OA researcher Peter Suber says: "As copyright locks down more content more tightly, searchers will want reuse rights almost as much as relevance. Search engines that find both will have an advantage. Conversely, authors and publishers who consent to grant more reuse rights than fair-use alone already provides should make their consent machine-readable for the next generation of search engines."

In short, the hatches are being battened down for a new offensive in the online content business, and copyright has become the weapon of choice for those engaging in battle. What better way, then, to counter the increasingly aggressive "all rights reserved" battleships of proprietary publishers than to flood the seas with fleets of "some rights reserved" destroyers?

The scary thing for publishers is that it is currently difficult to see how the closed model can prevail. If it can't, then the challenge they face — be they scholarly publishers, newspapers, or lone bloggers trying to make a living from their writing — is to find a viable alternative business model in a world where readers are no longer prepared to pay. And it is in wrestling with this conundrum that the OA debate shares so much in common with the wider open content movement. It's the Open Wars. Get used to it!

What's your view: Can the closed model of publishing prevail on the Internet? What impact will Yahoo for Creative Commons have on the Web in the long term? And what relevance does the wider open content movement have for the debate about Open Access? E-mail me at richard.poynder@journalist.co.uk, or to comment publicly hit the comment button below.

Thursday, March 17, 2005

Time to Walk the Talk?

Berlin 3, held in the UK in March as a follow-up meeting for monitoring implementation of the 2003 Berlin Declaration, provided a timely opportunity to feel the pulse of the Open Access (OA) movement. How does the patient look? On paper, he looks good. Indeed, he turned out to be fitter than expected: instead of succumbing to factional disputes and bickering, delegates at the meeting agreed a short, very practical action plan for implementing the Declaration. But can the movement now follow through?

Richard Poynder

The OA movement has never been short on declarations, petitions and exhortations. In 2000 there was the Public Library of Science (PLoS); in 2002 the Budapest Open Access Initiative; and 2003 saw the Bethesda Statement and the Berlin Declaration. And these are just the better known asseverations.

When it comes to "walking the talk", however, the movement has been less successful. Today still only 5% of scholarly papers are published in OA journals, and only 15% of the estimated 2.5 million articles published annually are self-archived.

In short, despite the plethora of fine words and public statements calling on interested parties to "free the refereed literature" — and despite ten years of OA agitation and proselytizing — the vast majority of the world's scholarly output remains firmly locked behind increasingly expensive subscription firewalls. This, complain OA advocates, is holding back research, and threatens to slow the progress of science.

But could the OA tide be about to turn? Delegates at the Berlin 3 meeting dared to think so. "I was impressed with the amount of activity going on – specifically with institutional archives," says Barbara Kirsop of the Electronic Publishing Trust. "There seemed no consideration now about 'whether we should support OA', but just 'how can we better support OA'." Indeed, she concludes, the movement is now "unstoppable."

Certainly Berlin 3 provided a good opportunity to feel the pulse of the OA movement. The Berlin Declaration, after all, is now the primary flag around which many OA advocates rally, and it has a truly international membership, and focus.

The 55 signatories include international research institutions like CERN; large national research institutions like France's CNRS and Germany's Max Planck Institutes; national Academies of Science in China, India and the Netherlands; and a wide variety of individual universities and research funding agencies around the world. Delegates to Berlin 3 also came from Japan, Scandinavia and Italy.

Incongruous setting

The two-day event — one of the now bi-annual follow-up conferences for monitoring implementation of the 2003 Declaration — took place in the incongruous surroundings of an English Edwardian Manor just north of Southampton. While the location was certainly pleasant enough ("set amongst 12 acres of beautiful landscaped gardens", boasts the Manor's web site), obtaining an online connection was all but impossible — even when armed with a cell phone data card!

Day one began with a UK satellite session — an event that served to underline the degree to which the UK Science & Technology Select Committee Report and the publication of the final version of the US National Institutes of Health (NIH) policy on public access to research have shifted the emphasis of the OA movement. Where previously the stress was on the gold road to OA (in which researchers publish in new-style OA journals) today much greater weight is placed on the green road (where researchers continue publishing in traditional subscription-based journals but then self-archive the papers — either in a central subject-based repository or an institutional archive).

It was also quickly apparent that some progress is being made. Nottingham University's Bill Hubbard, for instance, explained that — despite the UK Government spurning the Select Committee's recommendation that it fund a network of institutional repositories (IRs) — those repositories are nevertheless being built. The 20 research universities belonging to the SHERPA consortium, for example, have all created their own IRs, and are now starting to focus on how they can ensure that they are filled.

It is clear, however, that creating an IR is the easy bit: filling it is far harder. Primarily, Key Perspectives' Alma Swan explained to delegates, this is a question of ignorance. Surveys carried out by the company indicate that 78% of researchers who do not currently self-archive "are not aware of the possibility of providing open access to their work by self-archiving." Clearly there is a need for greater education and advocacy.

The good news, added Swan, is that were researchers mandated to self-archive, most would comply. 79% of those surveyed, she said, indicated that they would willingly self-archive if their institution told them they had to. She noted, however, that both the UK Government and the US NIH have chosen not to mandate anything.

But at least the UK Government's failure to act gave Derek Law, university librarian at the University of Strathclyde, an opportunity to boast that Scotland is ("as usual") way ahead of England. Thus while the British Government has chosen to sit on its hands over OA, Scotland has been busy developing its own national OA policy — a policy that will shortly see Scottish universities beginning to mandate their academics to self-archive their papers.

When the main conference began, the keynote address was given by Tony Hey, director of the UK e-Science programme. Talking on the theme of data-archiving and interoperability, Hey put the OA movement into the larger context of distributed global collaboration between scientists. It was a fascinating presentation, and emphasised how academic research — along with the large data collections that much scientific research depends upon — will increasingly need to be readily accessible to researchers if science is to progress at an acceptable rate.

The keystroke strategy

But it was day two that proved the more interesting. The day began with an upbeat presentation from Stevan Harnad, leading self-archiving advocate and professor of cognitive science at Southampton University, who explained to delegates the "Keystroke Strategy" OA policy that the University has introduced.

Conscious that institutions are confronted with a potent mix of ignorance and inertia when trying to fill their repositories (and in the light of the failure of government to take the initiative on OA), the Keystroke Strategy exploits the UK Research Assessment Exercise (RAE) — upon which hangs the fate of academic promotions and departmental budgets — as a tool with which to incentivise researchers to self-archive their papers.

Specifically, to ensure that their research is counted towards their RAE contribution Southampton academics must ensure that copies of their papers are keyed into the University archive. In effect, any research not archived will be "invisible" for RAE purposes.

The point, Harnad explained to delegates, is that the best way to encourage researchers to self-archive is to request they do it "not for the sake of Open Access, but for record-keeping and performance evaluation purposes."

Once a paper is available in the IR, added Harnad, it requires only a simple additional keystroke to make it OA. That final keystroke, he said, would make what was already visible to the institution visible to the rest of the world over the web; and the message to researchers is that “the Nth (OA) Keystroke is strongly encouraged (both for preprints and postprints) but is up to you.”

As the day progressed it became evident that while the UK may have been the main centre of debate about self-archiving, institutions in other countries are also proving effective at building and filling IRs. The manager of the Dutch SURF/DARE programme Leo Waaijers, for instance, explained to delegates how a consortium of the 12 principal Dutch universities has agreed an effective policy of institutional self-archiving.

Representatives from the French national research centres CNRS and INSERM, and from CERN, also outlined their institutional OA policies.

CERN's Joanne Yeomans gave a particularly stimulating presentation. Following the introduction of an institutional mandate, she reported, CERN estimates that around 60% of its research output is now made OA. Moreover, she added, this figure is expected to rise to 100% as a result of new initiatives shortly to be introduced.

In addition, explained Yeomans' colleague Jens Vigen, CERN has started to archive its historical research output. This, however, is not without its challenges, he added. Since researchers historically assigned copyright to the publishers, CERN first has to obtain the publishers' approval before scanning the papers. Unfortunately, he said, permission is not always forthcoming: both Elsevier and Wiley, for instance, have refused to allow a number of 50-year-old papers produced by CERN researchers to be scanned — even though, in the case of Wiley, the papers are not provided online by the publisher!

In order to be able to provide OA "at source" CERN researchers are also being actively encouraged to publish in OA journals, reported Yeomans. In addition, she added, CERN is exploring ways in which it can support the start up of new OA journals — a reminder that while self-archiving has won the argument for now, many believe that in the long run the gold road offers a more effective and stable way of providing Open Access to scholarly research.

Highlight

But the highlight of Berlin 3 — and one which came like a bolt from the blue — occurred during the final plenary session. Intended simply as a forum to discuss, and possibly update, the wordy roadmap produced at Berlin 2 last May, the session quickly threatened to descend into a series of bad-tempered wrangles over issues like metadata, copyright, and distributed versus central archives.

At the last minute, however, delegates apparently concluded that, rather than fighting needlessly over minor issues, they could work together for the common good. By now it was also apparent that the roadmap was simply too lengthy and unfocused to provide an effective call to arms. In a remarkably short period of time, therefore, delegates agreed a short statement intended — in the words of the Max Planck Society's Georg Botz — to be "an implementation guide in a nutshell."

The wording agreed was:

"In order to implement the Berlin Declaration institutions should:

“1) implement a policy to require their researchers to deposit a copy of all their published articles in an open access repository, and

“2) encourage their researchers to publish their research articles in open access journals where a suitable journal exists and provide the support to enable that to happen."

That it proved possible in a very short time to agree two very practical measures that signatory institutions can take in order to enact the Declaration they had signed evidently provided a real fillip to delegates. "I was quite surprised," says Kirsop, "but this was good and shows that people wanted something more positive to come out of the meeting."

Even the normally rumbustious and argumentative Harnad was taken aback at such an apparently satisfactory outcome to the meeting. "The distillation into a short clear action plan had in fact not been on the formal agenda, and its adoption came as a total surprise," he says.

Follow Through?

Time will tell whether the statement agreed at Berlin 3 will prove to be a brief moment of harmony, and proactive intent, or just one more paper wish list. History certainly suggests that the latter could prove to be the case.

On the other hand, the progress reports from Holland, Scotland, and CERN — not to mention Southampton — suggest that the movement now has sufficient momentum to ensure that the Berlin 3 statement becomes a reality.

The question then, of course, becomes not whether but when. The fact that many of the Berlin Declaration signatories have yet to demonstrate a concrete commitment to the Declaration they signed suggests there may still be a long road to travel. On the other hand, since it had until Berlin 3 been completely unspecified exactly what they were meant to commit to, the concrete policy proposal now gives them a basis for acting — if they are minded so to do.

Key to what happens next will presumably be the fate of the statement. Is it now an official statement, and will it be integrated into the Berlin Declaration? Might it still be further edited?

For the moment, replies Botz, the statement "remains provisional." Moreover, he adds, it is not expected that the Berlin Declaration will be changed in any way "because that would require asking all signatories whether they agree."

Clearly the danger is that the signatories (particularly those who did not attend Berlin 3) may just ignore it — an outcome all the more possible given that the Berlin Declaration did not create any formal structure or organisation. Rather it acts merely as a consensus group. "Each organisation," explains Fred Friend, who chaired the plenary session, "takes the Berlin process forward according to the situation in its own environment."

Nevertheless, insists Friend, it would be wrong to conclude that this will lessen the impact of the statement agreed at Berlin 3. "Do not assume that because the statement is not a condition of being a signatory to the Berlin Declaration that it is only a wish and therefore ineffective," he says. "There are other examples of very effective statements from organisations without formal structures — for example, the International Coalition of Library Consortia, ICOLC, likewise has no formal structure but its various statements have been very influential in setting standards for licensing and usage of electronic content."

In fact there is absolutely no need to re-word the Berlin Declaration, says Harnad; nor is there any need for the original signatories to re-sign it. "It merely needs a 'commitment' rider which describes the new implementation policy and invites institutions to sign it, separately, to register their commitment to implementing the Berlin Declaration and to describe their own institutional self-archiving policy so other institutions worldwide can see how progress is being made and can emulate their example.

"This 'commitment' sign-up site already exists," he adds, "and institutions can already begin registering their commitment and their self-archiving policies." ***

Something concrete

PLoS' Andy Gass is also upbeat, arguing that following Berlin 3 the signatories now have both a declaration of intent and two concrete components to work with. As he puts it: "Presumably, all the institutions that signed the declaration did so with the intention of following through on their commitment to 'encourag[e] our researchers/grant recipients to publish their work according to the principles of the open access paradigm'."

Adds Gass: "Now that it's a bit more clear what that should mean in practice, it makes sense that the signatories should simply act on the recommendations of delegates to the follow-up meeting. The left hand and the right hand just need to come together."

Only time will tell if the Open Access movement has finally reached the point where it will "walk the talk". What concerns Harnad is that, until it does, valuable time is being unnecessarily lost, to the detriment of scientific progress.

"The danger is that we just keep signing declarations, scheduling meeting after meeting, and drafting a long, and increasingly complicated 'roadmap' — while daily/weekly/monthly access and impact just keep being lost. With the statement agreed at Berlin 3 we finally came up with something concrete that signatories can formally recommend, and commit to doing, and do. I hope they now do it."

What do you think? Is it time for the Open Access movement to walk the talk? Can it? Will it? E-mail your views to me at richard.poynder@journalist.co.uk, or to comment publicly hit the comment button below.

*** Shortly after this article was posted CNRS signed the "commitment".

(A report of the Berlin 3 meeting written by Stevan Harnad can be read in the March issue of D-Lib.)

Saturday, March 12, 2005

What is Open Access?

Richard Poynder

The aim of the Open Access (OA) movement, says Peter Suber's Open Access News, is to ensure that all "peer-reviewed scientific and scholarly literature" becomes available on the internet "free of charge and free of most copyright and licensing restrictions." The aim, he adds, is to remove "the barriers to serious research."

What does this mean in practice, and how can OA best be achieved? That has been the subject of frequent bitter dispute for at least the last ten years. Latterly the two main OA camps have consisted of those who promote the so-called Green Road (where researchers continue to publish in traditional subscription-based journals and then self-archive their papers on their personal web site or institutional repository) and those pushing the Gold Road (where researchers publish in new-style OA journals that charge a fee to publish papers, and then release them freely on the internet at the time of publication).

Until recently discussions about OA have tended to be dominated by the gold supporters. However, in the wake of the UK Science and Technology Committee Report into scientific publishing, and the release of the final version of the National Institutes of Health (NIH) policy on public access to NIH-funded research, the debate has shifted significantly, and the green approach now looks likely -- for the foreseeable future at least -- to set the agenda.

At the same time, most publishers now appear ready to embrace the inevitable. Certainly the announcement this week that the American Chemical Society will introduce two experimental policies, including one in which "as a value-added service to ACS authors and a method of further opening access to its content, the full-text version of all research articles published in ACS journals will be made available at no charge via an author-directed Web link 12 months after final publication", seems like a positive signal that both commercial and non-profit STM journal publishers now accept that publicly-funded research must be made freely available on the web.

The ACS announcement is significant since the ACS was one of a handful of remaining publishers that consistently refused to "go green" and allow author self-archiving (technically ACS was classified as a gray publisher).

On the surface it appears that the ACS has had a conversion. As ACS Publications Senior Vice President Brian Crawford commented: "It is fundamental to the ACS mission to support and promote the research enterprise and to foster communication among its scientists. Providing unrestricted access via author-directed links 12 months after publication – in addition to the 50 free e-prints currently allowed during the first year of publication – reinforces that mission."

Undoubtedly the new ACS policy is a direct response to the NIH policy. As Crawford commented: "We understand that NIH-funded authors will wish to comply voluntarily with the NIH's policy request. By introducing this service, the ACS will take on the administrative burden of compliance and at the same time will ensure the integrity of the scientific literature by depositing the appropriate author version of the manuscript after peer-review."

The ACS announcement is also a clear marker that it wants to maintain control of the process. But is it a signal that a consensus on OA has finally begun to emerge, and can we now expect to see a smooth transition to OA? Or is the ACS move merely a cynical ploy to engineer a situation in which a 12-month embargo becomes not (as intended by the NIH) an outer boundary, but the norm?

Certainly the ACS has made the most of the watered-down NIH policy. Where the initial NIH proposal had been to mandate NIH-funded researchers to make their papers available six months after publication, the final wording only "strongly encourages" grantees to authorise public release of their papers "as soon as possible" after publication, and no later than 12 months after publication.

Stevan Harnad, a leading proponent of the green cause, and author of The Subversive Proposal, is unimpressed with the ACS position, believing it to be an attempt to make a virtue out of necessity. Consider, he says:

"(1) 12 month Back Access is so inconsequential for revenue that many publishers already are or are planning to offer it anyway – nothing to do with OA. AAAS for example offered Back Access already 3 years ago. Shulenburger had already proposed it ("NEAR") way back in 1998! http://www.arl.org/arl/proceedings/133/shulenburger.html

"(2) But besides not being OA and not being particularly different from the status quo, offering 12-month Back Access in the name of satisfying the need and demand for OA in general, and OA self-archiving in particular is not an improvement but an entrenchment of what needs to be changed.

"In ordinary English," he adds, "access 12 months late is not very useful in itself and in any case already becoming the norm, but from a gray publisher it is actually a pretext for not going green, and as such, is no improvement at all. We should applaud partial steps only if they lead toward and increase the probability of 100% OA, not if they lead away from it!"

Even before the ACS announcement, the fear amongst OA advocates was that the NIH policy might make little difference to the progress of OA. It is widely believed, for instance, that a 12-month embargo is not only unnecessary but also too restrictive to be classed as Open Access. As Peter Suber pointed out in the February issue of his SPARC Open Access Newsletter, the "chief problem" with the final NIH policy is that "free online access could be delayed up to 12 months after publication. This is a significant delay, more serious in biomedicine than in most other fields. It will slow down research and slow down advances that promote public health."

But Harnad fears that the NIH policy is not only allowing gray publishers like the ACS to make a 12-month embargo the norm, but is encouraging green journals (which currently allow immediate self-archiving) to introduce embargoes where they did not previously exist.

This threat was evident in a January announcement from Nature Publishing Group (NPG), which stated that NPG planned to encourage its authors "to submit the author's version of the accepted, peer-reviewed manuscript to their relevant funding body's archive, for release six months after publication." NPG added: "authors will also be encouraged to archive their version of the manuscript in their institution's repositories (as well as on their personal web sites), also six months after the original publication."

While OA advocates initially greeted the announcement with enthusiasm, many quickly saw a sting in the tail. Since NPG had previously allowed self-archiving immediately on publication, news that it was now introducing a six-month embargo suggested that NPG was, in the words of Harnad, "Back-Sliding."

NPG's David Hoole denies this, insisting that the NPG's new policy is "a genuine attempt to extend archiving rights in the context of developments at the NIH and other funding bodies, and in the context of the Select Committee enquiry -- which recommended the development of institutional repositories. We wanted," he adds, "to be proactive, pragmatic, and involved."

Whatever the motives, says Harnad, it represents a step backwards. Moreover, he adds, the fundamental problem with embargoed access is that it is not OA, which implies "immediate, permanent, online access to the full-texts of peer-reviewed research journal articles, free for all users, webwide".

Publishers, however, appear to disagree. In an article in a recent issue of Serials Review, for instance, Sally Morris, Chief Executive of the Association of Learned and Professional Society Publishers, described OA as "free, unrestricted access (to primary research articles) for everyone." This description makes no mention of the need for "immediate" access.

One difficulty OA advocates face in trying to resist the growing embargo creep is that the Budapest Open Access Initiative (which many see as the genesis of the OA movement) did not itself specify immediate access -- a point that Harnad concedes. This oversight, he adds, is a legacy of the movement's long-standing over-emphasis on the gold road. "In its gold-centrism (where the immediacy comes with the territory) the OA movement has not been sufficiently explicit that green OA too must be immediate and permanent, not just peek-a-boo..."

So we are left with the question: what exactly is Open Access? And if OA does indeed imply immediate access, how should OA advocates respond to embargo creep?