Wednesday, December 28, 2016

Open access and Africa

In November I reported that PLOS CEO Elizabeth Marincola is leaving the open access publisher in order to take up a position as Senior Advisor for Science Communication and Advocacy at an African organisation. 

At the time, PLOS said it could not say exactly where Marincola was going as it had to wait until the organisation concerned had held its board meeting in December.

But last week Marincola confirmed to The Scientist that the organisation she will be joining is the African Academy of Sciences (AAS), based in Nairobi, Kenya. (I am not aware that PLOS itself has put out a press release on this). Marincola will be leaving PLOS at the end of the year (this week), with PLOS Chief Financial Officer Richard Hewitt serving as interim CEO from January 1st 2017.

We can surely assume that Marincola will be advocating strongly for open access in her new position at the AAS.

But where does this leave PLOS? I discussed this and the challenges I believe PLOS currently faces in November, but I was not able to get Marincola’s views. In a Q&A published yesterday, however, The Scientist asked Marincola where she saw PLOS’ place in today’s open-access publishing marketplace.

Marincola replied, “The first and primary mission of PLOS when it was founded was to make the case that open-access publishing could be a sustainable business, whether in a nonprofit environment or a for-profit environment. So the very fact we have a lot of competition now is extremely satisfying to us and it is, in itself, a major part of our vision. As Harold Varmus said when he cofounded PLOS, if we could put ourselves out of business because the whole world becomes open-access STM publishing, that would be the greatest testament to our achievements.”

Meanwhile at Elsevier


Marincola is not the only one to have developed an interest in open access, in Africa, and in the African Academy of Sciences. In 2014 Elsevier announced that it was partnering with AAS to support researchers by means of a publishing training programme. This, it said, would include offering access to Elsevier Publishing Connect and providing support for hosting live, online webinars.

And last year SciDev.net reported that Elsevier is planning to launch a new African open access mega journal (presumably in the style of PLOS ONE). This would be free to readers, but authors and their organisations would have to pay to publish – although SciDev.net indicated that internal discussions were taking place over whether publishing fees should be waived for the first five years.

One of the organisations Elsevier was said to be working with in developing the mega journal is the AAS. The other partners in the group are the African Centre for Technology Studies, the South African Medical Research Council and IBM Research-Africa.

SciDev.net anticipated that the new journal would be launched this year, with the first papers being published in 2017. If the journal is still planned, then presumably the launch date has slipped.

Clearly there is growing interest in promoting open access and OER (open educational resources) in Africa. But some believe that the involvement of people and organisations from the Global North can be a mixed blessing, as they can end up setting the agenda in a way that is not conducive to local conditions. One African tweeter commented recently, “The agenda for, and lead in, African studies should be set by African scholars.”

The same sentiment is often expressed about publishing and publishers, especially when large for-profit companies like Elsevier get involved. In a blog post last year University of Cape Town OA advocate Eve Gray said of the planned new mega-journal: “Could this venture under the Elsevier banner provide the impact and prestige that the continent’s research has been so sadly lacking? Or could it be simply that it could provide a blank slate for Elsevier, experimenting in the face of market uncertainty?  Or, at its crudest, just a neo-colonial land-grab in the face of challenges in the markets that Elsevier dominates?”

Certainly as it confronts growing hostility in Europe (and German researchers face the new year without access to its journals as a result), Elsevier must be keen to develop new markets in other parts of the world.

But as always with open access and scholarly publishing there are no simple answers, nothing can be predicted, and opinion is invariably divided.

Postscript: I emailed the African Academy of Sciences and asked whether Marincola will be working on Elsevier's new mega-journal in any way. At the time of writing I had yet to receive a reply.

Tuesday, December 06, 2016

Tracking Trump


While much ink has already been spilled on the manifold implications of Donald Trump's surprise win in the US presidential election, I am not aware that much has been written about what it might mean for Public Access, as Open Access is called in the context of research funded by the US Government.

I was therefore interested last week to receive a copy of the current issue of David Wojick’s Inside Public Access newsletter. Wojick has been tracking the US Public Access program for a while now, and the latest issue of his subscription newsletter looks at what the arrival of the Trump Administration might mean for the Program. Wojick agreed to let me publish an edited version of the issue, which can be read below.

Guest post by David Wojick 

The transition team


To begin with, the Trump Administration has gotten off to a very slow start. The transition team did very little work prior to the election, which is unusual: federal funding is available to both major candidates as soon as they are nominated, and Romney's transition team spent a reported $8.9 million before the 2012 election. The Trump team has spent very little.

The transition team has a lot to do. First, it is supposed to vet applicants and job holders for about 4,000 federal positions, which are held “at the pleasure of the President.” About 1,000 of these positions require Senate approval, so the vetting is not trivial.

There is a transition team for each Cabinet Department and the major non-Cabinet agencies, like the EPA and the SEC. In addition to vetting applicants, the teams are supposed to meet with the senior civil servants of each department and agency, to be briefed on how these huge and complex organizations actually operate. Something as small as Public Access may not be noticed.

Each team is also supposed to begin to formulate specific policies for their organization. Given how vague Trump has been on policy specifics, this may not be easy. Or it may mean that the teams have pretty broad latitude when it comes to specific agency policies. There seems to be little information as to who makes up each agency team, so their views on public access are unknown at this point.


Moreover, the head of the Energy Department transition team was recently replaced, which has to slow things down a bit. DOE has been a leader in developing the Public Access Program. But in the long run the fate of Public Access is in the hands of the Department and Agency heads, and their deputies, not the transition team. Science-related nominations have yet even to be announced.

The Science Advisor and OSTP


Then there is the issue of OSTP and the 2013 Memorandum that created the Public Access Program. The Office of Science and Technology Policy is part of the Executive Office of the President. It is headed by the President’s Science Advisor.

At one extreme the Memo might simply be rescinded. President Obama issued a great many orders and executive memos, in direct defiance of the Republican-led Congress. Many of these orders seem likely to be rescinded, and Public Access might get caught in the wave and wiped out. Then too, Republicans tend to be pro-business, and the publishers may well lobby against the Public Access Program.

On the other hand, a public access policy is relatively non-partisan, as well as being politically attractive. The new OSTP head might even decide to strengthen the program, especially because Trump is being labeled as anti-science by his opponents.

The OSTP situation is also quite fluid at this point. No Science Advisor has even been proposed yet, so far as I know. The vast majority of academic scientists are Democrats. The last Republican president took a year in office before nominating a Science Advisor, and he was a Democrat.

The American science community is watching this issue very closely, even though the Science Advisor and OSTP have very little actual authority. The Public Access Program is really something of an exception in this regard, but it is after all largely an administrative program. In the interim, OSTP has over a hundred employees, so it will keep operating. So will the Public Access Program if the Memo is not rescinded.

In fact, the slower the Trump people are in taking over, the longer the Government will be run by civil servants who will favor the status quo. This will be true of all the Departments and Agencies. The worst-case scenario would be if OSTP were eliminated altogether. There is some discussion of this, but it seems unlikely as a political strategy. It would be viewed as a direct attack on science and it has no upside.

In any case, given that their internal Public Access Programs are well established, the agencies could decide to continue them, absent the OSTP Memo, or even OSTP.

Funding


Then there is the funding issue. The Public Access Program is generally internally funded out of existing research budgets. If these are cut, then Public Access might be internally defunded.

Both the Trump people and the Congressional leaders are talking about cutting funding for certain research areas. A prominent example is NASA’s Earth Science Division, which grew significantly under President Obama. If funds are actually cut, rather than simply redirected, then Public Access might take a hit.


Innovation


On the other hand, every new Department and Agency head and staff will be looking for flashy new ideas, especially if they do not cost much. Public Access has a populist aspect, which is Trump’s theme, so it could well be presented this way.

The agency civil servants are missing a bet if they do not see this opportunity to pitch public access. “Science for everyone” is a central theme of open access. So is accelerating science and innovation, which fits into the “Make America Great Again” slogan of the Trump campaign.

Congress


More deeply, Congress is likely to be unleashed after many years of partisan gridlock. This may be far more important than what the new Administration does. Congress controls the money and makes the laws, and the lack of statutory authority for Public Access at most agencies has been a vulnerability for the Program.


In other words, while the OSTP Memo can be rescinded, a law is permanent (unless repealed, of course). The US National Institutes of Health (NIH) introduced a mandatory Public Access Policy in 2008, but other agencies proved reluctant to follow its example, which is why we saw the OSTP Memo. This reluctance (along with a desire to give Public Access a more solid foundation) has also fuelled growing pressure for a statutory Public Access requirement for US Government departments and agencies.

Section 527 of the Consolidated Appropriations Act of 2014 required that the Departments of HHS, Education and Labor introduce a Public Access Program along the lines of the OSTP Memo. More importantly, the proposed Fair Access to Science and Technology Research (FASTR) Act is waiting in the wings.

FASTR would require that all US Government departments and agencies with annual extramural research expenditures of over $100 million make manuscripts of journal articles stemming from research funded by that agency publicly available over the Internet. First introduced in 2013, FASTR was reintroduced in 2015.

It is worth stressing that FASTR is a bipartisan bill, and was introduced to the Senate by Republican John Cornyn. As such, a Congressional mandate is well within reason.

CHORUS


If the Public Access Program disappears then CHORUS will need to redirect its efforts. It already has several pilot efforts going in that direction. These include working with the Japanese Government and several US universities.


Conclusion


In short, interesting times lie ahead for the US Public Access Program, as the Trump Administration emerges and begins to act, along with the now unfettered Congress. Inside Public Access will be tracking this action.

__________________________________________________________________
Information about Inside Public Access can be accessed here.

David Wojick is an independent engineer, consultant and researcher with a Ph.D. in Philosophy of Science and a forty-year career in public policy. He has also written 30 articles for the Scholarly Kitchen, mostly on OA. From 2004 to 2014 Wojick was Senior Consultant on Innovation for the US Energy Department’s Office of Scientific and Technical Information (OSTI), a leader in public access.

Monday, November 21, 2016

PLOS CEO steps down as publisher embarks on “third revolution”

I HAVE POSTED AN UPDATE PIECE ON THIS HERE.


On 31st October, PLOS sent out a surprise tweet saying that its CEO Elizabeth Marincola is leaving the organisation for a new job in Kenya. Perhaps this is a good time to review the rise of PLOS, put some questions to the publisher, and consider its future.

PLOS started out in 2001 as an OA advocacy group. In 2003, however, it reinvented itself as an open access publisher and began to launch OA journals like PLOS Biology and PLOS Medicine. Its mission: “to accelerate progress in science and medicine by leading a transformation in research communication.” Above all, PLOS’ goal was to see all publicly-funded research made freely available on the internet.

Like all insurgent organisations, PLOS has over the years attracted both devoted fans and staunch critics. The fans (notably advocates for open access) relished the fact that PLOS had thrown down a gauntlet to legacy subscription publishers, and helped start the OA revolution. The critics have always insisted that a bunch of academics (PLOS’ founders) would never be able to make a go of a publishing business.

At first, it seemed the critics might be right. One of the first scholarly publishers to attempt to build a business on article-processing charges (APCs), PLOS gambled that pay-to-publish would prove to be a viable business model. The critics demurred, saying that in any case the level at which PLOS had set its prices ($1,500) would prove woefully inadequate. Commenting to Nature in 2003, cell biologist Ira Mellman of Yale University, editor of The Journal of Cell Biology, said: “I feel that PLOS’s estimate is low by four- to sixfold.”

In 2006, PLOS did increase the fees for its top two journals by 66% (to $2,500), and since then the figure has risen to $2,900. While this is neither a four- nor a sixfold increase, we must doubt that these prices would have been enough to make an organisation with PLOS’ ambitions viable. In 2008 Nature commented, “An analysis by Nature of the company’s accounts shows that PLOS still relies heavily on charity funding, and falls far short of its stated goal of quickly breaking even through its business model of charging authors a fee to publish in its journals. In the past financial year, ending 30 September 2007, its $6.68-million spending outstripped its revenue of $2.86 million.”

Wednesday, October 05, 2016

Institutional Repositories: Response to comments

The introduction I wrote for the recent Q&A with Clifford Lynch has attracted some commentary from the institutional repository (IR) and open access (OA) communities. I thank those who took the time to respond. After reading the comments the following questions occurred to me.



(A print version of this text is available here)


1.     Is the institutional repository dead or dying?

Judging by the Mark Twain quote with which COAR’s Kathleen Shearer headed her response (“The reports of our death have been greatly exaggerated”), and by CORE’s Nancy Pontika insisting in her comment that we should not give up on the IR (“It is my strong belief that we don’t need to abandon repositories”), people might conclude that I had said the IR is dead.

Indeed, by the time Shearer’s comments were republished on the OpenAIRE blog (under the title “COAR counters reports of repositories’ demise”) the wording had strengthened – Shearer was now saying that I had made a number of “somewhat questionable assertions, in particular that institutional repositories (IRs) have failed.”

That is not exactly what I said, although I did quote a blog post by Eric Van de Velde (here) in which he declared the IR obsolete. As he put it, “Its flawed foundation cannot be repaired. The IR must be phased out and replaced with viable alternatives.”

What I said (and about this Clifford Lynch seemed to agree, as do a growing number of others) is that it is time for the research community to take stock, and rethink what it hopes to achieve with the IR.

It is however correct to say I argued that green OA has “failed as a strategy”. And I do believe this. I gave some of the reasons why I do in my introduction, the most obvious of which is that green OA advocates assumed that once IRs were created they would quickly be filled by researchers self-archiving their work. Yet seventeen years after the Santa Fe meeting, and 22 years after Stevan Harnad began his long campaign to persuade researchers to self-archive, it is clear there remains little or no appetite for doing so, even though researchers are more than happy to post their papers on commercial sites like Academia.edu and ResearchGate.

However, I then went on to say that I saw two possible future scenarios for the IR. The first would see the research community “finally come together, agree on the appropriate role and purpose of the IR, and then implement a strategic plan that will see repositories filled with the target content (whatever it is deemed to be).”

The second scenario I envisaged was that the IR would be “captured by commercial publishers, much as open access itself is being captured by means of pay-to-publish gold OA.”

Neither of these scenarios assumes the IR will die, although they do envisage somewhat different futures for it. That said, both could see the link between the IR and open access weaken. Already we are seeing a growing number of papers in IRs being hidden behind login walls – either as a result of publisher embargoes or because many institutions have come to view the IR less as a way of making research freely available and more as a primary source of raw material for researcher evaluation and/or other internal processes. As IRs merge with Research Information Management (RIM) tools and Current Research Information Systems (CRIS), this darkening of the content in IRs could intensify.

What makes this darkening likely is that the internal processes that IRs are starting to be used for generally only require the deposit of the metadata (bibliographic details) of papers, not the full-text. As such, the underlying documents may not just be inaccessible, but entirely absent.

This outcome seems even more likely in my second scenario. Here the IR is (so far as research articles are concerned) downgraded to the task of linking users to content hosted on publishers’ sites. Again, to fulfil such a role the IR need host only metadata.

2.     So what is the role of an institutional repository? What should be deposited in it, and for what purpose?

As I pointed out in my introduction, there is today no consensus on the role and purpose of the IR. Some see it as a platform for green OA, some view it as a journal publication platform, some as a metadata repository, some as a digital archive, some as a research data repository (I could go on).

It is worth noting here a comment posted on my blog by David Lowe. The reason why the IR will persist, he said, “is not related to OA publishing as such, but instead to ETDs.” Presumably this means that Lowe expects the primary role of the IR to become that of facilitating workflows for ETDs (electronic theses and dissertations).

It turns out that ETDs are frequently locked behind login walls, as Joachim Schöpfel and Hélène Prost pointed out in a 2014 paper called Back to Grey: Disclosure and Concealment of Electronic Theses and Dissertations. “Our paper,” they wrote, “describes a new and unexpected effect of the development of digital libraries and open access, as a paradoxical practice of hiding information from the scientific community and society, while partly sharing it with a restricted population (campus).”

And they concluded that the Internet “is not synonymous with openness, and the creation of institutional repositories and ETD workflows does not make all items more accessible and available. Sometimes, the new infrastructure even appears to increase barriers.”

In short, the roles that IRs are expected to play are now manifold and sometimes they are in conflict with one another. One consequence of this is that the link between the repository and open access could become more and more tenuous. Indeed, it is not beyond the bounds of possibility that the link could break altogether.

3.     To what extent can we say that the IR movement – and the OAI-PMH standard on which it was based – has proved successful, both in terms of interoperability and deposit levels?

As I said in my introduction, thousands of IRs have been created since 1999. That is undoubtedly an achievement. On the other hand, many of these repositories remain half empty, and for the reasons stated above we could see them increasingly being populated with metadata alone.

Both Shearer and Pontika agree that more could have been achieved with the IR. With regard to OAI-PMH Pontika says that while it has its disadvantages, “it has served the field well for quite some time now.”

But what does serving the field well mean in this context? Let’s recall that the main reason for holding the Santa Fe meeting, and for developing OAI-PMH, was to make IRs interoperable. And yet interoperability remains more aspiration than reality today. Perhaps for this reason most research papers are now located by means of commercial search engines and Google Scholar, not OAI-PMH harvesters – a point Shearer conceded when I interviewed her in 2014.

Of course, if running an IR becomes less about providing open access and more about enabling internal processes, or linking to papers hosted elsewhere, interoperability begins to seem unnecessary.

4.     Do IR advocates now accept that there is a need to re-think the institutional repository, and is the IR movement about to experience a great leap forward as a result?

Most IR advocates do appear to agree that it is time to review the current status of the institutional repository, and to rethink its role and purpose. And it is the Confederation of Open Access Repositories (COAR) that is leading on this.

“The calls for a fundamental rethink of repositories is already being answered!” Tony Ross-Hellauer – scientific manager at OpenAIRE (a member of COAR) – commented on my blog. “See the ongoing work of the COAR next-generation repositories working group.”

Shearer, who is the executive director of COAR (and so presumably responsible for the working group), explains in her response that the group has set itself the task of identifying “the core functionalities for the next generation of repositories, as well as the architectures and technologies required to implement them.”

As a result, Shearer says, the IR community is “now well positioned to offer a viable alternative for an open and community led scholarly communication system.”

So all is well? Not everyone thinks so. As an anonymous commenter pointed out on my blog: “All this is not really offering a new way and more like reacting to the flow. Maybe that has to do with the kind of people working on it, the IR crowd is usually coming from the library field and their job is not to be inventive but to archive and keep stuff save.”

Archiving and keeping stuff safe are very worthy missions, but it is to for-profit publishers that people tend to turn when they are looking for inventive solutions, and we can see that legacy publishers are now keen to move into the IR space. This suggests that if the goal is to create a community-led scholarly communications system, COAR’s initiative could turn out to be a case of shutting the stable door after the horse has bolted.

5.     What is the most important task when seeking to engineer radical change in scholarly communication: articulating a vision, providing enabling technology, or getting community buy-in?

“Ultimately, what we are promoting is a conceptual model, not a technology,” says Shearer. “Technologies will and must change over time, including repository technologies. We are calling for the scholarly community to take back control of the knowledge production process via a distributed network based at scholarly institutions around the world.”

Shearer adds that the following vision underlies COAR’s work:

“To position distributed repositories as the foundation of a globally networked infrastructure for scholarly communication that is collectively managed by the scholarly community. The resulting global repository network should have the potential to help transform the scholarly communication system by emphasizing the benefits of collective, open and distributed management, open content, uniform behaviors, real-time dissemination, and collective innovation.”

As such, I take it that COAR is seeking to facilitate the first scenario I outlined. But were not the above objectives those of the attendees of the 1999 Santa Fe meeting? Yet seventeen years later we are still waiting for them to be realised. Why might it be different this time around, especially now that legacy publishers are entering the market for IR services, and some universities seem minded to outsource the hosting of research papers to commercial organisations, rather than work with colleagues in the research community to create an interoperable network of distributed repositories?

What has also become apparent over the past 17 years is that open movements and initiatives focused on radical reform of scholarly communication tend to be long on impassioned calls, petitions and visions, short on collective action.

As NYU librarian April Hathcock put it when reporting on a Force11 Scholarly Commons Working Group she attended recently: “As several of my fellow librarian colleagues pointed out at the meeting, we tend to participate in conversations like this all the time and always with very similar results. The principles are fine, but to me, they’re nothing new or radical. They’re the same things we’ve been talking about for ages.”

Without doubt, articulating a vision is a good and necessary thing to do. But it can only take you so far. You also need enabling technology. And here we have learned that there is many a slip ’twixt the cup and the lip. OAI-PMH has not delivered on its promise, as even Herbert Van de Sompel, one of the architects of the protocol, appears to have concluded. (Although this tweet suggests that he too does not agree with the way I characterised the current state of the IR movement.)

Shearer is of course right to say that technologies have to change over time. However, choosing the wrong one can derail, or significantly slow down, progress towards the objective you are working for.

But even if you have articulated a clear and desirable vision, and you have put the right technology in place, in the generally chaotic and anarchic world of scholarly communication you can only hope to achieve your objectives if you get community buy-in. That is what the IR and self-archiving movements have surely demonstrated.

6.     To what extent are commercial organisations colonising the IR landscape?

In my introduction I said that commercial publishers are now actively seeking to colonise and control the repository (a strategy supported by their parallel activities aimed at co-opting gold open access). As such, I said, the challenge the IR community faces is now much greater than in 1999.

In her response, Shearer says that I mischaracterise the situation. “[T]here are numerous examples of not-for-profit aggregators including BASE, CORE, SemanticScholar, CiteSeerX, OpenAIRE, LA Referencia and SHARE (I could go on),” she said. “These services index and provide access to a large set of articles, while also, in some cases, keeping a copy of the content.”

In fact, I did discuss non-profit services like BASE and OpenAIRE, as well as PubMed Central, HAL and SciELO. In doing so I pointed out that a high percentage of the large set of articles that Shearer refers to are not actually full-text documents, but metadata records. And of the full-text documents that are deposited, many are locked behind login walls. In the case of BASE, therefore, only around 60% of the records it indexes provide access to the full-text.

In addition, many records consist of non-peer-reviewed content such as blog posts. That’s fine in itself, but it is not the target content that OA advocates say they want to see made open access. Indeed, in some cases a record may consist of no more than a link to a link (e.g. see the first item listed here).

So the claims that these services make about indexing and providing access to a large set of articles need to be taken with a pinch of salt.

It is also important to note that publishers are at a significant advantage here, since they host and control access to the full-text of everything they publish. Moreover, they can provide access to the version of record (VoR) of articles. This is invariably the version that researchers want to read.

It also means that publishers can offer access both to OA papers and to paywalled papers, all through the same interface. And since they have the necessary funds to perfect the technology, publishers can offer more and better functionality, and a more user-friendly interface. For this reason, I suggested, they will soon be charging (and indeed some already are) for services that index open content, as I assume Elsevier plans to do with the DataSearch service it is developing. This seems to me to be a new form of enclosure of the commons.

Shearer also took me to task for attaching too much significance to the partnership between Elsevier and the University of Florida – in which the University has agreed to outsource access to papers indexed in its repository to Elsevier. I suggested that by signing up to deals like this, universities will allow commercial publishers to increasingly control and marginalise IRs. This is an exaggeration, says Shearer: “[O]ne repository does not make a trend.”

I agree that one swallow does not a summer make. However, summer does eventually arrive, and I anticipate that the agreement with the University of Florida will prove the first swallow of a hot summer. Other swallows will surely follow.

Consider, for instance, that the University of Florida has also signed a Letter of Agreement with CHORUS in a pilot initiative intended to scale up the Elsevier project “to a multilateral, industry effort.”

In addition to Elsevier, publishers involved in the pilot include the American Chemical Society, the American Physical Society, The Rockefeller University Press and Wiley. Other publishers will surely follow.

And just last week it was announced that Qatar University Library has signed a deal with Elsevier that apes the one signed by the University of Florida. I think we can see a trend in the making here.

As things stand, therefore, it is not clear to me how initiatives like COAR and SHARE can hope to match the collective power of legacy publishers working through CHORUS.

Let’s recall that OA advocates long argued that legacy publishers would never be able to replicate in an OA environment the dominance they have long enjoyed in the subscription world. As a result, it was said, as open access commodifies the services they provide, publishers will experience a downward pressure on prices. In response, they will either have to downsize their operations, or get out of the publishing business altogether. Today we can see that legacy publishers are not only prospering in the OA environment, but getting ever richer as their profits rise – all at the expense of the taxpayer.

But let me be clear: while I fear that legacy publishers are going to co-opt both OA and IRs, I would much prefer they did not. Far better that the research community – with the help of non-profit concerns – succeeded in developing COAR’s “viable alternative for an open and community led scholarly communication system.”

So I applaud COAR’s initiative and absolutely sign up to its vision. My doubts are that, as things stand, that vision is unlikely to be realised. For it to happen I believe more dramatic changes would be needed than the OA and IR movements appear to assume, or are working towards.

7.     Will the IR movement, as with all such attempts by the research community to take back control of scholarly communication, inevitably fall victim to a collective action dilemma?

Let me here quote Van de Sompel, one of the key architects of OAI-PMH. Van de Sompel, I would add, has subsequently worked on OAI-ORE (which Lynch mentions in the Q&A) and on ResourceSync (which Shearer mentions in her critique).

In a retrospective on repository interoperability efforts published last year Van de Sompel concluded, “Over the years, we have learned that no one is ‘King of Scholarly Communication’ and that no progress regarding interoperability can be accomplished without active involvement and buy-in from the stakeholder communities. However, it is a significant challenge to determine what exactly the stakeholder communities are, and who can act as their representatives, when the target environment is as broad as all nodes involved in web-based scholarship. To put this differently, it is hard to know how to exactly start an effort to work towards increased interoperability.”

The larger problem here, of course, is the difficulties inherent in trying to get the research community to co-operate.

This is the problem that afflicts all attempts by the research community to, in Shearer’s words, “take back control of the knowledge production process.” What inevitably happens is that they bump up against what John Wenzler, Dean of Libraries at California State University, has described as a “collective action dilemma”.

But what is the solution? Wenzler suggests the research community should focus on trying to control the costs of scholarly communication. Possible ways of doing this, he says, could include requiring pricing transparency and lobbying for government intervention and regulation: “[T]he government can try to limit a natural monopoly’s ability to exploit its customers by regulating its prices instead.”

He concedes however: “Currently, the dominant political ideology in Western capitalist countries, especially in the United States, is hostile to regulation, and it would be difficult to convince politicians to impose prices on an industry that hasn’t been regulated in the past.”

He adds: “Moreover, even if some kind of International Publishing Committee were created to establish price rates, there is a chance that regulators would be captured by publisher interests.”

It is worth recalling that while OA advocates have successfully persuaded many governments to introduce open access/public access policies, this has not put control of the knowledge production process back into the hands of the research community, or reduced prices. Quite the reverse: it is (ironically) increasing the power and dominance of legacy publishers.  

In short, as things stand if you want to make a lot of money from the taxpayer you could do no better than become a scholarly publisher!

I don’t like being the eternal pessimist. I am convinced there must be a way of achieving the objectives of the open access and IR movements, and I believe it would be a good thing for that to happen. Before it can, however, these movements really need to acknowledge the degree to which their objectives are being undermined and waylaid by publishers. And rather than just repeating the same old mantras, and recycling the same visions, they need to come up with new and more compelling strategies for achieving their objectives. I don’t claim to know what the answer is, but I do know that time is not on the side of the research community here.

Thursday, September 22, 2016

Q&A with CNI’s Clifford Lynch: Time to re-think the institutional repository?

(A print version of this interview is available here)

Seventeen years ago 25 people gathered in Santa Fe, New Mexico, to discuss ways in which the growing number of e-print servers and digital repositories could be made interoperable. 

As scholarly archives and repositories had begun to proliferate a number of issues had arisen. There was a concern, for instance, that archives would needlessly replicate each other’s content, and that users would have to learn multiple interfaces in order to use them. 
It was therefore felt there was a need to develop tools and protocols that would allow repositories to copy content from each other, and to work in concert on a distributed basis.
 

With this aim in mind those attending the New Mexico event – dubbed the Santa Fe Convention for the Open Archives Initiative (OAI) – agreed to create the (somewhat wordy) Open Archives Initiative Protocol for Metadata Harvesting, or OAI-PMH for short.

Key to the OAI-PMH approach was the notion that data providers – the individual archives – would be given easy-to-implement mechanisms for making information about what they held in their archives externally available. This external availability would then enable third-party service providers to build higher levels of functionality by using the metadata harvesting protocol.
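To make that division of labour concrete, here is a minimal sketch of a service provider’s harvesting loop, written in Python using only the standard library. The repository base URL is hypothetical, but the verb, parameters and XML namespaces are those defined by the protocol itself.

```python
# A minimal OAI-PMH harvesting sketch (the endpoint is hypothetical; any
# compliant repository exposes the same verbs and the mandatory Dublin
# Core metadata format at its own base URL).
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"   # protocol namespace
DC = "{http://purl.org/dc/elements/1.1/}"        # Dublin Core namespace

BASE_URL = "https://repository.example.edu/oai"  # hypothetical endpoint


def harvest(base_url):
    """Yield (identifier, title) pairs for every record in the repository."""
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    while True:
        url = base_url + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as response:
            root = ET.fromstring(response.read())
        for record in root.iter(OAI + "record"):
            header = record.find(OAI + "header")
            if header.get("status") == "deleted":  # tombstone, no metadata
                continue
            title = record.find(".//" + DC + "title")
            yield (header.findtext(OAI + "identifier"),
                   title.text if title is not None else "")
        # Large result sets are paged: keep following the resumptionToken
        # until the repository stops issuing one.
        token = root.find(".//" + OAI + "resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}


for oai_id, title in harvest(BASE_URL):
    print(oai_id, title)
```

The deliberate simplicity was the point: a data provider only had to answer six such HTTP requests (Identify, ListRecords, GetRecord and so on) for any third-party service to be able to aggregate its metadata.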

The repository model that the organisers of the Santa Fe meeting had very much in mind was the physics preprint server arXiv. This had been created in 1991 by physicist Paul Ginsparg, who was one of the attendees of the New Mexico meeting. As a result, the early focus of the initiative was on increasing the speed with which research papers were shared, and it was therefore assumed that the emphasis would be on archiving papers that had yet to be published (i.e. preprints).
 

However, amongst the Santa Fe attendees were a number of open access advocates. They saw OAI-PMH as a way of aggregating content hosted in local – rather than central – archives. And they envisaged that the archived content would be papers that had already been published, rather than preprints. These local archives later came to be known as institutional repositories, or IRs.

In other words, the OA advocates present were committed to the concept of author self-archiving (aka green open access). The objective for them was to encourage universities to create their own repositories and then instruct their researchers to deposit in them copies of all the papers they published in subscription journals. 

As these repositories would be on the open internet, outside any paywall, the papers would be freely available to all. And the expectation was that OAI-PMH would allow the content from all these local repositories to be aggregated into a single searchable virtual archive of (eventually) all published research.

Given these different perspectives there was inevitably some tension around the OAI from the beginning. And as the open access movement took off, and IRs proliferated, a number of other groups emerged, each with their own ideas about what the role and target content of institutional repositories should be. The resulting confusion continues to plague the IR landscape.

Moreover, today we can see that the interoperability promised by OAI-PMH has not really materialised, few third-party service providers have emerged, and content duplication has not been avoided. And to the exasperation of green OA advocates, author self-archiving has remained a minority sport, with researchers reluctant to take on the task of depositing their papers in their institutional repository. Given this, some believe the IR now faces an existential threat. 

In light of the challenging, volatile, but inherently interesting situation that IRs now find themselves in, I decided recently to contact a few of the Santa Fe attendees and put some questions to them. My first two approaches were unsuccessful, but I struck lucky the third time when Clifford Lynch, director of the Washington-based Coalition for Networked Information (CNI), agreed to answer my questions.

I am publishing the resultant Q&A today. This can be accessed in the pdf file here.

As is my custom, I have prefaced the interview with a long introduction. However, those who only wish to read the Q&A need simply click on the link at the head of the file and go directly to it.