Developments on the internet are enabling new approaches to access and peer review – and are even changing the way our brains work. Sian Harris reports from the STM Frankfurt meeting
‘We are still thinking of this internet revolution in terms of a 1998/99 Bill Gates vision,’ stated Frank Schirrmacher, publisher of the German newspaper Frankfurter Allgemeine Zeitung and keynote speaker at the STM Frankfurt conference in October.
Schirrmacher believes that the internet has been transformed by the use of algorithms. ‘With algorithms it’s possible to get meaning out of everything,’ he said, citing Google News as an example of a very powerful tool. It is often used as a source by newsrooms, yet across Europe it is staffed by only three people; everything else is done by algorithms, he said. And this approach can introduce bias. For example, Google executive chairman Eric Schmidt recently joked about one potential bias: one of the Google developers on the product loves cricket – and Schirrmacher commented that coverage of the sport in Germany far surpasses the national level of interest in it.
Of course, hearing about a sport you have no particular interest in is nothing more than slightly annoying but there is a potentially more sinister side to it. ‘How information distributes itself today depends on algorithms,’ he said. ‘It is possible that a major search engine, by switching a tiny bit of code, could make certain products or even whole companies almost impossible to find.’
There is another threat from new technology too: it’s changing our brains. Schirrmacher likened the challenge to the early days of the industrial revolution when people who had been engaged in manual labour found themselves instead operating machines all day; after a while the workers began to suffer from fatigue because their muscles were not getting exercise.
Similarly, studies have revealed that digital communication is changing our brains, with a bigger focus on the short-term reward part of our brains and a weakening of the long-term reward part. ‘Digital devices are changing deep brain processes very deeply. Forgetfulness seems to be the new fatigue of our age,’ he observed.
In the days of the industrial revolution, the new phenomenon of fatigue was tackled by introducing sport in schools and opening gymnasiums to encourage exercise. ‘The best way to compensate for the challenges of the digital age,’ Schirrmacher concluded, ‘will be deep reading and writing things by hand with pen and paper.’
Opening access sustainably
Part of the picture of being able to do deep reading is access to content, so it was little surprise that the topic of open access (OA) was again up for discussion at the STM meeting. IOP Publishing’s managing director Steven Hall likened traditional publishers’ relationships with OA to the stages of grief. He noted that most have moved on from the denial stage, although some are still at the anger stage, and that the next stage was bargaining, to try to buy some time:
‘We as an industry got this very wrong by facilitating green OA while resisting gold,’ he argued. ‘We chose not being paid for our work over being paid, and said that our role was simply doing things like typesetting, rather than managing the peer-review process etc.’
He did not think, though, that publishers need to fall into a depression stage, fearing the ‘end of publishing as we know it’.
‘I believe if we engage effectively with OA we’ll still have food,’ he said. ‘We must engage with funding agencies that are pushing for OA otherwise they’ll force their own agenda. We can manage the transition from library pays to funder pays but it will require close engagement and we need to articulate clearly the services we provide.’
Part of this involves working with the repository model but having clearly delineated policies for different versions. This might mean, for example, allowing the author’s original version to be posted with no embargo but with no journal name attached. Publishers also need to work to assure libraries and researchers that they are not double dipping – charging subscription fees at the same level even if a significant proportion of the articles in a journal become OA.
Daniel Hulls, director of Cambridge Economic Policy Associates, spoke about a recent study by Research Information Network (RIN) in the UK into OA, which, he said, revealed a pretty clear favoured option: gold OA.
‘It’s the only sustainable OA model,’ he said, although he cautioned that the study was done for the UK and the economic factors involved in OA depend considerably on whether a country is a net exporter or importer of scholarly papers.
And there are further potential complications too, as Michael Jubb, director of RIN, revealed. ‘The attractiveness of gold depends critically on price,’ he observed.
In addition, Jubb continued, ‘research funders have created a system where higher-education funding doesn’t really flow into OA but into infrastructure, including libraries, and into direct costs of research projects. At the moment researchers are squirreling away money from research grants for OA publishing but that’s not infinitely scalable.’
Another complication he noted was that the distinction between green and gold OA is not as clear as it might seem, pointing out the relationship between the Wellcome Trust, which provides funding in the UK, and the UK PubMed Central repository. ‘Publishers need to be aware of funders’ motivations as well as those of researchers,’ he noted.
Next year, the results of the Europe-wide PEER project (Publishing and the Ecology of European Research) will be released. The project has been looking at the effects on publishers and researchers of wide-scale deposit of scholarly materials in repositories – as well as tackling green OA challenges such as non-uniformity of publisher outputs and varying requirements between repositories.
One intriguing observation – and one that could disappoint advocates of green OA – is the extremely low rate of author deposits. Of the approximately 53,000 articles deposited in the participating repositories up to October 2011, only 170 were deposited by the authors themselves; the rest were deposited by the participating publishers.
‘Maybe [the low take-up] was just because it was a project but authors did receive specific invitations to deposit and still didn’t do so,’ observed Chris Armbruster, research manager of the PEER Project.
From PEER to peer review
Another often-debated topic for scholarly publishers is that of peer review. As Adrian Mulligan, deputy director, research and academic relations at Elsevier pointed out, peer review is a large-scale operation: ‘In 2009 there were 1.4 million research articles published in peer-reviewed journals – that’s around one every 22 seconds – and every review takes two to four hours.’
Elsevier has been working with Sense about Science to survey researchers about their attitudes to peer review. The partners found that 69 per cent are very satisfied or satisfied with it.
The study also found that the approach to peer review that people perceive as most effective is double-blind – where neither the authors nor the reviewers know each other’s identities.
In contrast, it found that the researchers surveyed are less likely to submit papers to journals with open review systems and are also less likely to agree to review papers for those journals.
Such observations have been backed up by other studies. For example, a randomised experiment by the BMJ Group found that 55 per cent of reviewers declined to review if their report would be published with the article and that papers took longer to review if the reviews were intended to be published, said Mulligan.
And the 2006 Nature Publishing Group (NPG) study into open peer review attracted a high level of general interest but a low uptake of authors agreeing to participate and very few technical comments.
Post-publication peer review has also been investigated in various studies, for example in a 2010 study by NPG. ‘Post-publication commenting is mostly content-free,’ observed Karl Ziemelis, chief physical sciences editor of Nature, at the meeting.
Interestingly, however, a recent article in the Guardian newspaper that was highly critical of the peer-review process and of scholarly publishers – illustrated with an example of a scholarly paper that the article’s author considered inadequate – attracted a large number of detailed reader comments.
Many of the comments were about the peer-review process but many were also about the merits or otherwise of the scientific methods employed in the cited study – the kind of discussions that scientific publishers hope for on journal articles.
Despite concerns about open peer review, there are some major publications that are finding success with alternative models. The submission guidelines of the journal PLoS ONE, for example, state: ‘PLoS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership.’
As Mark Patterson, director of publishing at PLoS explained, ‘Editors ask if the science is rigorous, ethical, properly reported and the conclusions are backed by data but they don’t ask if it’s important. The journal has seen very steady growth and has a lot of content and citations and this model has been emulated by other publishers.’
One of the major criticisms of the peer-review process from researchers is the length of time it takes. So what of some of the approaches to help speed this up?
One big project to tackle this is the Neuroscience Peer Review Consortium. Its aim is to streamline the peer-review process by having journals work together and share review reports, saving the same paper from being reviewed several times by different journals, possibly by the same reviewer. Despite the potential to save reviewers’ time and publish papers more quickly, fewer than five per cent of manuscripts are forwarded to another consortium member, according to Mulligan of Elsevier.
Karl Ziemelis of Nature noted that sharing reports across journals, and even across publishers, happens both formally and informally. However, he said, ‘many authors prefer papers to be re-reviewed even though those that are shared are done faster and often have higher acceptance rates.’
This comment reveals some of the tensions beneath many of the plans to transform access and the publishing process more generally. Although scholarly publishers face many pressures to evolve, in many cases researchers are far more conservative about changes than the publishers themselves.