Cause and effect

Chris Graf asks: what happens to research quality when we change the peer review and research publishing model?

Complexity, change and co-existence now characterise even the most traditional and conservative things in research publishing, like peer review. This article raises questions about how researchers and publishers, working in the right sorts of collaboration, can maintain essential aspects of quality – namely integrity and ethics – and how they can find new kinds of value, together. 

Peer review, as part of the peer-reviewed publishing process, is how we manage two essential aspects of quality in research and research publishing: integrity and ethics. By integrity, we mean the reliability, reproducibility, trustworthiness and usefulness of published research. By ethics, we mean the regulated ethical requirements for doing research (human and animal research in particular), as well as equally important community-led obligations (like authorship practices), and how these are reflected and reported in published research.

How are we doing with all that, you might ask? 

Often, people look to retractions as a marker for quality in research integrity and publishing ethics. Retractions, for the uninitiated, are formal withdrawals of research articles, published when something is significantly wrong with the integrity or the ethics of a piece of research. Retractions can be for honest errors, for research misconduct, or for something in between. Jeffrey Brainard published an analysis of records from the world’s largest retraction database, titled ‘Rethinking retractions’, in the journal Science.

While Brainard reports that the number of retractions grew tenfold between 2000 and 2014, he also reminds us that the total number is actually low (maybe four in every 10,000 articles published) and that the number of articles published is also growing (doubling over a similar period). About 40 per cent of the retractions Brainard studied reported honest errors, problems with reproducibility, and other issues. The remainder were for questionable research practices – our ‘something in between’ – or for misconduct. Brainard quotes Nick Steneck (University of Michigan in Ann Arbor): ‘Retractions have increased because editorial practices are improving, and journals are trying to encourage editors to take retractions seriously.’

Retractions are, as Brainard, Steneck and many of us would argue, a sign of ‘quality’ in the research publishing process. They’re published when research publishers, practising their ‘trade’ of peer-reviewed publishing, curate (per their promise to the world) the research they publish to ensure it is as reliable as it can be. Retractions give us a sign that publishers are working with researchers when problems arise, whether with integrity or with ethics (or with both), to address those problems in a robust and increasingly transparent way.

But our traditional peer-reviewed publishing process is evolving, fast; completely new peer review models are emerging.

‘Author-mediated peer review’ is one quite profound evolution, akin to the post-publication peer review that has been discussed for many years. Wellcome Open Research is a research publishing platform maintained by the Wellcome Trust. Authors submit their work to it and, after rapid quality checks and screening (including for our essential qualities of integrity and ethics), their research is published immediately.

After publication the author is incentivised to get their work peer reviewed (for example, only work that is peer reviewed is then indexed in PubMed Central and Europe PubMed Central). They pick and invite the reviewers. If the author fails to get the work peer reviewed (and positively peer reviewed), then it likely sits on the platform (and probably never gets read). Further, the authors choose whether or not they address any points raised by the peer reviewers. Authors are totally in charge.

This is a profound evolution: publication, and then possible peer review. By doing this Wellcome Open Research (and others adopting this approach, like F1000 Research) have re-imagined the traditional processes we’re used to relying on to govern quality. 

Community-mediated peer review takes things one step further into new territory. Right now, researchers can post their manuscripts to ‘preprint servers’, where they can (almost immediately) create a permanent published, but not peer reviewed, record of their work. They might then choose to submit their work to a traditional journal for peer-reviewed publication.

A preprint server called arXiv is the world’s most established; it has been publishing preprints since 1991 in physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics. Many other preprint servers are emerging, often designed to serve particular research disciplines or sub-disciplines.

A good example is bioRxiv, the preprint server for biology. In general, preprint servers are seeing ‘hockey stick-like’ growth in their use (albeit in relatively small absolute numbers compared with the roughly 200,000 peer-reviewed journal articles published every month in traditional journals). Reputable preprint servers do check preprints before publishing them (bioRxiv says ‘all articles undergo a basic screening process for offensive and/or non-scientific content and for material that might pose a health or biosecurity risk and are checked for plagiarism’). They don’t do peer review. But they may enable the communities of researchers that use them to decide, themselves, to assemble, peer review and comment on preprints – and thus, once again after publication, to take care of quality, including integrity and ethics.

Publisher- and editor-mediated peer review, the traditional model that most journals use and that governs quality for most research articles, is not immune to change. Look at the scale that some new journals are achieving (let’s take Nature Communications and PLOS One as examples of general journals publishing many thousands of articles per year, and Ecology and Evolution and Cancer Medicine as examples of specialist journals publishing many hundreds of articles per year). Each of these journals has achieved new kinds of scale, measured by the number of articles it peer reviews and publishes. And each has updated its editorial team and processes to handle that kind of scale. But each still uses a pretty traditional model for peer review, and governs integrity and ethics in the ways that we’re used to.

So, with evolving and completely new peer review models, we do need to ask ourselves: who looks after ethics, integrity – the most essential aspects of quality – now? And are we happy with how they’re doing it?

We’ll find out. 

Chris Graf is director for research integrity and publishing ethics at Wiley, and is co-chair of COPE, the Committee on Publication Ethics

This article is also published on The Wiley Network and in the journal Information Services & Use