What's next for peer review?

Six industry figures give their take on recent developments in the world of peer review, and some predictions for the years ahead

Rebecca Lawrence of F1000 asks: who is peer review for?

The pressure on authors to seek publication in high-impact-factor journals leads some to behave badly, as exemplified by the increasing number of peer review fraud cases and reviewer rings exposed recently.

Equally, the pressure to be first out with a major finding causes some referees to be overly critical or to hold papers up, as they are often natural competitors of the authors. Some referees may also feel the need to justify to the editor that they did a good job, leading to excessive requests for additional data and experiments.

In a world where anonymous pre-publication peer review is the norm, there are minimal consequences for a referee’s bad or inappropriate behaviour – and no credit or recognition for good behaviour.

This is why the movement towards open peer review (i.e. the referee is named and the review is published) is so important: it means the referee has to stand by what they say. Studies have found, as we do on F1000Research, that open reviews are of comparable if not better quality, and often more constructive.

The referee can also get credit for the important contribution they have made to that article, especially when the reviews are made citable and included in ORCID profiles. I find it disappointing that so many journals experimenting with open review are doing so half-heartedly. Optional open peer review is little better than closed review, as referees generally only opt to name themselves when they are positive.

However, this still misses the point: who is peer review for? Its real purpose should be to help authors improve the presentation and interpretation of their work, maximising the potential for future research to build on it, and to assure the scientific community, policy makers and the public of the quality of the scientific findings. Peer review should be conducted in the open, once the findings have first been made public. It should be driven by the authors (but mediated by the publisher), who identify suitable referees from an agreed list of experts and decide openly how and when to address criticisms.

Fully open peer review has other benefits; we increasingly hear researchers say that when they find an article on F1000Research, one of the first things they do after looking at the title and abstract is to see who the referees are and what they said, before deciding whether to read the article.

Kaveh Bazargan of River Valley Technologies sees a trend towards integration between different systems

Peer review systems have long been based on authors submitting files (e.g. Word, PDF) through an online portal. The basic model has not changed much over the years, and there have been complaints that interacting with these systems is time-consuming.

It is useful to sum up what different stakeholders are looking for in the ideal submission and peer review system. Let us start with the publishers, who want:

  • Cheaper platforms, with costs less dependent on the number of submissions, and a lower entry cost for new journals;
  • Simple-to-use, intuitive interfaces built on modern web technologies (why should staff need training courses to use a peer review system?); and
  • A single installation for all journals, allowing them, for example, to share reviewer databases, create global analytics, and move articles from one journal to another with a single click.

Peer reviewers want easy commenting in situ – not having to refer to line numbers in a PDF file, or even to annotate a PDF – and easy access to all manuscripts assigned to them for review. At the same time, authors want a quick, reliable and user-friendly interface, and the ability to submit to an alternative publisher if the first rejects the paper.

In the past year there have been significant developments by mainstream system providers, such as a move to accepting submissions directly from online authoring systems, automatically importing metadata. Some new open access publishers have opted to use their own home-built systems, with very good feedback from authors and reviewers. The trend is towards more integration between disparate systems used in the publishing process.
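
As a minimal sketch of what such a hand-off might look like – every endpoint and field name below is invented for illustration rather than taken from any particular vendor’s API – the point is that metadata travels as structured data instead of being re-keyed from an uploaded file:

```python
import json
import urllib.request

# Hypothetical hand-off from an online authoring platform to a submission
# system. The endpoint, URL and field names are invented for illustration.
submission = {
    "title": "An example manuscript",
    "authors": [{"name": "A. Author",
                 "orcid": "0000-0002-1825-0097"}],  # ORCID's docs example iD
    "abstract": "...",
    "source": "https://authoring.example.org/projects/1234/export",
}

request = urllib.request.Request(
    "https://submissions.example.org/api/manuscripts",  # invented endpoint
    data=json.dumps(submission).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request)  # would transmit the submission if real
```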

What I see as likely developments in 2016 are:

  • Less dependence on authors uploading attachments, and more integration of submission systems with online authoring platforms;
  • Further integration of the process ‘downstream’, allowing the peer reviewer to be just another user with a ‘role’ in the system. This integration will in principle allow post-publication as well as pre-publication peer review, or even both in the same workflow; and
  • Integration of reviewer comments into the XML of the publication, allowing those comments to be made visible or hidden as the publisher decides (see the sketch after this list).
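
A minimal sketch of that last idea, assuming a JATS-like article in which the <reviewer-comment> element and its visibility attribute are invented for illustration, not taken from any standard:

```python
import xml.etree.ElementTree as ET

# A JATS-like fragment. The <reviewer-comment> element and its "visibility"
# attribute are invented for illustration; they are not part of any standard.
ARTICLE_XML = """
<article>
  <body><p>Findings...</p></body>
  <reviewer-comment reviewer="R1" visibility="public">
    <p>The methods section would benefit from reported sample sizes.</p>
  </reviewer-comment>
  <reviewer-comment reviewer="R2" visibility="hidden">
    <p>A confidential note to the editor.</p>
  </reviewer-comment>
</article>
"""

def strip_hidden_comments(xml_text: str) -> str:
    """Drop reviewer comments the publisher has chosen not to expose,
    leaving the public ones embedded in the delivered XML."""
    root = ET.fromstring(xml_text)
    for comment in root.findall("reviewer-comment"):
        if comment.get("visibility") != "public":
            root.remove(comment)
    return ET.tostring(root, encoding="unicode")

print(strip_hidden_comments(ARTICLE_XML))
```

In such a scheme, the publisher flips a single attribute to publish or withhold a report, and the same XML drives both open and closed presentation of the article.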

With more and more systems going into the ‘cloud’, it is now quite feasible to have a fully integrated end-to-end publication system, including full peer review, with the final deliverables, such as PDF files, generated automatically on a server.

Agnès Henri, publishing director at EDP Sciences, says peer review is under scrutiny but unavoidable

Peer review is the most valuable step for a journal, but it has never been under so much scrutiny, with a number of recent cases of misconduct leading several institutions to publish rules and to reaffirm the importance of ethics. Referees are over-solicited and it is common to ask up to seven referees to obtain just one report for a paper, leading some journals/publishers to be less vigilant.

Although imperfect, peer review remains the only option to evaluate and filter science.

EDP Sciences has always taken a close interest in peer review. Aware of the changing and challenging environment around it, we held a workshop in Paris in 2015. The purpose was to consider whether peer review has adapted to present-day demands and constraints, and to make recommendations to our partners about new practices and improvements that could be made to our procedures.

The meeting involved a committee of experts linked to the European Physical Journal. The workshop firstly confirmed the value of in-depth, quality peer review as an essential part of maintaining the highest possible scientific standards. The huge improvements in availability of online information only serve to emphasise further the need for this vital filtration service, managed by publishers. The quality of refereeing is seen as an important factor in attracting authors to publish in a journal, and it also motivates authors to write good quality papers.

How can we give referees more recognition?

  • By publishing significant reviewer reports alongside articles; this would promote transparency of the process and give referees more recognition for their work;
  • By using the concept of portable peer review, whereby previous referee reports are shared when an article is rejected and submitted to another journal. This could represent a significant improvement in service to the research community, shortening processing times and relieving workload pressures on referees. It should be done not only for journals from a single publisher, but with the collaboration of all publishers; and
  • By extending ORCID integration to eliminate false or fabricated referees and to integrate journal-verified reports into a researcher’s personal record (a sketch of such a check follows this list).
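
Part of such a check can already be automated: ORCID iDs carry an ISO 7064 MOD 11-2 check digit, and ORCID’s public API confirms whether an iD resolves to a real record. A minimal sketch follows; the screening policy around it is invented, and matching the record’s name and affiliation to the claimed referee would remain a human step:

```python
import urllib.error
import urllib.request

def orcid_checksum_ok(orcid_id: str) -> bool:
    """Validate the ISO 7064 MOD 11-2 check digit of an ORCID iD,
    e.g. '0000-0002-1825-0097'."""
    digits = orcid_id.replace("-", "")
    if len(digits) != 16 or not digits[:-1].isdigit():
        return False
    total = 0
    for ch in digits[:-1]:
        total = (total + int(ch)) * 2
    check = (12 - total % 11) % 11
    return digits[-1].upper() == ("X" if check == 10 else str(check))

def orcid_record_exists(orcid_id: str) -> bool:
    """Ask ORCID's public API whether the iD resolves to a real record."""
    request = urllib.request.Request(
        f"https://pub.orcid.org/v3.0/{orcid_id}/record",
        headers={"Accept": "application/json"},
    )
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError:  # e.g. 404 for a non-existent iD
        return False

# Screen a referee-supplied iD before sending an invitation. This value is
# ORCID's documentation example iD, not a suggested referee.
candidate = "0000-0002-1825-0097"
print(orcid_checksum_ok(candidate) and orcid_record_exists(candidate))
```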

The quality check provided by the peer review process is essential and our reviewers are invaluable. Peer review is also a quality label that today is indispensable for green open access archives to exist.

Daniel Johnson of Publons says there is some cause for optimism over peer review

Musings on the future of peer review are usually pessimistic: we extrapolate the trends of increasing article submissions, increasing researcher workloads and rising cases of peer review fraud, and prophesy more doom and gloom.

The past year permits a bit of optimism, because 2015 was the year that credit for peer review came to the fore. This trend is important because many of peer review’s biggest issues stem from the complete lack of incentives for reviewers to put aside their own research to offer comprehensive, prompt peer review of the work of others. Give reviewers a stronger reason to help out, and they’ll do a better job.

In 2015 almost all the major publishers started experimenting with providing recognition for peer review. Wiley, Sage, Cambridge University Press, the Royal Society, MDPI, eLife, the American Society for Microbiology, Emerald, and others have all launched pilots with Publons over the last year to reward and engage their reviewers, while Elsevier has also launched a reviewer profile product.

ORCID too announced support for peer review records (and a partnership with Publons) in October.

There was a similar explosion in the number of researchers getting credit for their peer review efforts over the past year. Surveys of Publons users show reviewers value being able to keep track of the manuscripts they have reviewed, and value having this verified record of reviewing to include in promotion applications. Related to the latter point, this past year also saw the University of Queensland become the first university to include verified peer review contributions in its research output management system.

The optimism that all of this activity is improving peer review is now gathering increasing support from the data.  Results from Publons’ pilots show participating journals experience an increase in review invitation acceptance rates and a decrease in review turnaround time. Giving credit for peer review has a strong, positive effect on the process.

The year ahead will undoubtedly see new cases of peer review scandals, new editorials calling for the death of peer review, and new frustrations with slow and inconsistent peer review. But it will also see a rapidly growing contingent of peer reviewers gaining recognition for prompt and comprehensive review.

Janne-Tuomas Seppänen of Peerage of Science sees change on the horizon

If I were asked to describe the current state of affairs in the peer review system, the answer would be easy: the same. Academic publishing does not go anywhere fast.

But change is afoot. When I first began to talk about new ideas for peer review, four short years ago, a common reply was: ‘you are trying to fix a problem that does not exist’. In 2015, an entire session at the ALPSP conference was devoted to the diversity of new approaches to peer review. My conversations are now about solutions rather than debates about the existence of problems.

Going into 2016, there is no longer ‘the’ peer review system to analyse, but many. Granted, some new developments are just hyped-up promotion of minor tweaks to optional details; for example ‘transparent peer review’ at Nature Communications, or the ‘Onymous’ option in Peerage of Science. But there are more profound developments too.

One important development is the emergence of mechanisms that seek to challenge, in one way or another, whether a peer review is right, and mechanisms that put some pressure on the peer reviewer to excel, rather than merely perform a chore.

For example, at eLife a non-anonymous cross-reviewer consultation round is a standard part of the process, requiring consensus decisions. Frontiers sees authors, reviewers and editor engaged in collaborative peer review, seeking consensus decisions. F1000Research takes things further, replacing the entire concept of acceptance-after-peer-review with what I would call conditional indexing: reviewer verdicts are instantly posted online, with names, and ‘approved’ vs ‘not approved’ articles differ only in whether they are indexed and searchable.
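
As a toy model of that gate, here is a sketch in which the thresholds follow F1000Research’s published passing rule as I understand it – two ‘approved’ verdicts, or one ‘approved’ plus two ‘approved with reservations’ – so treat them as illustrative rather than authoritative:

```python
from collections import Counter

def indexed(verdicts: list[str]) -> bool:
    """Conditional indexing: verdicts are public either way; they only
    gate whether the article becomes indexed and searchable."""
    counts = Counter(verdicts)
    return (counts["approved"] >= 2
            or (counts["approved"] >= 1
                and counts["approved with reservations"] >= 2))

# The article is online and citable from day one; only its status changes.
print(indexed(["approved", "approved with reservations"]))  # False: not yet
print(indexed(["approved", "approved with reservations",
               "approved with reservations"]))              # True: indexed
```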

These new developments have received quite a lot of attention (and the inescapable ire of Jeffrey Beall and Kent Anderson). But it should be noted that the society-run journal Atmospheric Chemistry and Physics implemented very similar measures earlier, with less fanfare but with resounding success – it is the world’s premier publication in one of the most contested fields of science.

Another important development is growing agreement that reviewer recognition is needed. I agree. But on this issue the current state of affairs – tallying the number of contributions –  is, in a word, wrong.

Peer reviews are not created equal – some are brilliant pieces of science in their own right, but some are simply garbage. That’s a simple fact. If academic recognition is bestowed regardless, the result will be a worse peer review system.

John Hammersley, co-founder and CEO of Overleaf, wonders whether one individual service will ever cover the review process

Why do we review papers? In fact, why do we review anything?

In some cases the reason is obvious, especially when safety is involved – for example, when cabin crew members cross-check that the doors are sealed before every take-off, the review eliminates a single point of failure.

For most things, however, the reasons are less obvious, and the review often serves a different purpose. Leaving aside for the moment the question of how well the current review system for academic research achieves this, and accepting that there are far more perspectives than can be covered in a short article: the review process helps readers filter content, helps authors improve the communication of their research, and helps funding bodies decide where to award research grants.

This process includes the researchers who conduct the official peer reviews (whether pre- or post-publication), the supervisors and colleagues who help authors compose their research and set it in the context of other recent and historical work, and those who provide editorial and grammatical feedback – whether friends, colleagues, or the editors involved in the publication of the article.

These differing perspectives are one of the reasons why it is hard to find a single, magic-bullet solution to the challenges in scientific communication discussed in other articles.

This isn’t stopping a lot of new ideas and new technologies from taking on the challenge, including:

  • Content filtering and recommendation engines, such as Sparrho and Semantic Scholar;
  • Open, post-publication peer review platforms, such as F1000Research, ScienceOpen and The Winnower;
  • Independent peer review services, such as Peerage of Science, Rubriq and Axios Review (and related: Publons, which provides a credit mechanism for peer review); and
  • Discussion platforms and networks, such as ResearchGate, PubPeer and Academia.edu.

These new technologies are helping with individual elements of the review process, but there is not yet a service that successfully covers them all – and it’s not clear there ever could be one. What’s exciting is the increasing interoperability between these new technologies – for instance, how Publons allows reviewers to gain credit for peer reviews written on different platforms – and how they can be used together to help meet differing needs.