An ever-changing landscape

From artificial intelligence to transparency, peer review’s journey is full of twists and turns, reports Rebecca Pool

It’s been nearly two years since some 90 editors, publishers, funders and researchers gathered at the Howard Hughes Medical Institute to discuss innovations in peer review.

Organised by Wellcome, ASAPbio and HHMI, the meeting’s hot topics included transparency and recognition.

By the end of the event, the majority of participants had voted in favour of publishing all peer review content and author responses alongside final manuscripts, although not necessarily reviewer names.

Perceived benefits to the community included greater reviewer recognition, increased reviewer and editorial accountability, and the opportunity for students and early-career researchers to glean valuable knowledge of the peer review process.

Today, the topic of transparency in peer review is still burning brightly, with myriad publishers, from BMJ and Elsevier to EMBO Press and MDPI, committing to the process. Following a 2018 trial, editors Barbara Cheifet and Andrew Cosgrove of Springer Nature’s Genome Biology reported that transparent peer review hadn’t affected the speed or outcome of the review process, and the journal quickly adopted it. Just last year, PLOS announced that all of its journals were ‘open’ for published peer review.

But embracing transparency isn’t necessarily straightforward. Many publishers have faced practical difficulties in adopting such models, hindered by complex, long-established journal workflows.

Given this, Publons and ScholarOne, both part of the Web of Science Group and home to peer review and manuscript submission systems respectively, joined forces with US-based publisher Wiley to develop a scalable, open peer review workflow to ease this process. The results have been promising.

As Tiago Barros, product leader at Publons, recalled: ‘Wiley has thousands of journals and was struggling to offer transparent peer review at scale.

‘[The publisher] was mostly used to operating single- or double-blind peer review, with its systems tailored to this, so we worked with them to design a new workflow.’

The result was ‘Transparent Peer Review’, a Publons workflow connected to ScholarOne via an API, which was initially applied to Wiley’s Clinical Genetics. The journal had been carrying out transparent review for several years, but its processes were deemed labour-intensive for editors, with a low opt-in rate from authors.

However, as Barros pointed out: ‘We launched our system at Peer Review Week last year, and within the first 120 days we got an opt-in rate of more than 80 per cent of 120 submissions. In the same period prior to this, only three to four articles had opted in.’

Wiley was thrilled and has been rolling out the system to more journals ever since. Researchers can read reviews and scholarly discussion on Publons, which is accessible via a link from the published article. Reviewer reports, editor decision letters and author responses can be viewed and each element is assigned a digital object identifier (DOI) so readers can reference and cite content while reviewers receive recognition. The workflow also allows the reviewer to either sign reviews or remain anonymous.
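
To make the structure concrete, here is a minimal sketch, in Python, of a review history in which every element carries its own citable DOI, as the workflow described above does. The class names, fields and DOIs are all hypothetical illustrations, not the actual Publons or Wiley data model.

```python
# Minimal sketch of a transparent peer review record. All names and DOIs
# here are hypothetical; this is not the Publons or Wiley schema.
from dataclasses import dataclass, field


@dataclass
class ReviewElement:
    kind: str                     # "reviewer_report", "decision_letter" or "author_response"
    doi: str                      # each element gets its own citable DOI
    signed_by: str | None = None  # reviewer name, or None to remain anonymous
    body: str = ""


@dataclass
class PeerReviewHistory:
    article_doi: str
    elements: list[ReviewElement] = field(default_factory=list)

    def citable_items(self) -> list[str]:
        """DOIs that readers can reference and cite."""
        return [e.doi for e in self.elements]


history = PeerReviewHistory(
    article_doi="10.1234/example.article",  # hypothetical DOI
    elements=[
        ReviewElement("reviewer_report", "10.1234/example.review.1"),
        ReviewElement("decision_letter", "10.1234/example.decision.1"),
    ],
)
print(history.citable_items())
```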

‘Wiley has had no problems finding reviewers and, depending on the journal, we see that between a quarter and a third of reviewers choose to sign reports, so given the choice, the majority still remain anonymous,’ highlighted Barros. ‘But, as the number of journals with transparent review grows over time, and more and more reviewers become exposed to this option, then I believe this will definitely change.’

‘I think there was a concern amongst early-career researchers that negative comments on more senior researchers’ work could affect their careers,’ he added. ‘But experience has shown that these concerns are not warranted.’

Since Wiley, more publishers have adopted the Publons workflow, including Emerald Publishing Group, IOP Publishing and The Royal Society. Barros, for one, is pleased with the diversity of publishers, as well as their reasons for turning to Transparent Peer Review.

‘For The Royal Society, this is about streamlining their existing process – its journal, Open Biology, is already open, but this makes the peer review process more efficient and scalable,’ he said. ‘Meanwhile, the IOPP wanted to give open peer review as an option to authors.’

Given the encouraging results so far, Barros is now looking forward to forming partnerships with more publishers and is hoping to develop systems further. Right now, reviews and scholarly discussion content are available in text only, but this could soon change.

‘We’d like to include content such as tables and figures to provide a richer experience for users and make sure the value of those contributions is not just reduced to plain text,’ said Barros. ‘Inline reviewing is another option that would allow reviewers to comment directly onto manuscripts instead of simply having text at the end of the manuscript.’

Taking a step back, Barros is confident that a transparent approach can also incentivise quality reviews and help to address industry issues such as predatory publishing. But as the scholarly publishing community heads further down the road to transparency, other industry developments are also set to make big differences to the ways in which peer review is performed.

An easier way?

In December 2018, open access digital publisher and proponent of collaborative peer review, Frontiers, integrated custom-developed artificial intelligence into its peer review platform to ease manuscript handling and streamline the peer review process. So-called AIRA – artificial intelligence review assistance – is designed to analyse the quality of submitted manuscripts and the review process, as well as to suggest actions and identify potential reviewers.

‘AIRA uses state-of-the-art algorithms to recognise patterns in manuscripts and also check conflicts of interest, reviewer expertise and patterns of behaviour,’ said Robyn Mugridge, publishing partnerships manager at Frontiers. ‘It will also check for, say, plagiarism and image manipulation, and while each manuscript gets a human check following AIRA, using this means editors aren’t wasting time on manuscripts that aren’t up to our scientific standards.

‘People worry about being replaced but that is not what AIRA does, as it is designed to support them through better decision-making,’ she added.

Frontiers isn’t alone in its foray into the world of AI. More and more AI tools are being used by academic publishers to select reviewers and summarise manuscript findings, thereby saving editors, reviewers and researchers time.

For example, the Reviewer Finder API, from Denmark-based UNSILO, was recently adopted by open access publisher PeerJ. Meanwhile, US-based Aries Systems uses so-called StatReviewer in its peer review management system; the tool assesses the statistical integrity of a manuscript.

Like Mugridge, Barros is enthusiastic about the value of AI and sees more and more publishers turning to such tools. Publons is currently working with the Swiss National Science Foundation on using AI to assess peer review quality. And the organisation’s Publons Reviewer Connect also uses AI to cross-reference its peer review platform with Web of Science to recommend reviewers to editors.

As Barros said: ‘We have found that the biggest reason a reviewer rejects the opportunity to review is when the subject area is not relevant, but this really helps with matching a peer reviewer to a manuscript.’

Mugridge concurred and pointed out how Frontiers’ AIRA will scan manuscripts, identify key words and then scan databases that use information from the likes of Google Scholar and Scopus, to identify potential peer reviewers. Rates of reviewer decline have been low, and a recent survey of Frontiers’ editors indicated that a mighty 87 per cent thought AIRA was useful and enabled staff to make decisions more effectively.
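
As a rough illustration of the keyword-matching idea, the sketch below ranks reviewers against a manuscript by cosine similarity over simple word counts. Real systems such as AIRA or Publons Reviewer Connect are far more sophisticated; everything here, from the reviewer names to the scoring, is invented for illustration.

```python
# Toy reviewer matching by keyword overlap. This is NOT how AIRA or
# Reviewer Connect actually work; it only illustrates the general idea.
import math
import re
from collections import Counter


def vectorise(text: str) -> Counter:
    """Lower-cased word counts as a crude document vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0


manuscript = "deep learning methods for genome sequence analysis"
reviewers = {  # hypothetical reviewer publication summaries
    "Reviewer A": "genome sequencing and comparative genomics",
    "Reviewer B": "plasma physics and fusion reactor design",
}

mvec = vectorise(manuscript)
ranked = sorted(
    reviewers.items(), key=lambda kv: cosine(mvec, vectorise(kv[1])), reverse=True
)
for name, _ in ranked:
    print(name)  # Reviewer A ranks first on subject relevance
```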

Looking forward, Frontiers now intends to use AIRA to spot research trends. As Mugridge highlighted: ‘Using impact and altmetric data, as well as citations, AIRA can identify what makes a topic or an author trending in a research field... we’re working on this and it’s going to offer interesting insight.

‘We were born digital and open access, and as time has gone on we have seen that machine learning is going to play a huge role in publishing,’ she added. ‘We are using AI to improve our publications so they can stand the test of time.’

Yet, not all publishers are adopting AI right now, and UK-based Cambridge University Press, for one, has a different take on the topic. As Fiona Hutton, head of STM open access publishing and executive publisher, put it: ‘It’s not something that we have done... there is this strong idea in the press that editors and researchers in the field are the best people to approach for peer reviewers.’

‘But I think it is an avenue to explore going forward – anything that can help to facilitate [the peer review process] in a more intuitive and intelligent way will help the whole scholarly communication process,’ she added.

And for Hutton, this is what it’s all about. As a former cancer researcher, she is only too aware that research isn’t, as she put it, ‘linear’. So with this in mind, she recently launched a new open access journal, Experimental Results.

The journal publishes stand-alone experimental results viewed as inconclusive or negative, as well as attempts to reproduce previously published experiments, including those that dispute past findings. With this, Hutton intends to address the thorny issue of research reproducibility, and also to cut peer review times and ease reviewer fatigue.

‘Some research has an obvious narrative, but research can also be messy and confusing, and some scientists actually have to create that narrative to justify their research,’ she said. ‘I wanted to produce something that is much more mirrored to what the research scientist does, and wanted to show the complexities and challenges that actually happen in research.’

‘Also, during peer review, the article is looked at, yet no-one repeats the experiment,’ she added. ‘This journal is providing a quick way for researchers in different laboratories to replicate experiments and publish whether or not they get the same results.’

But not only is the journal very different – so is its peer review process. Given that the journal’s output is small snippets of research, Hutton and colleagues developed so-called scorecards, so peer reviewers could focus on, say, whether an experiment has been carried out correctly, or whether a piece of research answers a valid research question.

With this in mind, the scorecards comprise basic elements to help the reviewer to decide if the research is acceptable for publication. These are then followed by weighted options to provide a score. And in line with transparent review, each peer reviewer is identified by name with each review published alongside its article with a DOI.
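
A minimal sketch of how such a weighted scorecard might turn a reviewer’s answers into a single score is shown below. The questions, weights, answer scale and acceptance threshold are all assumptions made for illustration, not the journal’s actual scorecard.

```python
# Illustrative weighted scorecard. Questions, weights and the threshold
# are invented; Experimental Results' real scorecard will differ.

# Each question maps to a weight; answers are on a 0-2 scale
# (0 = no, 1 = partly, 2 = yes).
WEIGHTS = {
    "experiment_conducted_correctly": 3,
    "valid_research_question": 2,
    "methods_reported_fully": 1,
}


def scorecard_total(answers: dict[str, int]) -> float:
    """Weighted sum of answers, normalised to the 0-1 range."""
    total = sum(WEIGHTS[q] * answers[q] for q in WEIGHTS)
    maximum = sum(w * 2 for w in WEIGHTS.values())
    return total / maximum


answers = {
    "experiment_conducted_correctly": 2,
    "valid_research_question": 2,
    "methods_reported_fully": 1,
}
score = scorecard_total(answers)
print(f"score = {score:.2f}, acceptable = {score >= 0.6}")  # threshold assumed
```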

‘When a scorecard is published, the reviewer will get credit, and with the DOI, he or she can collect that information as part of their academic record, and so collect the value of their time,’ said Hutton.

The journal launched in September last year, and articles are currently moving through the publication process. Hutton says she and colleagues have ambitions to make the time from author submission to publication ‘very quick’, and feedback to date has been positive.

‘We’ve had really wide engagement, from life sciences to engineering to physics and astronomy, and our community is commenting on the real need to have a publication that can publish research that hasn’t yet seen the light of day,’ she said. ‘This approach is ringing true for a lot of subject areas, and in the beginning, we didn’t know if this would actually be the case.’

Hutton is confident that Experimental Results will stop scientists needlessly repeating experiments and wasting money, and save them precious time and effort. And in line with sentiments from Barros of Publons and Frontiers’ Mugridge, she also believes the publication will save peer reviewers precious time and deliver much-needed recognition.

‘Peer review is a huge part of researchers’ academic time, and they are contributing to ensuring that their part of a research field is adequately peer reviewed. They really need to be credited for doing this as part of their normal career progression.

‘We’re finding that reviewers like [our system] as they are receiving credit for what they are putting in... and making sure researchers receive this is becoming very, very important,’ she concludes.

Case study – Openness, transparency and experimentation: learning from new approaches to peer review

Marc Gillett, head of publishing operations at IOP Publishing, explains how the publisher explored new peer-review options, and what the reaction was from the scientific community

Fair and rigorous peer review is vital for IOP Publishing, as it allows us to maintain the high quality of the research we publish. We are fortunate to work with more than 72,000 active reviewers who have completed more than 260,000 reports since 2013.

As well as ensuring quality, our peer-review operation aims to free up researchers’ time to focus on science. To help achieve this, more than 60 of our staff worldwide work on peer-review administration. They pre-screen submissions to filter irrelevant or unethical articles before peer review. This enables reviewers to focus on work that is valuable to the scientific community. In 2018, we estimate this saved reviewers more than 87,000 hours.

We are always listening to feedback from the scientific community, and they told us they were interested in new approaches to peer review. This prompted us to trial three different approaches.

The first was a trial of double-blind peer review, in which neither the reviewer nor the author knows the other’s identity. 

To gauge the level of interest and uptake from authors working in materials science and biomedical physics/engineering, in 2017 we introduced the option of double-blind peer review on two express journals – Materials Research Express and Biomedical Physics & Engineering Express. A year later, we expanded it to Plasma Research Express, and in 2019 New Journal of Physics and Engineering Research Express added a double-blind option for authors.

Our second project was providing peer-review information alongside every online publication. This tells readers which type of peer review the article underwent; the number of revisions before publication; and whether it was checked for originality. Publishing the peer-review information alongside a paper adds transparency to the process and increases readers’ confidence in published research. 

We were among the first publishers to introduce this, responding to a demand for increased transparency within the community. When we surveyed more than 11,500 researchers, a clear majority told us they wanted to know if a paper they were reading had been peer-reviewed.

The third approach we trialled is transparent peer review (TPR), which we introduced with the help of Publons on the journals JPhys Materials, Journal of Neural Engineering and Environmental Research Letters in 2019.  

TPR is very different from double-blind peer review. It shows the complete peer-review process from initial review to final decision, and has gained popularity with authors, reviewers and editors recently. 

It builds upon the principles of providing peer-review information with a published paper but adds some significant other elements. It shows readers a full peer-review history, including reviewer reports, editors’ decision letters and authors’ responses. Each element has its own DOI to help readers reference and cite the peer review content. Reviewers can choose to sign their reports or remain anonymous. 

Each of these new approaches taught us something.

On double-blind peer review, around a quarter of authors submitting to the pilot journals chose the double-blind option. When we surveyed those who chose double-blind, 95 per cent of respondents rated their experience of the double-blind process as eight out of 10 or higher. Meanwhile, 85 per cent said they chose double-blind because they considered it the fairest system of peer review.

It’s early days for our TPR pilot, but so far more than 55 per cent of authors have chosen TPR when submitting, and more than 40 per cent of reviewers have opted in. (In our pilot, authors and reviewers must both opt in for the peer review history to be displayed with the published article.)

We think that transparency may increase the quality of the peer-review process and could aid teaching of best practice in peer review.

However, our work on peer review goes beyond trialling different models. We are also investing in how we manage it, with two new staff roles dedicated to supporting quality peer review.

Our research integrity and inclusion manager works with our author and reviewer communities on ethics, diversity and inclusion. They lead our efforts to increase the diversity of our reviewers, and head our in-house Diversity and Inclusion Publishing Group, as well as our separate Ethics Panel.

Meanwhile, our reviewer engagement manager focuses on our reviewer training programme, working closely with early career researchers to guide and support them throughout their reviewing career.

We are encouraged by the response to our projects so far. We will keep innovating and experimenting with our approaches to peer review, continually learning and improving what we offer to the scientific community.