Call for diversity

Be it traditional or transparent, peer review players from the West want more reviews from the East, reports Rebecca Pool

Ask Andrew Preston what impact his company, Publons, has had on the complex, changing world of peer review, and he quickly answers, ‘I don’t know if I am the right person to answer that question’.

Ask again, and his hopes for recognition and transparency become clear. ‘When we [he and Daniel Johnston] started out in 2012, we’d talk to publishers, and even researchers, about recognition and transparent review and we’d see very little interest.

‘But six years on, I would like to think our work has increased awareness and has also got the industry moving in a way that is really important,’ he adds.

His words follow the latest Publons Global State of Peer Review report, the largest study of this emotive system for research evaluation. Following its acquisition by Clarivate in 2017, Publons was able to combine the results of its survey of more than 11,000 researchers with data from the Publons, Web of Science and ScholarOne Manuscripts databases.

The results underlined the unequivocal importance of peer review to researchers, the overriding need for review recognition – 84 per cent of respondents stated that institutions should more explicitly require and recognise review contributions – and a growing interest in review transparency.

Preston says he isn’t surprised by the widespread demand for recognition, given the rising profile of peer review among industry players. ‘You can recognise peer review now,’ he adds. ‘Before Publons and other [organisations and initiatives] came along it was difficult to prove what you had reviewed, but in the past five to six years, this has changed.’

And he also believes a shift to greater transparency in peer review is truly underway. ‘We found that younger, early-career researchers preferred the concept of transparent peer review far more [than mid- and late-career reviewers],’ he says. ‘Over the coming years we will see more and more transparent review, as those young researchers grow into their careers.’

However, diversity – or a lack of it – also featured prominently in the report, with results indicating that researchers from emerging regions, including China, Brazil, India and South Korea, are under-represented in the peer review process.

For example, in 2017, China produced 0.8 reviews per submission compared with an average of 2.3 reviews per submission for all established regions, including the US, UK and Japan. Yet at the same time, results indicated that journal editors are generally located in established regions, such as the US and UK, and as such, tend to invite reviewers from their own geographical regions. Preston is keen for this bias to be addressed, bluntly stating: ‘Geographical peer review disparity is harming the development of non-Western researchers.

‘Fewer review invitations mean fewer invitations to see the latest research trends, learn what journals are looking for in a great manuscript, make professional connections with journal editors and develop critical analytical skills,’ he adds.

Preston is hardly alone in his desire to increase diversity in peer review, be it related to geography, gender or age. The theme of the 2018 Peer Review Week was diversity and inclusion, and myriad scholarly publishers have been looking at the issue. For example, in June 2018, four BioMed Central journals launched a pilot to endorse peer review mentoring, with the aim of increasing diversity. Meanwhile, Elsevier’s Research Academy hosted a webinar with EASE and Sense about Science that explored how to foster diversity in peer review.

Kristen Marchetti, director of global peer review at SAGE Publishing, is equally keen to advocate diversity, and sums up widespread sentiment: ‘We welcome more diversity in the peer review community, as well as throughout the entire publishing and scholarly communication sector,’ she says. ‘Engaging with individuals from different backgrounds, connecting with early career researchers and achieving good representation on gender and ethnic diversity will help with this.

‘This allows for different, unique perspectives to be heard, enriches the scientific body of knowledge and increases the relevance and engagement,’ she adds.

Still, as Marchetti admits, this is ‘easier said than done’. She highlights how SAGE aims to engage more early career researchers with its Reviewer Gateway and Reviewer Guide, which provide resources and instructions on peer review. ‘We are always looking at more effective and efficient ways to source appropriate peer reviewers for specific journals and reach out to early career researchers from different disciplines and parts of the world,’ she says. ‘More often than not, our editors will seek out individuals that can contribute to the research and body of knowledge out there in more diverse ways... but it is challenging.’

Publons’ latest report highlighted an associated worry: reviewer fatigue. Results revealed that reviewer completion rates are decreasing each year, while the total number of review invitations sent is increasing at 9.8 per cent year-on-year.

As Marchetti comments: ‘Reviewer fatigue varies by journal, editor and discipline. But in some niche areas... it’s not unusual for us to see reviewers who are used quite frequently.’

Professor Ben List, director of homogeneous catalysis at the Max-Planck-Institut für Kohlenforschung and editor-in-chief of the organic synthesis journal Synlett, has devised a new collaborative version of peer review – crowd review – that could well reduce reviewer fatigue. Put simply, a ‘crowd’ of 30 to 100 researchers is recruited following recommendations from editors. The crowd receives an email invitation and a link to the manuscript, and reviews can be discussed among all researchers involved.

The researchers typically have around 72 hours to respond, while results of the discussion can either replace or complement traditional peer review. Importantly, the process can take as little as a week, while traditional peer review can take months.

List first trialled the system on Synlett manuscripts in early 2017, with the papers quickly receiving comments that editors considered informative. Fast forward to today, and 147 Synlett papers have been crowd reviewed, authors are responding well and an enthusiastic crowd of reviewers has been maintained.

‘I am more interested in quality than transparency, [so] our intention has been to improve the quality of peer review and ultimately published papers, and indeed, the speed and accuracy of crowd review are better than in traditional peer review,’ says List. ‘The response to crowd review has been faster, some details get more attention and the comments have been more to the point,’ he adds. ‘Reviewers critically discuss details, provide substantive comments, and some give an overall recommendation with the editorial office making the ultimate decision.’

Given the success of crowd review on Synlett, its publisher Thieme is extending the process to other chemistry journals, as well as medical research journals. What’s more, several other chemistry and interdisciplinary journals are also evaluating crowd review and related approaches, while a major German funding organisation is implementing crowd review to evaluate its research proposals.

But while crowd review has run smoothly on Synlett, with its strong research community, does the process actually have mass appeal and can it be successfully extended to broader journals with a more varied research community?

According to List, yes. ‘Probably it would be more appropriate to create sub-crowds here, to more accurately evaluate manuscripts from different disciplines,’ he says. ‘But I certainly think so.’

Publons’ Preston is eagerly watching the development of crowd review, and says: ‘If it works at scale, it will be awesome.’ But in the meantime, he highlights how new tools can also help editors search for and find peer reviewers beyond their typical reviewer networks more quickly, thereby reducing peer reviewer fatigue and increasing all-important diversity.

One key example is Publons Reviewer Connect, designed to help scholarly journal editors find, screen and connect with the subject-matter experts needed to peer review manuscript submissions.

‘This was only really made possible by combining Web of Science data with Publons Reviewer data, and allows you to properly screen a reviewer and contact them directly,’ he says.

‘It is really exciting and with it, we have [access to] researchers from around the world... And very soon users will be able to select researchers from specific regions.’

Preston also reckons that Publons Academy, which provides online tools to train researchers in the core competencies of peer review, could tap into more researchers from emerging economies at earlier stages of their careers.

And, crucially, he also hopes that Publons Academy will, at some point, be integrated with Reviewer Connect, so that emerging-economy researchers can more easily show their expertise to editors in established geographical regions.

‘[The Clarivate acquisition] has allowed us to expand our ambitions and we now have a reach that we just didn’t have before,’ says Preston. ‘For example, we can now go to China and get more researchers involved with us... so this and continued development really helps.’

Quality counts

But as organisations continue to push for more diversity and to reduce reviewer fatigue, measuring the quality of peer review remains a priority. Right now, the length of a review is typically used as a proxy for quality, but few would dispute that the scholarly community needs a more robust measure. Preston agrees.

In recent years, Publons has focused on measuring the quality of peer review, with, for example, editors being asked to rate reviews on the basis of helpfulness and timeliness. ‘We continue to encourage editors and, potentially, authors to evaluate the quality of reviews but this is easier said than done,’ he says. ‘It has been difficult to get engagement but we will continue to experiment with this.’

Indeed, as Marchetti highlights, SAGE editors have the facility to rate reviews on ScholarOne. But while these ratings can then be used by other editors to source reviewers, she believes that the subjectivity of this process has hindered take-up. ‘I do think that transparent review can help to improve review quality here though,’ she points out. ‘And we also use plagiarism detection software, which also helps with the quality of peer review.’

Preston also highlights how his organisation is now looking to augment the measurement of quality with tools such as sentiment analysis, which uses natural language processing to extract positive, neutral or negative comments from reviews.

‘We already have the partnerships in place and will be doing research in Publons’ reviews to see if we can pull out interesting results, in terms of sentiment analysis, and directly measure review quality,’ he says.
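To illustrate the idea – and this is a hypothetical sketch, not Publons’ actual pipeline – the short Python example below uses NLTK’s off-the-shelf VADER model to label individual sentences of a review as positive, neutral or negative. The sample review text and the compound-score thresholds are assumptions for demonstration only.

    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    # One-off downloads: the VADER lexicon and the sentence tokeniser
    # ('punkt_tab' is the name used by newer NLTK releases).
    nltk.download('vader_lexicon', quiet=True)
    nltk.download('punkt', quiet=True)
    nltk.download('punkt_tab', quiet=True)

    # A hypothetical peer-review excerpt, invented for illustration.
    review = ('The methodology is sound and clearly described. '
              'However, the statistical analysis in section 3 is seriously flawed. '
              'Figure 2 reports the control data.')

    analyser = SentimentIntensityAnalyzer()

    for sentence in nltk.sent_tokenize(review):
        # polarity_scores returns 'neg', 'neu', 'pos' and a 'compound' score in [-1, 1].
        compound = analyser.polarity_scores(sentence)['compound']
        # Conventional VADER cut-offs: >= 0.05 positive, <= -0.05 negative, else neutral.
        if compound >= 0.05:
            label = 'positive'
        elif compound <= -0.05:
            label = 'negative'
        else:
            label = 'neutral'
        print(f'{label:8s} ({compound:+.2f})  {sentence}')

In principle, per-sentence labels like these could be aggregated across all of a journal’s reviews to flag unusually terse or negative reports for editorial attention.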

‘We’ve already done a ton of development on review transparency and in the next 12 months will be really focusing on making this all work at scale, in a way that benefits everyone.’

A novel trial from eLife

Last year, eLife trialled a peer-review process designed to give authors more control over the decision to publish. As part of this, new submissions were initially evaluated by editors, with selected articles then invited for in-depth peer review. Then, after peer review and consultation among referees, the editor compiled a decision letter for the author.

The author then decided how to respond to the letter and submitted a revised manuscript, as well as a response to the decision letter and reports. The editor evaluated this response, with the revised article being published alongside the decision letter, full review reports, author response and editor assessment.

eLife’s executive director, Mark Patterson, highlights that he and colleagues initially wanted to explore the impact of giving authors greater control over how they respond to comments from peer reviewers.

‘We thought the process could have several long-term benefits for research communication, by making it more efficient and constructive,’ he says.

The first ‘encouraging’ results have emerged, and according to Patterson’s colleague, Andy Collings, executive editor of eLife, the trial has proven popular. ‘Almost a third of authors opted in during the trial period... with the most common reasons for opting in relating to the efficiency of the process, and support for innovation and transparency,’ he says.

The eLife team is still assessing whether this novel form of peer review is more efficient than traditional processes.

However, as Collings says: ‘We do hope the process provides an incentive for reviewers to participate and provide constructive criticism, since it’s highly likely both the paper and peer reviews will be published.’

Interestingly, an initial observation is that submissions from late-career authors were more likely to be sent for in-depth peer review than those from early- and mid-career counterparts.

‘These results are somewhat tentative... but if [they] hold we will need to consider ways to ensure that less-established researchers are not disadvantaged,’ says Patterson. ‘The trial process [also] appears to be less popular in some countries, including China, which is another finding that requires further examination.’

Patterson, Collings and colleagues now intend to address further questions, including how long peer review takes under the new process, and what proportion of papers end up being published.