Pandemic brings preprints into the spotlight

From rapid disease information to a way to promote and share regional knowledge in multiple languages, preprints have come into their own in recent years. Siân Harris finds out more

In July 2021, the open-access biomedical and life sciences publisher eLife began a new approach to article submissions: it now only reviews manuscripts that have already been posted as preprints. Alongside this, the focus of the publisher’s editorial process has shifted to producing public reviews that are posted alongside the preprints.

Such a move is an indication of how preprints – the versions of papers before they have been externally peer reviewed or edited – have become more central to the whole scholarly communication process over the past few years.

As Damian Pattinson, executive director of eLife, explained: ‘[Preprints] play a very central role in our entire review process and we’re keen to make them more central. To us, they are an amazing opportunity to change the system and to bring the way in which research is communicated into a fully digital age. The role that we see eLife playing in the long term is one where we essentially are just a reviewing organisation for already published research.’

Researchers in many mathematics, computer science and physics fields have enjoyed a close relationship with preprints for several decades, and processes like the one eLife has recently adopted are not new to them (see box for some reflections from Australia-based astrophysicist, Richard de Grijs). However, it has been recent public health crises that have really propelled the practice of preprint sharing into the mainstream across other disciplines.

Making scientific research findings available quickly enables other scientists to build on these findings. It also means that policymakers can use the findings to develop policies.

Michele Avissar-Whiting, editor-in-chief of preprint server Research Square, noted it was the recent Zika and Ebola outbreaks that first prompted funders of medical research to push for preprint sharing in emergency medical situations. This has been particularly helpful since the advent of COVID-19, with the need to share findings quickly about symptoms, transmission, characterisation of the virus and the emergence of new variants.

‘If there’s one silver lining to this last couple of years it’s been that it forced people’s hand on [preprints]. It just would have been insane if we had adhered to the timelines and the restrictions around releasing findings that are imposed by the traditional system. It would have been crazy to wait three months, six months, a year to publish a sequence. Preprints kind-of saved the day but they also showed that it doesn’t need to be an emergency-only protocol.’

She added that the rapid progress with COVID-19 responses, aided by preprint availability, could be replicated to speed up the process of understanding and finding treatments for other illnesses, such as cancer.

Beyond health

The potential of sharing preprints goes beyond the rapid exchange of health information, according to Joy Owango, executive director of Kenya-based Training Centre in Communication (TCC Africa). TCC Africa is working with AfricArXiv to ensure the continental preprint archive, which was launched in 2018 and now includes submissions from 33 countries in Africa (see map), is sustainable.

‘When it comes to preprints in Africa, we are looking at research visibility, data sovereignty, and promoting indigenous knowledge. All submitted research items (manuscripts, datasets, slide decks, research reports and proposals) get persistent identifiers to establish priority of discovery to the African authors. AfricArXiv is open to submissions of research items in any language relevant to the continent, including Setswana, Amharic and Igbo, as well as all the African Union languages – French, Swahili, Arabic, Portuguese [and] English. We finally have a platform that is sharing our research output with the world, beating the notion that if research is not in English, it is not accessible.’ 

AfricArXiv has been well received by researchers, she continued: ‘They are interested in the services the platform offers; it’s free for individuals and can be used for research dissemination in a way that is complementing journal publishing with immediate effect for research visibility and reputation building.’

She gave the example of Kenya, where the academic regulatory body, the Commission for University Education, made it mandatory for every master’s and PhD student to publish before they graduate. ‘The staggering fees for article processing charges (APCs) at many journals led to an unfortunate rise in predatory journal publishing,’ she said. ‘It just skyrocketed, and then the stress of going through submission to acceptance – it can be six months, one year – and if you end up publishing in a predatory journal, that leads to a loss in reputation for the researcher and often the affiliated institution alike.’

Using preprint archiving as part of the manuscript submission process to a journal is a game changer: preprints are assigned Digital Object Identifiers (DOIs) and an open license, making the manuscripts and other research items citable, and downloads and citations can be measured for bibliometric analyses. She added: ‘This gives a level of confidence and ease to the researchers as they identify potential certified and affordable journals to publish in. The use of preprint servers in the publishing process, especially if linked with institutional repositories, can strengthen the relationship between librarians and research offices within universities.’

Avissar-Whiting agreed about the impact of preprints on researchers: ‘They’ve introduced researchers to a new way of doing things. One of the main reasons they post the preprint to begin with is that they have been toiling away on this research and have really only consulted with the people in their nearby lab group, so they are curious what the world thinks of what they’ve done and want an honest assessment of it.’

Research Square’s users – both authors and readers – are distributed around the world (see map). But Avissar-Whiting observed that preprints have not yet been embraced everywhere, noting that sharing of preprints is not currently common practice in some countries. 

Pattinson hopes preprints will be embraced everywhere: ‘For a genuinely inclusive system you have an opportunity for everyone [globally] to post their work alongside one another, and then the review and curation happens on top, so they have this level playing field. The opportunity for more inclusive systems is a key driver for why we’re doing this, but it is hard.’

At the moment, he explained, ‘the people who post preprints are generally the bigger labs, the more experienced researchers, the people who have more confidence in putting their work out before it’s been formally reviewed’.

And researcher behaviour is quite subject-specific too, he continued, noting that eLife’s authors in more computational fields, such as neuroscience, cell biology and genetics and genomics, are more comfortable with posting preprints than those in other disciplines. ‘eLife is beginning to publish more clinical work and that poses challenges because a lot of clinicians are less aware of preprints and there’s more explaining to do.’

Opening up science

Preprints can also play an important access role. Although preprints will not satisfy many, if any, modern open access mandates, they do enable research findings to be read by anyone, and preprint servers make them searchable and discoverable. They also play a wider role in open science, paving the way for open review and bringing in new ideas to improve the quality of the final, published output.

As Pattinson commented: ‘The goal is to have feedback on everything we review and we should be posting that feedback publicly because it’s incredibly useful information. Authors have an obvious nervousness around posting reviews that are critical of their work but, in practice, the reviews are very constructive, not rude or aggressive, but useful feedback. We feel readers would benefit from knowing if there are any issues that need addressing on a preprint.’

He is also enthusiastic about a new approach to open review emerging off the back of preprints, especially as a result of COVID-19. ‘We are very interested in models where groups of academics find interesting Covid papers and review them and then just go ahead and post their reviews on their own websites. It really was born out of necessity – exactly how the best things are.’

Challenges

Despite the enthusiasm from many, there have also been plenty of words of criticism about preprints. In particular, the ready and rapid availability of preprints related to COVID-19 has led to some confusion and misunderstanding, with some sceptics of vaccination, or of the existence of the virus itself, cherry-picking scientific findings to lend authenticity to their theories.

As Alejandra Arreola Triana, who teaches science communication at Universidad Autónoma de Nuevo León, in Monterrey, Mexico, summed up: ‘As an author’s editor, I like the idea of setting precedent and perhaps getting comments or citations. But, as someone who trained in science journalism, I saw they caused some serious misunderstandings during the pandemic.’

Avissar-Whiting acknowledged these concerns: ‘Every platform has struggled with the misinformation and disinformation problem and dealt with it differently. We made the decision to take a bit of a heavy hand.’ She explained Research Square’s approach of screening manuscripts, deciding whether to post the preprints and, if so, whether to work with the authors on producing lay explainer texts.

‘We felt we had to do something about it and we did have the resources in-house to [do so]. I always point to the T cell paper that came out in June 2020. It was from a lab in Germany and was one of several studies talking about T cell immunity and how we have some immunity from our exposure to coronaviruses that cause the common cold. It was a very complex immunology paper, but in it the authors say something like 80 per cent of us have some remnants of this immunity from coronaviruses. 

‘That paper, if you cherry-picked it apart, played into that [Covid hoax] narrative and so that preprint went crazy viral, but it was all within this circle of people saying it’s a hoax – an insane take given the whole world was being crushed by sickness. What we did was write a lay summary and put it on the preprint. People started screen-capturing the lay summary and putting it on Twitter whenever people would take that study as evidence of Covid not existing. I don’t know how much of an impact that had on quelling the confusion around [the research], but I do know people noticed it and used it exactly the way we intended them to use it: as a plain language explanation of what this preprint was talking about.

‘The interesting thing is, the manuscript went on to be published in Nature Immunology and it still got lots of attention from the same [Covid hoax] crowd, so having been peer reviewed doesn’t change the fact that people can take information and twist it or misunderstand complicated research.’

More broadly, this is part of making clear to readers what preprints are and are not. ‘We have this disclaimer on our preprints that I think most of the preprint servers have, which says this is a preprint, it’s not been peer-reviewed by a journal (you want to be specific as there are other mechanisms for peer review).’

However, she added: ‘Scientists who are finding preprints on servers that are specifically related to their field subject matter are the experts; these are the people who are in the best position to scrutinise a paper. We want them to look at it and, ideally, we want them to comment on it and review it formally.’

Pattinson also observed: ‘Preprint archives are extremely careful about what they post and have professional editors looking at papers to make sure there aren’t going to be significant issues. I think perhaps that bit has got a bit lost. Overall, the risk of [dubious studies being shared as preprints] probably isn’t much higher than things getting published in journals and slipping through those channels.’ 

He does see another challenge, however: how preprint servers can indicate more clearly if something has been peer-reviewed. ‘At the moment, all preprints have the same disclaimer essentially – even papers where the reviews are posted clearly alongside. Ultimately, that is something we would like to see improve in the future.’

Looking to the future

It is clear preprints are now solidly part of the scholarly communication process, but some challenges remain. One of these is wider acceptance, which depends on support from research funders and decision-makers.

As Owango explained, particularly regarding preprints in Africa: ‘With time, we are looking at a situation whereby preprints are considered as a research output for promotions, and for somebody to graduate. For that to happen, you’re talking at a policy level. It’s convincing university education commissions: this is a new technology in academic publishing, this person has already identified or is in the process of identifying their journal and this output is already collecting citations and downloads.’

Regarding funders, Avissar-Whiting noted that ‘some have really taken quite a strong position of requiring preprint deposition and most of the ones that aren’t requiring it now have some line in their policies about encouraging the use of preprints and preprint review mechanisms’. Pattinson agreed about funder support, noting that ‘funders love them… I think that will drive this output.’

Technology developments will also play a role in wider acceptance. Already, as Owango highlighted, the use of DOIs on preprint servers has provided permanence and made it easier to cite preprints. But there is more that can be done with DOIs within preprint records.

Avissar-Whiting recently conducted research into preprints of papers that went on to be published and then retracted. The number of such examples is still very small, but she said it highlights the importance of knowing what happens to a paper downstream: ‘We’d want to be aware of these things so that we can link them backwards. Crossref is really going to help with this because if one party creates the association, theoretically, information can flow in both directions.’
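
As a rough illustration of what that linking can look like in practice, Crossref’s public REST API exposes any relations a depositor has asserted between a preprint DOI and its journal version. The short Python sketch below is one way to look these up; it is not Research Square’s or Crossref’s own tooling, the DOI shown is a hypothetical placeholder, and it assumes the relevant relation metadata has actually been deposited for the record in question.

import json
import urllib.request

# Hypothetical placeholder DOI, not a real record.
PREPRINT_DOI = "10.21203/rs.3.rs-000000/v1"

def published_versions(doi):
    """Return the DOIs that this preprint is declared to be a preprint of."""
    url = "https://api.crossref.org/works/" + doi
    with urllib.request.urlopen(url) as response:
        record = json.load(response)["message"]
    # The "relation" field only appears when the depositor has asserted links
    # between this record and others (e.g. "is-preprint-of").
    relations = record.get("relation", {})
    return [item.get("id") for item in relations.get("is-preprint-of", [])]

if __name__ == "__main__":
    print(published_versions(PREPRINT_DOI))

Run against a real preprint DOI, the function returns the DOIs of any journal versions the depositor has linked; an empty list simply means no relation has been asserted yet.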

There is potential too in working on how preprint data is structured, tagged and reused. One avenue being explored by some is the use of artificial intelligence for data mining preprints. Translation tools also open up new opportunities, as does the possibility of incorporating blockchain technologies. 

Pattinson noted that eLife wants to explore ways to blur the boundaries between preprints and journal articles – for example, by improving the display rendering to make figures more useful, and by adding the sorts of metadata you would associate with a journal article.

‘It is an opportunity to do things better and revise the system,’ he summed up. ‘All of the waste that happens in the current system, where everything is done behind closed doors and reviews are thrown in the bin every time a paper is rejected, we need that to go. We see preprints genuinely as [one of] the most exciting innovations in biomedical science for 20 years.’  

 

An astrophysicist’s perspective

By Prof. Richard de Grijs, School of Mathematical and Physical Sciences, Macquarie University, Australia

In our field, we have become used to the arXiv preprint server, which has included an astrophysics section (astro-ph) since the 1990s. It has become a standard part of our publishing routine to submit our papers to arXiv upon acceptance – to the extent that, when a paper led by a collaborator doesn’t appear on arXiv for a few days after acceptance, we start to wonder why there is a delay.

Some colleagues submit their papers to arXiv upon submission to a journal, or if the review process takes too long in their view (in which case they may pointedly include the submission date in the comments box!). I prefer to wait until acceptance; I have been burnt a (very) small number of times earlier in my career when it turned out that the referees’ reports required major revision.

When collaborators ask whether they have my approval to submit to arXiv prior to acceptance, I am usually very reluctant to agree. I might agree for papers in highly competitive areas, where there is a need to establish priority, or for data papers that we know other groups may want to use before the whole review process is over.

There are a few issues with using arXiv preprints – fewer if they refer to accepted papers – but if you want to refer to a preprint uploaded before acceptance, it is important to make sure the results are still valid after acceptance. Most of the journals in our field actively add their publication information to the arXiv records where possible, although this may take some time.

Another issue I have seen crop up from time to time is that arXiv’s management has changed over the years to become more selective. There are quite a few anecdotes of people whose work was rejected by arXiv without clearly stated reasons, or whose papers were reclassified by arXiv staff without clear rationales or consultation with the authors. The system has become so large – and important to the community (if your paper isn’t on arXiv, some people won’t read it – ever) – that there is a need for more transparent processes.

Finally, I should mention that arXiv has become even more important to colleagues in some cognate fields, like cosmology or particle physics. In those fields, some journals require authors to upload their new papers to arXiv and provide the number to the journal as their ‘submission’ to the journal. Other journals in those fields have become overlay journals, where they collect arXiv papers into issues.