Navigating the future of peer review in the age of generative AI

Laura Feetham-Walker explains how researchers are engaging with AI, the concerns it raises, and the importance of human expertise
A new global study from IOP Publishing reveals a growing divide in how researchers view the role of AI in peer review and whether they’re using it.
While 41% believe generative AI could have a positive impact, 37% see it as negative, and 22% remain neutral. These mixed views highlight just how complex and fast-evolving the conversation around AI in scholarly publishing has become.
IOP Publishing’s policy currently prohibits the use of AI in peer review, as generative models cannot meet the ethical, legal, and scholarly standards required. However, there is growing recognition of AI’s potential to support aspects of the process.
What are the main challenges facing peer review in the era of generative AI?
The last few years have seen a rapid increase in the use of consumer generative AI in almost all industries. Large language models (LLMs) are now easily accessible to anyone with an internet connection and are being used in a variety of ways. Their potential role in peer review, however, raises a number of ethical and practical concerns.
Although we currently prohibit the use of AI in peer review, we do see potential for AI to play a supportive role, and we’re actively exploring how it can be used responsibly to assist, rather than replace, human judgment.
One of the most pressing concerns is ethics and integrity. Before AI-generated reviews are introduced into the scholarly process, we need to assess the extent to which they could unintentionally compromise its objectivity and fairness.
AI models are trained on vast datasets, which often contain inherent biases – whether cultural, disciplinary, or demographic. If these biases are reflected in AI-generated feedback, they could skew the evaluation of manuscripts, favouring certain perspectives or methodologies over others. This risks undermining the impartiality that peer review is meant to uphold.
Current AI tools also lack the contextual understanding and domain-specific expertise that human reviewers bring. While they can mimic structure and tone, they cannot reliably assess the novelty, rigour, or relevance of scientific work. This raises concerns about the depth and quality of AI-generated reviews, especially in complex or emerging fields.
At the same time, we’re seeing a surge in submissions – some of which are AI-assisted – adding to the burden on an already stretched reviewer community and contributing to reviewer fatigue. AI could potentially take on some of the more administrative tasks in the review process, such as improving grammar and clarity in reviewer reports, checking submissions for completeness or adherence to formatting guidelines, or flagging potential ethical issues such as undisclosed conflicts of interest. We already use AI tools to detect plagiarism and other research integrity issues.
Another growing concern is the use of third-party AI tools to process confidential manuscripts. This practice risks violating the privacy and confidentiality agreements that are fundamental to scholarly publishing.
What is IOP Publishing’s policy on using AI in peer review?
At IOP Publishing, we follow the Committee on Publication Ethics (COPE) guidelines, which clearly state that AI tools cannot meet authorship criteria. That’s a key principle for us as we believe that AI cannot replace the human element of scholarly assessment.
We currently do not permit the use of generative AI to write peer review reports—either fully or in part. If we suspect that a review has been generated or augmented using AI, we contact the reviewer to inform them of our policy. Depending on the situation, the report may be rescinded or redacted with the reviewer’s knowledge.
You recently surveyed reviewers about their views on AI in peer review. What did the findings reveal?
The results revealed a deep divide in sentiment. Around 41% of respondents believed generative AI would have a positive impact on peer review, while 37% viewed its impact as negative and 22% thought it would have a neutral or no impact. This represents a noticeable shift from a similar survey we ran in 2024 and reveals a deeper polarisation within the reviewer community around the use of AI in peer review.
We also found that 32% of respondents admitted to using generative AI in peer review in some capacity. Most of these (21% of all respondents) used it only for grammar or flow improvements in their comments, but 13% said they use AI tools to digest or summarise an article under review, and 2% admitted to using AI to write a full review for them. This raises serious concerns, particularly around confidentiality and copyright. And although only a very small number of reviewers admitted to using AI to create reviews for them, the majority (57%) would be unhappy if AI were used to write a review on their own manuscript, and 42% would be unhappy if it were used to augment a report. This contradiction between how people use AI and how they feel about it being used on their own work points to a broader need for transparency, clear guidelines, and continued education.
Do you think the use of AI in peer review is inevitable?
In some form, yes. It seems inevitable that generative AI will become a supporting tool within peer review. But it’s important to distinguish between supporting human reviewers and replacing them.
For reviewers, AI could be a time-saving assistant. It can help check for consistency and improve the flow and grammar of reviewer reports. For editors, AI can be useful in triaging manuscripts, flagging potential errors, and supporting plagiarism detection. These uses of AI can significantly increase efficiency and help editors manage the rising volume of submissions.
How do you see the role of generative AI in peer review evolving?
Over time, we expect AI to play an even larger role in detecting plagiarism, flagging fraudulent data, and improving efficiency in the review pipeline. But we’re adamant that any AI involvement must be disclosed and should never replace the intellectual and ethical responsibilities of human reviewers.
We’ll also likely see greater use of AI in initial manuscript screening, helping editors make more informed decisions about desk rejections or reviewer assignment. The key is to ensure AI becomes a trusted assistant, not a hidden participant.
Why is it important for the publishing community to consider AI’s role in peer review right now?
Because generative AI is already here, and it’s being used, whether openly or not. The pace of AI development is staggering, and large language models are now widely accessible to anyone with an internet connection. We must proactively shape how they are used in scholarly publishing to uphold the standards of integrity, trust, and quality.
If we don’t bring in consistent policies, training, and tools across the industry, we risk eroding trust in the peer review process.
We invest in reviewer education, such as our Peer Review Excellence training, and continue to work with the broader publishing community to develop shared standards for ethical AI use.
Can you tell us more about IOP Publishing’s Peer Review Excellence training?
We see peer review as a highly skilled craft based on critical thinking. It’s something that can’t be automated. Our free Peer Review Excellence training programme is designed to equip reviewers, particularly those early in their careers, with the skills needed to deliver high-quality, constructive reviews. The course covers best practices, common pitfalls, and includes a module dedicated to the ethical considerations of using AI in peer review. It’s part of our wider strategy to diversify the reviewer pool and promote quality and integrity across our journals.
How does this initiative address the challenges highlighted in your survey?
The programme tackles many of the key concerns raised. For example, several survey respondents mistakenly believed that AI could provide logical reasoning or technical analysis, when in fact current LLMs work by predicting text sequences and cannot reason in a truly analytical sense. By clarifying what AI can and cannot do, and by teaching best practices in peer review, we help combat misinformation, improve the quality of reviews, and empower our reviewer community. Ultimately, we aim to ensure that human reviewers remain at the heart of scientific discourse and are supported, not overshadowed, by technology.
The conversation about and use of AI in peer review is just beginning. We welcome continued dialogue, collaboration, and shared learning across the scholarly community to navigate this evolving landscape together.
Laura Feetham-Walker is Reviewer Engagement Manager at IOP Publishing