Reimagining peer review: a case for innovation

Mitja-Alexander Linss is Head of Marketing at Karger Publishers

AI can simplify the process enormously and help publishers get ahead of the industry’s upheavals, writes Mitja-Alexander Linss

The landscape of academic publishing is undergoing seismic shifts due to technological advances, funding challenges, and the increased need for speed to publication. At the same time, the rise of AI has raised concerns in the academic community around research ethics, fraud, and plagiarism. The peer review process sits at the epicentre of it all, requiring publishers to ensure that research integrity and scientific rigour are not compromised. 

The Smithsonian magazine recently reported on the ways in which researchers are fighting back against AI-driven review systems by implanting hidden prompts, written in "invisible" white text, that instruct AI reviewers to give positive responses. A survey of about 5,000 researchers found that around 19% had already used large language models (LLMs) to make the peer review process faster and easier. And separately, a study of peer-review reports submitted to AI conferences in 2023 and 2024 revealed that between 7% and 17% showed evidence of being substantially modified by LLMs, indicating changes more significant than simple proofreading or minor text adjustments.

AI-reviewed manuscripts

The integration of AI into the peer review process raises a number of ethical concerns, particularly around transparency, accountability, and the role of human judgment. Many AI systems, especially those based on deep learning, operate as "black boxes," making it difficult to explain how they reached a certain conclusion about a manuscript's quality or relevance. This lack of transparency can undermine trust in the review process, particularly if authors or reviewers are not informed about how AI tools are being used or how their outputs are influencing editorial decisions. 

Accountability raises the stakes further. In traditional peer review, human reviewers can be held responsible for their evaluations, but when AI is involved, whether in reviewer assignment, quality assessment, or direct manuscript review, it is unclear who is accountable for errors, oversights, or biases that may arise. AI tools are trained on existing data, which may contain human biases related to gender, geography, institution, or research field. If these biases are not carefully managed, AI can perpetuate or even amplify existing inequities in publishing, thereby compromising the entire research ecosystem.

The use of AI as a reviewer, meaning AI systems that generate structured reviews or make recommendations on acceptance, remains highly controversial. While some experimental systems can produce reasonably coherent critiques, they lack true understanding, cannot engage in scholarly debate, and may miss novel contributions that fall outside the patterns they were trained on. Some argue that AI reviewers could be used to augment human review by identifying surface-level issues and suggesting questions for human reviewers to consider, but not to replace them. 

In some cases, the difference between human- and AI-driven reviews is negligible. In a study involving over 300 U.S.-based computational biologists and AI researchers, the findings suggested that the performance gap between human reviewers and LLMs such as GPT-4 may not be as wide as assumed. Participants were given peer reviews of their own papers, some of which were written by humans and others by GPT-4. More than 57% of participants rated feedback generated by GPT-4 as helpful or very helpful, and more than 82% considered it more useful than feedback from at least some human reviewers.

To ethically integrate AI in peer review, clear guidelines, human oversight, and transparency are essential. Authors and reviewers should be informed when AI tools are involved and editorial decisions should remain firmly in human hands. Done thoughtfully, AI can indeed enhance the integrity and efficiency of peer review, but only when used as a tool, not as a substitute, for human expertise.

Streamlining peer review and fraud detection

AI-assisted peer review, not to be confused with AI-based peer review that requires no human input whatsoever, brings a range of concrete advantages that can transform the quality and efficiency of scholarly publishing. One of the most significant benefits is the automation of reviewer selection. AI can rapidly scan vast databases of publications, authors, and research profiles to identify reviewers whose expertise closely matches the submitted manuscript, thereby reducing reliance on editors' personal networks and helping to mitigate unconscious bias. Globalcampus.ai is one such solution: it offers peer reviewer recommendations and streamlines the selection of qualified experts to ensure thorough and rigorous manuscript evaluation.

In terms of manuscript evaluation, AI tools can automatically detect plagiarism, image manipulation, statistical inconsistencies, or language issues at a scale and speed that human reviewers cannot match. This frees reviewers to focus on the intellectual merit and scientific validity of the work. AI systems can also offer predictive assessments of a manuscript's impact or novelty based on citation patterns, textual similarity to existing work, or author reputation. Scite.ai, for example, is a tool that helps identify retracted references. 

In addition, AI can support continuous monitoring after publication. Tools that analyse citation networks, author behaviour, and unusual publication trends can flag suspicious activity such as self-citation rings or paper mills, thereby helping maintain the integrity of the academic record. Importantly, when used transparently and ethically, AI adds a layer of consistency and auditability to the peer review process, ensuring that every submission is held to the same standards and that editorial decisions are better informed by data. 

A path to innovation

The Vesalius Innovation Award (VIA) is one way to support the publishing industry’s future and drive innovation in peer review. This initiative is aimed at early-stage startups in health sciences and academic publishing who are developing new approaches to how science is shared and applied. The award provides selected startups with structured mentorship, expert evaluation, and the opportunity to present their ideas at the STM Innovator Fair in London every December.

Several of the award finalists are tackling peer review and research integrity in a variety of ways. Clear Skies has developed "Papermill Alarm," a tool launched in 2022 that scans the scholarly literature at scale to detect patterns of paper-mill-generated fraud, making organised research misconduct visible to publishers and institutions. Prophy has built a massive database of over 170 million papers and 60 million researcher profiles and relies on AI-based reviewer-matching tools to automatically recommend independent, qualified reviewers while prioritising expertise, fairness, and diversity in peer review assignments. Signals seeks to enhance trust by surfacing "signals" embedded in complex networks of articles, authors, institutions, and citations, combining those analytical patterns with expert evaluation to flag problematic or notable research and help stakeholders assess reliability before or after publication.

In a time of budgetary pressures and increasing complexity, it is important to invest in initiatives that promote innovation, not only to address today’s challenges, but also to prepare for the future of scholarly communication through the wise and prudent use of available technology. AI’s involvement in the peer review process is an inevitable part of the scientific community’s evolution. When done well, it can simplify the process enormously and help publishers get ahead of the industry’s upheavals, managing peer review with accuracy in a fraction of the time.

Mitja-Alexander Linss is Head of Marketing at Karger Publishers, where he leads global marketing and supports growth across Karger’s traditional medical publishing business and the company’s new healthcare consulting and engagement practice.
