Frontiers issues AI guidance spanning full publishing lifecycle

Frontiers has released new guidance on the use of artificial intelligence across the research publishing process, covering researchers, editors and peer reviewers, and aiming to move beyond binary "allowed/not allowed" policies.
The framework sets out practical approaches for the use of AI tools throughout the publication lifecycle, with an emphasis on responsible adoption and the need for policy to evolve alongside current practice. According to Frontiers, AI is already widely used across research and peer review workflows, and this level of adoption requires more structured and transparent governance.
The publisher described the guidance as part of its ongoing focus on technology and research integrity, noting its earlier introduction of AI-based checks in publishing processes.
Kamila Markram, Co-founder and CEO of Frontiers, said the initiative reflects the organisation’s long-standing approach to AI: “Frontiers was born digital and has always been an AI-native organisation, committed to developing and delivering state-of-the-art AI tools and technology that aid researchers at every stage of the publishing process and safeguard quality and integrity in peer-review. We back this ethos with safe and responsible use of AI that listens to community needs and feedback. The guidelines launched today are another step in providing a concrete and practical framework that evolves with researcher engagement.”
The guidance draws on findings from a recent Frontiers whitepaper indicating that a majority of peer reviewers are already using AI tools. It argues that policy frameworks should reflect this level of adoption, rather than impose restrictive controls.
Elena Vicario, Director of Research Integrity at Frontiers, said the aim is to provide clearer direction for the sector: “AI use in research and science publishing is already here and provides an unparalleled opportunity to advance scientific discovery and innovation. The publishing industry should not present roadblocks to AI adoption but roadmaps that provide confidence and protect integrity for researchers, editors and reviewers alike when using AI throughout the publishing journey. This is why Frontiers produced this guidance and we are proud to have taken this first step in progressing policy around AI use in research publishing.”
The framework introduces guidance for each role in the publishing process and promotes what it describes as accountable, transparent and risk-aware use of AI. Central to the approach is the “BE WISE” framework, which sets out six principles: transparency, accountability, appropriate tool use, user awareness, integrity safeguards and equity.
In addition, the guidance outlines “permission-to-proceed” checkpoints intended to be applied at key stages of AI use. These include considerations around impact and oversight, governance, permitted inputs and verification. Where these conditions are not met, the guidance recommends limiting AI use to low-impact tasks or avoiding it altogether.
Frontiers has also included practical resources, including prompts, templates, audit logs and workflow examples, designed to support implementation in day-to-day publishing activities.
The publisher said the guidance is intended as a “living framework”, which will be updated in response to community feedback. It has invited researchers, publishers and other stakeholders to contribute to the development of future policy approaches as AI adoption continues to expand across scholarly communication.
