Innovation, technology, and the pace of change
Peer Review Week 2024 organisers report ‘remarkable’ response to community poll, write Maria Machado and Anne T Stone
Peer Review Week (PRW) 2024, with the theme of “Innovation and Technology in Peer Review”, is now only weeks away.
We were confident in selecting the theme based on the community’s response to our poll, but we didn’t anticipate the remarkable pace of change. There was also strong interest in ethics and IDEA (Inclusion, Diversity, Equity, and Accessibility). We look forward to seeing what organisations share about their efforts to innovate come September.
Are you a champion for change or does it feel like innovation is someone else’s job at your workplace? Everyone can cultivate an innovator’s mindset for themselves and in their organisation. Read on for industry insights and developments in 2024 across innovation and change management, researcher attitudes about peer review, and the ethical use of AI in research and scholarly communication.
Leading change and accelerating innovation in publishing
At the London Book Fair in March, Charlotte Talmage, an industry change expert, assembled a panel of three publishing leaders to discuss strategies for managing change. Talmage captured themes of embracing adaptability, empowering employees as owners of change, and strengthening internal alignment through clear communication and collaborative decision-making. The panellists offered nine key takeaways for leaders and individuals.
Antonia Seymour, of IOP Publishing, told the audience: “Building awareness of individual and collective strengths fosters the adaptability required to respond with agility as conditions evolve. Celebrate small successes along the way.”
This was recently echoed by Soren Kaplan of USC’s Marshall School of Business, who, in a Harvard Business Review article, offered three steps to develop an innovator’s mindset: embracing curiosity, fostering continuous learning, and encouraging open collaboration.
How can the experience of the peer review process be improved?
Organisations prioritise areas for innovation and change where they see opportunity. The IOP Publishing State of peer review 2024 report provides input from the physical sciences community on opportunities to improve peer review. The top three areas where innovation and new initiatives would ‘significantly’ or ‘overwhelmingly’ improve respondents’ experience of the peer review process were:
- Improvements to online manuscript and review submission systems;
- More recognition, and communication between authors, reviewers, and editors regarding feedback on the quality of reports and its influence on editorial decisions; and
- Better and more accessible peer review training.
Driven by increasing submission volumes and demands for speedy publication, adaptation of submission and peer review systems has been constant since E-Journal Press, ScholarOne, and Aries Editorial Manager launched at the turn of this century.
AI is already part of many solutions used by publishers and technology providers. For example, iThenticate, which uses text-matching technology to detect potential plagiarism, has been widely integrated into workflows and platforms for over a decade. At the Society for Scholarly Publishing (SSP) annual conference in May, technology providers and publishers introduced new products and enhancements centred on the author’s experience, from writing and submission to review and publication. Several tools use AI to evaluate and improve the quality of submissions while also addressing speed and ease of use. Peer reviewers and authors may not be aware of when and how these tools are used, so ensuring that training and guidelines are updated as the technology evolves is essential to fulfil commitments to transparency.
As desk rejections drop, there will be more pressure on editors to find qualified peer reviewers faster, and new AI solutions are emerging to address this problem. Using AI to screen for indicators of misconduct prior to submission may save time and mitigate risks, but it raises concerns about false positives and broader ethical considerations, as the back-of-envelope sketch below illustrates. Principles of transparency and equity need to be considered, and policies for reporting misconduct updated.
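To see why false positives loom so large, consider a quick base-rate calculation: when genuine misconduct is rare, even an accurate screen flags mostly innocent papers. The prevalence and accuracy figures below are assumptions chosen for illustration, not data from any named tool.

```python
# Illustrative base-rate arithmetic for pre-submission misconduct screening.
# All figures are assumptions for the sketch, not measurements of a real tool.
prevalence = 0.02    # assume 2% of submissions involve genuine misconduct
sensitivity = 0.95   # assume the screen catches 95% of true cases
specificity = 0.95   # assume 5% of clean papers are wrongly flagged

submissions = 10_000
true_cases = submissions * prevalence          # 200 papers
clean_papers = submissions - true_cases        # 9,800 papers

true_flags = true_cases * sensitivity          # 190 genuine hits
false_flags = clean_papers * (1 - specificity) # 490 innocent papers flagged

precision = true_flags / (true_flags + false_flags)
print(f"Flagged: {true_flags + false_flags:.0f}, genuinely problematic: {precision:.0%}")
# With these assumptions, only ~28% of flagged papers involve real misconduct,
# which is why transparent policies and routes of appeal matter.
```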
Last year, NISO released Standard Terminology for Peer Review to foster norms for communicating about peer review. NISO encourages publishers to update guides for authors and reviewers to include information about the technical tools used in the editorial process. And, in June, EASE launched a Peer Review Quality Assessment Tool that can help shape reviewer-editor communication and feedback loops.
Researchers’ attitudes on the ethical use of AI
When ChatGPT was launched by OpenAI in November 2022, it was released as a ‘freely available research preview’. OpenAI began as a non-profit that released its projects and code to the public, but ChatGPT, as a commercial product, is not open source. The black-box nature of generative AI tools, and the lack of transparency about the data used to train them, continues to be problematic.
Elsevier surveyed 2,284 researchers and 1,007 clinicians between December 2023 and February 2024, to inform Insights 2024: Attitudes toward AI. The vast majority (85%) had concerns about the ethical implications of AI in their area of work, and 40% of those (n=2,561) believed ‘not enough regulation or governance’ was a disadvantage.
However, the number-one disadvantage cited was AI’s inability to replace human creativity, judgement, and/or empathy, while concerns about confidentiality ranked much lower.
During the SSP conference in May, Abhi Arun, CEO of TNQTech, reflected on the nature of research: “New knowledge requires divergent thinking, which is essential for research. AI will influence the process of research, including the way it is conducted and communicated. A balanced and coordinated dance between AI’s processing capabilities and human intuition will lead to new perspectives, accelerating research and new forms of knowledge.
“A potential risk is over-reliance on Gen AI solutions, which may restrict the creation of new knowledge. Innovation and divergent thinking go hand in hand. While AI has shown great potential in suggesting ideas and patterns that humans might not immediately recognize (as demonstrated by the AlphaFold project from Google DeepMind), true divergence is a uniquely human value that AI cannot replace.”
How is AI being used in peer review?
Could AI-augmented peer review propagate existing biases? Bias appears to be an inherent characteristic of these models, even when they are trained on curated data [1]. At the same time, such tools could speed up the process by assisting publishers overwhelmed with low-quality submissions.
That, at least, is the view of Chris Leonard, Director of Strategy and Innovation at Cactus Communications and the mind behind Scalene, a newsletter on the intersection of peer review, AI, and humans. Chris believes we are a long way from AI-generated peer review reports. The discussion on whether to move peer review to the pre-submission stage has been simmering, with many arguing that the role of the peer reviewer has not so much been expanded as altered. The evolving technologies that now assist human reviewers in interpreting new scientific knowledge should always be acknowledged.
Though it may seem unlikely now, the potential use of AI-generated peer review reports bears consideration. In a recent commentary [2], Chhavi Chauhan, Director of Scientific Outreach at the American Society for Investigative Pathology, and George Currie, Content Manager at eLife, conclude: “It is a question of when, rather than if, we will see a journal embrace Gen AI as a ‘peer’ reviewer, either in addition to human reviews or as a replacement.” They consider two large-scale studies showing that AI models rate manuscripts similarly to human reviewers, and that most participants rated Gen AI feedback as more beneficial than some human reviews. Using Gen AI earlier in manuscript preparation, expediting editorial workflows, benchmarking the quality of reviewer reports, and adding Gen AI as a third or fourth reviewer are all opportunities to explore. Recognising real risks of exacerbating existing inequities and creating new barriers, they call on all stakeholders to develop clear policies to enable responsible use of Gen AI in peer review.
Developing guidelines for the ethical use of AI
The US NIH prohibits the use of generative AI technologies in grant peer review and has updated its security, confidentiality, and non-disclosure agreements for peer reviewers. At the Charleston in Between event, Mohammad Hosseini presented “Why Integrity in Research and Publishing Matter” (recorded), discussing several issues facing scientists who use AI in the context of the ethical norms of science. Resnik and Hosseini [3] propose nine recommendations for the ethical use of AI in research, aiming to bridge gaps in policy and guidelines for the responsible conduct of research and publishing.
In publishing, Springer Nature has identified five principles, and Elsevier shares its responsible AI principles. Common themes emerge, primarily fairness and transparency. Policies, processes, and technology must address norms of accountability, objectivity, and social responsibility, together with legal requirements and demands for rigour and reproducibility.
Cactus, a provider of services and technology solutions for researchers and publishers, shared version 1.0 of its generative AI policy for freelancers in late July. The company states that “the unregulated use of AI by individual freelancers and contractors is prohibited, to ensure alignment with client expectations (…) It also helps prevent potential non-compliance issues that could arise from unregulated use.” The emphasis on aligning policies across the research ecosystem is well-placed and urgently needed.
In May 2023, ICMJE revised its guidelines to state how research conducted with the assistance of AI should be acknowledged, and to specify that peer reviewers should not upload manuscripts to platforms that cannot guarantee confidentiality. Because the opacity of AI systems is problematic, organisations may choose to tighten regulations further.
At Cactus, this type of technology has been moved upstream to authors with Paperpal, which was developed on open-access, peer-reviewed literature and promotes responsible writing with AI tools. Shilpi Mehra (Associate Director, Publication Support Services, Cactus) noted that, for biomedical science papers, whose structure is more prescriptive because they must comply with checklists and guidelines, AI-informed processes could screen for references that are too old, citations of retracted papers, excessive self-citations, and even invalid URLs; a minimal sketch of such a screen follows below. Using these tools alongside the REAPPRAISED checklist [4] could be a good opportunity to test their scalability. However, the reasoning that is characteristic of peer review, the evaluation of context and novelty, remains thoroughly human.
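To make the idea concrete, here is a minimal sketch of the kinds of reference checks described above. It is not a description of Paperpal or any named product: the thresholds, field names, and retraction lookup are all illustrative assumptions.

```python
# Hypothetical sketch of AI-adjacent reference screening: old references,
# retracted citations, excessive self-citation, and malformed URLs.
# Thresholds and data structures are illustrative assumptions.
from datetime import date

MAX_REFERENCE_AGE_YEARS = 10    # assumed cut-off for "too old"
MAX_SELF_CITATION_SHARE = 0.25  # assumed threshold for excessive self-citation

def screen_references(references, manuscript_authors, retracted_dois):
    """Return a list of (doi, reason) flags for an editor to review.

    `references` is a list of dicts with keys 'doi', 'year', 'authors', 'url';
    `retracted_dois` stands in for a lookup against a retraction database.
    """
    flags = []
    self_citations = 0
    this_year = date.today().year
    for ref in references:
        doi = ref.get("doi")
        if doi in retracted_dois:
            flags.append((doi, "cites a retracted paper"))
        if this_year - ref.get("year", this_year) > MAX_REFERENCE_AGE_YEARS:
            flags.append((doi, "reference may be too old"))
        if set(ref.get("authors", [])) & set(manuscript_authors):
            self_citations += 1  # shared author: count as a self-citation
        url = ref.get("url")
        if url and not url.startswith(("http://", "https://")):
            # A real screen would also dereference the link to catch dead URLs.
            flags.append((doi, "URL looks malformed"))
    if references and self_citations / len(references) > MAX_SELF_CITATION_SHARE:
        flags.append((None, f"high self-citation rate: {self_citations}/{len(references)}"))
    return flags
```

Notably, even a simple screen like this only surfaces items for a human editor rather than making decisions, consistent with the point that evaluating context and novelty remains a human task.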
As the EASE Peer Review Committee seeks input on its draft toolkit, ‘Recommendations on the Use of AI in Scholarly Communications’, by September 15, 2024, there is clearly a growing need for faster dissemination of peer review appraisals, better communication, and stronger publishing ethics and research integrity, alongside recognised governance and tools that mitigate risks.
We look forward to learning how our community is tackling these issues during #PRW2024!
Maria Machado is a current co-chair of Peer Review Week. Anne Stone is a marketing consultant at Stone Strategic Marketing Services.
References
[1] Hosseini, M., Horbach, S.P.J.M. Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review. Res Integr Peer Rev 8, 4 (2023). https://doi.org/10.1186/s41073-023-00133-5
[2] Chauhan, C., Currie, G. The Impact of Generative Artificial Intelligence on the External Review of Scientific Manuscripts and Editorial Peer Review Processes. Am J Pathol, article in press (2024). https://doi.org/10.1016/j.ajpath.2024.08.002
[3] Resnik, D.B., Hosseini, M. The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00493-8
[4] Gray, A., Bolland, M.J., Avenell, A., Klein, A.A., Gunsalus, C.K. Check for publication integrity before misconduct. Nature 577, 167-169 (2020). https://www.nature.com/articles/d41586-019-03959-6