Research integrity solutions need to scale both ways

Considering the needs of smaller publishers will yield more effective solutions and smoother implementation at every scale, writes Danielle Padula
If you scan the latest articles on any scholarly publishing news site, you’re bound to find at least one story related to research integrity. From paper mills to AI misuse, ethical concerns have skyrocketed in recent years, affecting all stakeholders (I’m preaching to the choir, I know!).
The “Big Five” have responded with big investments in proprietary and external manuscript screening software. Smaller publishers are also feeling pressure to stay ahead of research integrity issues that already affect them or soon could. However, most can’t go “all in” on an array of tools. They have to work within their available editorial personnel and budgetary resources.
There’s a benefit to that limitation: it drives them to be hyper-focused on operational efficiency and to probe the human incentive structures that exacerbate current research integrity issues.
Considering the needs of smaller publishers in the development of research integrity infrastructure, both human and technical, will yield more effective solutions and smoother implementation at every scale. That starts with understanding what those needs are.
Insight into smaller publishers’ research integrity priorities
Between March and June 2025, Scholastica and Maverick Publishing Specialists conducted a survey of small-to-mid-sized scholarly publishers (defined as those with fewer than 1,000 employees) to gain insight into their near- and long-term technical priorities, with a focus on research integrity and the rise of AI. The survey findings, published in the “Technology Needs of Small and Medium Journal Publishers” report, reveal that small and mid-sized publishers are seeking to increase automation in varying research integrity areas but are grappling with challenges related to overall operational efficiency.
More than 50% of survey participants reported an increase in time spent reviewing papers for ethical concerns, and more than 60% expected their publishing organisation would seek to further automate research integrity checks over the next two years. Among automation focus areas were “AI-generated content detection” and “plagiarism detection,” chosen by over 60% of respondents, and “authorship verification” and “AI-generated image detection,” chosen by over 50%. When asked about challenges with their publishing technology, more than half of the respondents indicated “budget constraints” and “human resource constraints.” Other leading challenges were “integrating with external platforms/tools” and “lack of technical support.”
The survey results echo concerns expressed during the 2025 EASE Conference session “Research Integrity Checks: the Promise and Perils of New Tech and How to Find a Balance,” where panellists discussed strategies to help smaller publishers stay on top of expanding research integrity checks while facing real resource constraints.
Offering recommendations for smaller publishers considering new research integrity screening tools, the panellists emphasised that decisions go beyond the ticket price of software options: the potential benefit of catching more compromised papers has to be weighed against the editorial time spent assessing submissions flagged as possibly having ethical issues. Most research integrity check tools provide scores or gradients of concern, so how well those thresholds can be calibrated, and how well editorial teams are prepared to use them, will make all the difference in whether the tools prove to be time savers or new sources of workflow strain.
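To make that tradeoff concrete, here is a rough back-of-the-envelope model, a minimal Python sketch in which every figure is a hypothetical placeholder rather than data from the survey or any vendor. It compares the triage time a flagging threshold creates against the downstream time that genuine catches save:

```python
# Back-of-the-envelope model of a screening tool's net time impact.
# Every figure below is a hypothetical placeholder; substitute your own data.

SUBMISSIONS_PER_MONTH = 200
MINUTES_PER_FLAG_REVIEW = 20        # editor time to triage one flagged paper
MINUTES_SAVED_PER_TRUE_CATCH = 120  # downstream time saved per genuine problem caught

# Assumed behaviour of the tool at three alert-threshold settings:
# (share of submissions flagged, share of flags that are genuine problems)
THRESHOLDS = {
    "lenient": (0.25, 0.10),
    "moderate": (0.10, 0.25),
    "strict": (0.04, 0.50),
}

for name, (flag_rate, precision) in THRESHOLDS.items():
    flagged = SUBMISSIONS_PER_MONTH * flag_rate
    true_catches = flagged * precision
    triage_cost = flagged * MINUTES_PER_FLAG_REVIEW  # minutes spent on flags
    time_saved = true_catches * MINUTES_SAVED_PER_TRUE_CATCH
    net_hours = (time_saved - triage_cost) / 60
    print(f"{name:>8}: {flagged:.0f} flags, {true_catches:.0f} genuine catches, "
          f"net {net_hours:+.1f} editor-hours/month")
```

Under these invented numbers, the lenient threshold actually costs editor time overall, which is exactly the workflow-strain scenario the panellists warned about; the “right” setting depends entirely on a journal’s volumes and staffing.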
During the EASE session, Kim Eggleton, Head of Peer Review and Research Integrity at IOP Publishing, discussed the importance of publishers trialling different solutions to determine if, when, and how to apply them to their workflows. Among the primary considerations she raised were determining the most efficient points in workflows for integrity check automation and deciding whether to run checks on every manuscript or on an as-needed basis; the latter ties into budgeting for metered solutions. She also stressed that vendor transparency around how tools work, so that publishers can develop targeted plans for when to run checks and how to act on warnings, along with a willingness to offer flexible screening options, is critical to helping teams get their workflows right, find their best-fit solutions, and reap the most value from technology.
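Her point about metered solutions lends itself to simple arithmetic. The sketch below (again Python, with invented figures standing in for real pricing and volumes) compares screening every manuscript with running checks only when trigger criteria, such as field, submission source, or desk-review red flags, are met:

```python
# Hypothetical cost comparison of two policies for a metered screening tool.
# All numbers are illustrative assumptions, not vendor pricing.

SUBMISSIONS_PER_YEAR = 2400
COST_PER_CHECK = 1.50  # assumed metered fee per manuscript screened
TRIGGER_RATE = 0.30    # assumed share of papers meeting "as-needed" criteria

screen_all = SUBMISSIONS_PER_YEAR * COST_PER_CHECK
screen_triggered = SUBMISSIONS_PER_YEAR * TRIGGER_RATE * COST_PER_CHECK

print(f"Screen everything:  ${screen_all:,.0f}/year")
print(f"Screen on triggers: ${screen_triggered:,.0f}/year "
      f"(saves ${screen_all - screen_triggered:,.0f}, at the risk of missing "
      f"problems in papers that never meet a trigger)")
```

The saving is real, but so is the blind spot: papers that never meet a trigger are never screened, which makes the every-manuscript versus as-needed question as much a risk decision as a budgeting one.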
Facilitating flexible solutions development and implementation
Ultimately, small and mid-sized publishers need software they can leverage with their current organisational capacity. With market consolidation, we’ve seen publishing systems become increasingly enterprise-focused, which can hinder process innovation at organisations that don’t have the resources to take on complex system customisation, whether internally or through vendors. The findings of the “Technology Needs of Small and Medium Journal Publishers” survey indicate that many smaller publishers may be grappling with this reality.
As Ithaka S+R’s report “The Second Digital Transformation of Scholarly Publishing” recommended, now is a time for “smaller and midsize publishing organisations, looking across organisational models and governance types, [to] find their collective voice in the enterprise publishing systems marketplace, to ensure that they are advocating effectively for their long-term strategic interests.”
All of that is not to say that smaller publishers don’t need enterprise-level system capabilities, but rather that such options are only as useful as they are accessible. There is an opportunity for vendors to partner with smaller publishers to develop software that can scale with publishers’ needs, helping them operate more efficiently and tackle research integrity checks in targeted ways. It’s also an opportunity to build better solutions overall. When product roadmaps are designed around a small number of large clients, the result can be bloated base systems and Frankenstein custom instances that divert providers’ resources and attention from the greater value proposition they initially set out to offer (i.e., simplifying workflows rather than creating a new form of system maintenance work). These are technical, product, and marketing considerations alike.
Where niche tools emerge as the best way to solve specific problems, platform providers can support publishers by facilitating streamlined integrations with them. As noted in the “Technology Needs of Small and Medium Journal Publishers” report, offering “self-serve” integration options that users can configure within applications is one way for software providers to reduce the need for custom development, which can compound the budget and human resource constraints expressed by survey participants.
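What might “self-serve” look like in practice? One hypothetical shape is a declarative configuration that a journal manager edits inside the application, mapping workflow events to external check tools. The Python sketch below invents all tool names, endpoints, and field names purely for illustration; none correspond to a real product:

```python
# Hypothetical "self-serve" integration config: a journal manager declares
# which external checks run at which workflow step, with no custom code.
# Tool names, endpoints, and field names are invented for illustration.

INTEGRATIONS = [
    {
        "event": "submission.received",
        "tool": "example-plagiarism-check",
        "endpoint": "https://api.example-tool.test/v1/check",
        "send": ["manuscript_pdf", "author_list"],
        "on_flag": "route_to_integrity_queue",
    },
    {
        "event": "revision.received",
        "tool": "example-image-screening",
        "endpoint": "https://api.example-images.test/v1/scan",
        "send": ["figure_files"],
        "on_flag": "notify_handling_editor",
    },
]

def checks_for(event: str) -> list[dict]:
    """Return the configured checks to run when a workflow event fires."""
    return [c for c in INTEGRATIONS if c["event"] == event]

for check in checks_for("submission.received"):
    print(f"Would send {check['send']} to {check['endpoint']} "
          f"and, on a flag, apply '{check['on_flag']}'")
```

The point is that the publisher changes a setting, not the platform’s code, keeping integration work within reach of teams without in-house developers.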
Looking at the bigger picture
There’s no question that new technology has heightened the potential for publishing malpractice. Recent rhetoric often frames the present moment as an arms race between humans and rising AI robots. However, it’s also an opportunity for all stakeholders to critically consider the motivation behind the problem. Is the emergence of more complex research integrity scams the disease, or is it a symptom?
In an Inside Higher Ed article about the rise of research integrity tools, Ivan Oransky, medical journalist and co-founder of Retraction Watch, said, “anything that can be done to filter out the junk that’s currently polluting the scientific literature is a good thing.” He qualified his statement with, “this is just more evidence that people are trying to shove far too much through the peer-review system.” He added, “that wouldn’t be a problem except for the fact that everybody’s either directly—or implicitly—encouraging terrible publish-or-perish incentives.”
The punchline: we need technology to defend against research integrity risks, but let’s ensure the machines work in concert with initiatives that address systemic issues, lest we double down on solutions that solve problems reactively rather than proactively.
Scholarly societies and academic institutions that operate journals independently are the closest publishers to research communities, having deep roots in the disciplines they serve and an understanding of the broader systems that influence researcher behaviour. They are critical stakeholders in developing the future of both human publishing standards and publishing software.
Danielle Padula is Head of Marketing & Community Development at Scholastica
