Enabling AI while preserving research integrity

Researchers are looking more closely than ever at trust signals to mitigate risks and support informed research practices

From Elsevier

AI is rapidly becoming part of everyday research workflows. Nearly 60% of researchers are now using AI in their work, yet trust remains a major barrier to broader adoption (Researcher of the Future — a Confidence in Research Report, 2025). As AI use grows, researchers and institutions are asking an important question: how can AI support research without compromising integrity? 

As a result, researchers are looking more closely than ever at trust signals. These signals can be preserved within AI-enabled workflows when tools are selected and used responsibly, helping to mitigate risks and support informed research practices.

Signals researchers trust

Researchers continue to rely on visible, proven integrity signals when evaluating information: rigorous methods, peer review, transparency, and correction mechanisms. These signals remain essential, even as workflows evolve. 

Among them, peer review stands out as one of the strongest indicators of integrity, with 74% of researchers considering peer-reviewed research to be trustworthy (Researcher of the Future — a Confidence in Research Report, 2025).

Methodology is equally critical. Nearly eight in 10 researchers (78%) say it is extremely or very important when assessing reliability. Clear methods, accessible data and well-documented processes all reinforce confidence that findings can be trusted.

At the same time, oversight remains critical in protecting the integrity of the publishing process. Publishers are encouraged to take steps to identify a range of suspicious publishing activities and paper submissions. Elsevier employs the latest technology with strict human oversight to protect the scholarly record.

Beyond proactive measures, retractions and corrections are vital to research integrity and to protecting the credibility of science. These signals demonstrate that established processes are working. In fact, 85% of researchers agree that retractions and corrections are important to preserving the scientific record (Researcher of the Future — a Confidence in Research Report, 2025).

The publishing community must collaborate to continually improve these processes and maintain confidence in research.

Harnessing AI responsibly

AI can help researchers navigate information overload, accelerate discovery and surface new connections. However, it can also introduce hallucinations, bias, loss of provenance and false citations. While guardrails can mitigate these risks, in the larger publishing ecosystem they can translate into a greater likelihood of retractions, misinformation and decreased trust in the scientific record.

Supporting researchers in this environment requires more than access to tools. It requires selecting technologies designed for integrity and equipping users with the skills to engage with them critically.

Utilising research-grade AI

Researchers can assess AI tools against research-grade criteria. For evaluation purposes, research-grade AI refers to AI systems purpose-built for research workflows, grounded in trusted content, designed with responsible functionality, and built to support – not replace – human judgment.

In practice, research-grade AI can be evaluated across three interdependent pillars: trusted content, responsible functionality and human-in-the-loop.

Trusted content features include peer-reviewed research, strong research integrity safeguards and a curated, up-to-date content base. Content should also be obtained ethically, without violating copyright, and with clear attention to provenance, rights and governance.  

Trusted content is crucial to maintaining research integrity outside AI tools; the same principle applies within them.
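As a rough sketch of how these signals might be applied inside a tool's retrieval layer, the Python below filters a hypothetical content base down to items that carry them. The field names (peer_reviewed, retracted, licensed) are illustrative assumptions, not any vendor's actual schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Record:
    """One item in a hypothetical curated content base."""
    title: str
    peer_reviewed: bool   # has passed peer review
    retracted: bool       # retraction status is tracked, not hidden
    licensed: bool        # obtained ethically, with clear rights and governance
    published: date

def grounding_candidates(records: list[Record], max_age_years: int = 10) -> list[Record]:
    """Keep only records carrying the trust signals described above:
    peer-reviewed, not retracted, properly licensed and reasonably current."""
    cutoff_year = date.today().year - max_age_years
    return [
        r for r in records
        if r.peer_reviewed and not r.retracted and r.licensed
        and r.published.year >= cutoff_year
    ]

# Example: only the first record qualifies as grounding material.
corpus = [
    Record("Methods in X", True, False, True, date(2023, 5, 1)),
    Record("Retracted study", True, True, True, date(2022, 1, 1)),
    Record("Unreviewed preprint", False, False, True, date(2024, 3, 1)),
]
print([r.title for r in grounding_candidates(corpus)])
```

The point of the sketch is simply that curation happens before generation: content lacking these signals never reaches the model as grounding material.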

Responsible functionality places trusted content in the right context, with the right guardrails, so researchers can critically evaluate a response. Features can include citations, transparent reasoning steps and mechanisms that help users assess the strength of claims. Privacy and security features are also relevant considerations in supporting secure research environments.

These features make AI responses easier to verify and use responsibly, keeping humans in control of evaluation and decision-making. Tools that make their processes visible support both research workflows and instruction, enabling users to understand how results are generated and how they can be verified.
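Purely as an illustration of what verifiable-by-design output might look like, and not a description of any particular product, the sketch below models a response as a set of claims tied to citations, so that unsupported statements can be flagged for the user. All names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    source_id: str   # e.g. a DOI the user can follow
    excerpt: str     # the passage the claim is grounded in

@dataclass
class Claim:
    text: str
    citations: list[Citation] = field(default_factory=list)

@dataclass
class Answer:
    """A hypothetical AI response broken into independently verifiable units."""
    claims: list[Claim]

def unverifiable_claims(answer: Answer) -> list[str]:
    """Surface claims a user cannot trace back to a source."""
    return [c.text for c in answer.claims if not c.citations]

answer = Answer(claims=[
    Claim("Finding A", [Citation("10.1000/xyz123", "supporting passage")]),
    Claim("Finding B"),   # no citation: flagged for the user to check
])
print(unverifiable_claims(answer))  # ['Finding B']
```

Structuring output this way shifts the burden of proof onto the tool: anything it cannot attribute is visibly marked, rather than blended into fluent prose.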

Human-in-the-loop means human involvement during the creation, maintenance and use of the tool. When developing and maintaining the tool, responsible AI and design principles should be applied. These principles provide further guardrails for developers and users, mitigating risks and supporting responsible, informed use.

Human users are also a critical part of remaining “in the loop”. While guardrails can be in place, users play a key role in critically evaluating tool responses.

Librarians play a critical role in maintaining this human-in-the-loop model by teaching users how to interpret AI outputs, verify sources and apply critical evaluation frameworks. Their involvement ensures that AI enhances established research practices.
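To pull the three pillars together, here is a minimal checklist a library might adapt when reviewing candidate tools. The criteria wording paraphrases this article; it is an illustrative sketch, not a formal standard or any institution's actual rubric.

```python
# Illustrative checklist consolidating the three pillars discussed above.
RESEARCH_GRADE_CRITERIA = {
    "trusted_content": [
        "grounded in peer-reviewed, curated, up-to-date sources",
        "content obtained ethically, with clear provenance and rights",
    ],
    "responsible_functionality": [
        "responses carry citations and transparent reasoning steps",
        "privacy and security protections for research environments",
    ],
    "human_in_the_loop": [
        "responsible AI principles applied in development and maintenance",
        "users can inspect, verify and question outputs",
    ],
}

def review_tool(tool_name: str, passed: set[str]) -> None:
    """Print a simple pass/check report for each pillar's criteria."""
    for pillar, criteria in RESEARCH_GRADE_CRITERIA.items():
        print(f"{tool_name}: {pillar}")
        for criterion in criteria:
            mark = "PASS" if criterion in passed else "CHECK"
            print(f"  [{mark}] {criterion}")

review_tool("ExampleTool", {"responses carry citations and transparent reasoning steps"})
```

Items marked CHECK are not failures; they are prompts for the librarian-led evaluation and instruction the previous paragraph describes.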

Reinforcing integrity 

Research-grade AI tools can support the integrity signals researchers identify as important. The backbone of trusted content reinforces signals such as rigorous methods and peer review, and helps surface corrections and retractions in a timely way.

Responsible functionality and human-in-the-loop features can support research integrity efforts by providing transparency. Users can draw on a transparent search process, or on citations, to evaluate the merits of a response.

For libraries and institutions, research integrity now extends beyond evaluating content alone. It also includes evaluating the AI systems researchers increasingly rely on. Tools that obscure sources, lack transparency, or fail to reflect the scholarly record risk weakening trust. Research-grade AI offers a more responsible path forward.

