Empowering researchers by bridging the AI trust gap

Purpose-built AI tools that draw on trusted content and provide transparency are the way forward, writes Maxim Khan
I enjoy speaking with researchers, librarians and other stakeholders across academia. They’re always thought-provoking conversations that I learn from. Last year, a consistent theme emerged: AI feels like it’s everywhere, but confidence in AI is not.
Why?
Global research and higher education rests on a long-standing agreement: governments and funders invest in universities; universities convert that investment into talent, knowledge, and innovation; and society benefits through economic growth, better healthcare, and societal progress. This agreement is under strain.
Universities and researchers face pressure to align with society’s priorities, demonstrate real-world impact and operate more efficiently in return for public investment. They have to do this while navigating research that is growing in volume and complexity, and while misinformation and unethical practices are on the rise. Only research that demonstrates value and stewardship of resources will secure continued support.
How can technology and AI help?
Technology, and generative AI in particular, is widely viewed as a critical enabler that can help with some of these underlying challenges. While researchers recognise that AI can help them, they do not trust generic AI tools. The majority of researchers believe AI can transform education and knowledge creation, and 84% say they have used AI tools in their work, yet only 22% trust those tools. This gap between adoption and trust raises an important question: what creates the trust gap, and how can we close it?
What is the trust gap?
The problem lies in concerns about generic AI tools that can appear as ‘black boxes’: their algorithms can be opaque, what’s in the underlying corpus and how it’s used can be unclear, the corpus can miss key peer-reviewed research, and it can be hard to ‘dig behind the output’ or see the output in context. As a result, researchers cannot always calibrate their confidence in the outputs these tools provide. That sits uneasily with the spirit of research, where transparency and explainability are essential. The stakes are high: such tools risk disempowering researchers and producing conclusions built on shaky ground, with effects that ripple out across research and wider society.
How do we close the trust gap?
Tools that use AI can play an increasingly important role in addressing researchers’ needs across their entire workflow, provided a responsible approach is taken to how the AI is grounded and how the tools empower researchers’ creativity and critical thinking. Consider the growth of scientific output: millions of papers are published annually across thousands of journals, making it very hard for researchers to keep pace. AI can sift through this sea of data, identify patterns and gaps, and surface insights that help researchers calibrate the strength of the evidence as they draw out connections and conclusions to build on.
Closing the trust gap means developing tools together with the community: tools that put researchers in the driver’s seat, empower their creativity and support their critical thinking. We think this includes:
- Working deeply with the research community: AI tools for researchers should be built with a deep understanding of how the research ecosystem works and what it needs.
- Ensuring transparency of the corpus and trust in how it’s used: visibility into what is in the corpus, including peer-reviewed research; trust in how evidence is weighed in producing insights; and full provenance of what is used.
- Giving insights that spark curiosity and empower critical thinking: the ability to dig behind insights, put them in context across multiple perspectives, and critically evaluate their strength.
When approached in this way, AI can play a central role in supporting the future of research, including the interdisciplinary research that addresses some of society’s biggest challenges.
We took exactly this approach when we created LeapSpace together with the research community. LeapSpace uses curated, peer-reviewed content, including full-text content from the world’s leading publishers and societies, ensuring critical research isn’t missed. AI-generated insights include references and come with ‘Trust Cards’, which provide context and transparency for every result, helping researchers calibrate the strength of the evidence as they draw out connections and conclusions to build on.
We believe purpose-built AI tools for research – tools that draw on trusted content, provide transparency at every step and have the clear goal of empowering researchers’ creativity and critical thinking – are the way forward. Only then can we harness the full potential of AI in a way that builds trust and accelerates the impact of research.
Maxim Khan is Senior Vice President A&G Solutions, Elsevier
