AI – “immense potential and unprecedented challenges”
Professor Jörg-Rüdiger Sack discusses how new technologies are transforming academic research
As we enter a new era in academic research, artificial intelligence (AI) is poised to change the way we discover, analyse, and disseminate knowledge. The potential of AI to enhance research capabilities is immense, yet it also brings unprecedented challenges to the integrity of scientific inquiry. As chair of the Content Selection and Advisory Board (CSAB) for Scopus, I have witnessed first-hand how the technologies and platforms used for research have evolved, and AI has accordingly become a regular subject of our board meetings. AI has the power to transform research, but we must implement it responsibly and remain aware of the emerging threats it poses.
AI’s positive influence on research
The integration of AI into research processes has already begun to yield remarkable results. Tools powered by generative and other forms of AI are accelerating discovery and advancing innovation by sifting through vast amounts of data at unprecedented speed. This efficiency is not merely convenient; it is reshaping how researchers approach academic discovery. Productivity is one clear example: according to a recent McKinsey Global Survey, adoption of generative AI tools has grown explosively, with 40% of organisations reporting increased productivity in at least one business function due to AI[1].
Potential exploitation of AI
However, the ease with which AI can generate convincing yet falsified data has reached alarming levels. Bad actors – paper mills, hijacked journals, and predatory publications – are exploiting AI technology to produce fraudulent research. This proliferation of false information threatens to undermine the credibility of scientific research and erode public trust in research institutions. This is something we can ill afford, especially as rapidly changing science, unclear messaging, and misinformation have already begun to eat away at people’s confidence in research.
Researchers are not blind to these risks, however. Elsevier’s global survey found that 81% of researchers are concerned that the use of generative AI in research processes will erode critical thinking. At the same time, 58% say that training an AI model to be factually accurate and moral would strongly increase their trust in using a generative AI tool in research[2].
Taking responsibility
The responsibility to maintain the integrity of research in this new landscape falls on all of us in the academic community. Educators must take the lead in teaching the proper use of AI, emphasising its role as a tool to augment human intelligence rather than replace critical thinking. We must instil in the next generation of researchers a deep understanding of both the capabilities and limitations of AI in scientific inquiry. That includes training them to verify that any AI-generated data they use is credible, accurate, and up to date, and to take accountability for sharing fair research. Academics, publishers, and research institutions alike are at the forefront of academic discovery, and it is essential that they commit to safeguarding high-quality, trusted content – a commitment more critical now than ever before. Practical steps include remaining vigilant in identifying journals that spread falsified information or have compromised their quality standards.
There is, of course, a balance to strike: we must also recognise the immense potential AI holds for advancing future research. A staggering 92% of researchers believe AI will help rapidly increase the volume of scholarly research, while 87% anticipate it will improve overall work quality[2]. These statistics underscore the optimism within the research community about AI’s potential to drive progress.
The future of AI in research: next steps
As we navigate this new terrain, it is crucial that we embrace innovation while simultaneously safeguarding scientific integrity. The first and necessary step is to develop robust verification processes that can keep pace with AI’s evolving capabilities. This means implementing advanced AI-powered tools to detect fraudulent research while also educating researchers on ethical AI usage.
The future of research in the AI era is both exciting and daunting. As technology continues to advance at breakneck speed, we must adapt our practices and policies accordingly. The global AI market is expected to reach a staggering $407 billion by 2027[3], indicating the influence AI will have across all sectors, including academic research.
In conclusion, as we forge ahead into this new frontier, the research community must unite in its commitment to harnessing AI’s potential while fiercely protecting the integrity of scientific inquiry. Only by doing so can we ensure that the transformative power of AI in research leads to genuine advancements that benefit society as a whole.
Professor Jörg-Rüdiger Sack is chairman of the Scopus Content Selection and Advisory Board (CSAB)
References:
[1] McKinsey & Company, “The state of AI in 2023: Generative AI’s breakout year,” August 1, 2023
[2] Elsevier AI Report 2024
[3] CompTIA, “Artificial Intelligence Statistics and Facts,” February 29, 2024