96% of researchers say AI will be used for misinformation
Elsevier has published a survey of 300 corporate researchers in industries including pharmaceuticals, life sciences and chemicals. The 2024 Elsevier Attitudes on AI Report explores how corporate researchers feel about the use of Artificial Intelligence (AI) and Generative AI (GAI), finding that more than a third (38%) have already used AI for work purposes and three quarters (76%) expect to do so within the next two to five years. The survey is part of a larger Elsevier study of attitudes towards AI among 3,000 researchers and clinicians across 123 countries.
Overall, corporate researchers feel positive about the adoption of AI. The vast majority (96%) think AI will accelerate knowledge discovery, and 71% say the impact of AI in their area will be transformative or significant. Importantly for commercial R&D organizations, most respondents also believe AI will deliver cost savings for businesses (93%), increase work quality (87%), and free up time to focus on higher-value projects (85%).
However, despite this positive sentiment, the findings reveal that corporate researchers have several concerns about the risks surrounding AI's use or misuse: 96% believe AI will be used for misinformation at least to some extent, 84% that it will cause critical errors, and 86% that it will lead to weakened critical thinking.
“Researchers in fields from pharmaceuticals to biotechnology to chemicals are clearly telling us they have an appetite for AI to aid their work, but that ethics, transparency and accuracy remain paramount,” commented Mirit Eldor, Managing Director, Life Sciences at Elsevier. “We know from conversations with our customers that trust in data quality and provenance is critical to how they view AI’s ability to augment their R&D approach. This report has highlighted the steps that must be taken to embed confidence in the AI tools of today and tomorrow. Scientists and R&D leaders have relied on Elsevier for quality content and data that powers their research for 150 years, and we look forward to continuing to support customers to use AI effectively and responsibly as the technology develops.”
At an organizational level, corporate R&D functions are aware of the risk of "shadow AI" – the unsanctioned use of AI. More than half (55%) of respondents are prohibited from uploading confidential information into public GAI platforms, and 29% are prohibited from using public GAI for certain purposes. This is not surprising given that industries like life sciences and chemicals are heavily regulated. Moreover, the nuance of scientific questions in disciplines such as drug discovery demands a level of precision that public GAI tools are unable to deliver.
Accuracy and transparency in these sectors are critical, and corporate researchers are clear on what is needed to increase their trust in AI/GAI tools. A significant majority (91%) expect GAI-dependent tools' results to be based solely on high-quality, trusted sources, and 60% say ensuring the confidentiality of inputs would increase their trust in such a tool. To deliver on these needs, organizations require domain-specific GAI fine-tuned on high-quality, verified internal and external data to enable precision in research.
“The power of AI will be unleashed when organizations are able to integrate reliable scientific data with secure computational ecosystems, to build intelligent and specialist AI applications that solve scientific problems,” continued Mirit Eldor. “Elsevier meets this need by operating at the intersection of data, technology, and applied science to deliver trusted solutions that drive discovery. We are working closely with our customers to continue developing explainable AI that draws on peer-reviewed content, extensive curated data sets, and sophisticated analytics that powers life-changing research.”