Three key themes on artificial intelligence


Verena Weigert

Verena Weigert reports from Jisc’s Research and Innovation Sector Strategy Forum Workshop on AI and Research

Artificial intelligence is making waves in nearly every industry and sector, and research is no different; its impact on the design and management of the research system is likely to become more pronounced in the coming years. The rapid development of new AI tools presents opportunities for innovation and raises questions about what responsible use of these tools looks like in research.

It is a key time for research organisations to discuss ways AI might change, and enhance, the research and innovation sector.

Jisc recently organised a workshop to discuss AI in research, its management and leadership, with our Research and Innovation Sector Strategy Forum of UK Deputy and Pro-Vice-Chancellors and Principals for Research and Innovation. The forum is a vibrant community, with representatives from a diverse range of UK institutions, reflecting the views of senior managers and researchers in universities.

Fifteen pro-vice-chancellors and principals for research from all four UK nations met to discuss how AI might change the research and innovation sector, how AI applications could be used in research and research management, and what the implications are for researchers and research professionals.

Three key themes emerged from the conversation – the effect of AI on research practice, AI as a tool for researchers, and the possible opportunities and challenges that AI in the research sector will bring.

We need a greater understanding of AI in research practice

Forum participants emphasised that to fully realise the benefits of AI in research, we need confidence that AI is being deployed appropriately and ethically. Integrity, transparency and accountability need to be designed into the use of the technology to preserve trust in research.

Dr Jennifer Chubb, a sociologist at the University of York with a research focus on the role of responsible storytelling and ethical development of AI, highlighted that we must increase awareness of the effect of AI on research practice: “There is need for a greater understanding of the effect of AI on researchers and their creativity. Studies of the role of AI in research need to ask fundamental questions about how the technology might provide new tools that enable scholars to question the values and principles driving institutions and research processes.”

AI could make researchers’ lives easier

Professor Nick Plant, Deputy Vice-Chancellor: Research and Innovation at the University of Leeds, said he hoped that "AI could help to free up time for researchers to focus on the creative and collaborative aspects of their work and help to get back to the roots of what it means to work in academia".

Our workshop participants welcomed the potential AI has to be an enabler of new processes. They also reflected on its effect on research culture and whether it might create unsustainable metrics that disadvantage researchers. There is a need for the appropriate use of AI tools, and for assurance and ethics at an individual as well as institutional level. Participants highlighted that AI applications could help with tasks such as processing grant applications, research data management, evaluation support, demonstrating impact, financial reporting and data centre capacity management, to name a few.

Professor Maria Delgado, Vice Principal (Research and Knowledge Exchange) at the Royal Central School of Speech and Drama, University of London, was mindful of some of the language around AI, saying: "We should focus on different ways to navigate knowledge rather than highlight how AI can speed up tasks. Faster is not necessarily better and might disadvantage groups at particular career stages or in different disciplines, with possible implications for integrity and inclusivity."

Clifford Lynch, Director of the Coalition for Networked Information (CNI), who offered a US perspective on the use of AI in research, added: "The development of a national network of cloud labs is an important trend that complements AI in research." He pointed out that Carnegie Mellon University, for example, was the first university to build a cloud lab in an academic setting, designed to automate lab experiments with robotics and AI at an institutional scale.

Opportunities and challenges for institutions

Universities have been taking steps to consider what the use of AI in the research process means for their institutions.

Bella Abrams, Director of Information Technology at the University of Sheffield, highlighted that it is important "to openly acknowledge the sustainability issues related to AI". While AI can help with climate protection, she said, the energy demand and carbon emissions of some AI models that are trained on huge amounts of data are vast. With a better understanding of how much energy AI systems consume, institutions could decide what trade-offs they would like to make. In the future, there could be questions about the societal benefits of research with a high climate impact.

Professor Matt Bellgard, Pro Vice-Chancellor (Impact and Innovation) at the University of East London was interested in using AI to support institution-wide research data management and to potentially ‘capture real time data on the research process along the research journey to identify the areas of support and training needed at each stage of the research lifecycle’.

Next steps

Over the years, strict ethical guidelines have been developed, among other areas, for research collecting data from human participants, and researchers now need to make decisions on the appropriate use of AI tools to meet those and other standards. The forum raised a need for guardrails for higher education institutions to ensure the responsible, ethical and efficient use of AI technologies in the research process.

The University of Strathclyde, for example, has recently launched a project to help researchers and their institutions make informed decisions on how they use generative AI with participant data, to protect the privacy of the people who participate in research.

Many forum members were also in favour of considering the opportunities AI brings as a mechanism to think differently and to innovate across the research system as a whole: for example, exploring how it could help to create new, as yet undefined, scholarly publishing models that ensure research security and trust, enabling a leap forward in thinking about scholarly publishing.

Jisc’s Research and Innovation Sector Strategy Forum will continue to meet regularly to discuss the future benefits and challenges facing the research sector and to help shape our next steps for AI in research.

Verena Weigert is Product and Portfolio Manager (Research and Innovation Sector Strategy) at Jisc.