Study to examine ethical risks of AI in research

A new 10-month project led at the University of Strathclyde aims to help researchers and their institutions make informed decisions about how they use generative AI with participant data, to protect the privacy of those who take part in research. As part of the work, the team will gauge the views and concerns of University Research Ethics Committees around the UK.

The rapid advancement of generative AI and Large Language Models (LLMs), such as ChatGPT, Bard and Claude, presents researchers with exciting opportunities for innovation and efficiency. Generative AI can assist researchers in many ways, from designing data collection tools and generating survey responses to data cleaning, analysis and reporting. There has been widespread concern over its use, however, and, as with any new tool, it needs to be applied responsibly.

The project has been awarded £100,000 in funding from REPHRAIN, the National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online, and is being carried out in collaboration with the University of Edinburgh.

Professor Wendy Moncur, of Strathclyde’s Department of Computer and Information Sciences, who is leading the project, said: “Generative AI capabilities are impressive and can save researchers time and give new insights. We will help researchers and their universities to foresee and avoid potential pitfalls in its use. 

“These pitfalls include participant re-identification, where we have promised study participants that they will be anonymous, yet generative AI undoes our anonymisation and re-identifies them. Another potential pitfall is when we ask generative AI to generate extra data based on participant data we already have, and it ‘hallucinates’, making up misleading or even defamatory information about people.

“Our aim is to enable UK universities to exploit the incredible potential of generative AI, while protecting participants’ privacy and the excellent quality of UK academic research, by understanding and guarding against potential pitfalls.”

The research aims to help guide research institutions, University Research Ethics Committees, regulatory authorities, funders (including REPHRAIN itself), data custodians, professional organisations, publishers and advocacy groups in their early encounters with research involving generative AI.

The project is informed by the UK Government’s Futures Toolkit, a resource that policy professionals can use to embed long-term strategic thinking in the policy and strategy process.
