Cambridge launches AI research ethics policy



New guidelines from Cambridge University Press will help researchers use generative artificial intelligence (AI) tools like ChatGPT while upholding academic standards around transparency, plagiarism, accuracy and originality. 

The rules, set out in the organisation's first AI ethics policy, apply to research papers, books and other scholarly works, and include a ban on AI being treated as an ‘author’ of academic papers and books published by Cambridge University Press.

The move is aimed at providing clarity to academics amid concerns about flawed or misleading use of powerful large language models like ChatGPT in research, alongside excitement about their potential.

Mandy Hill, Managing Director for Academic at Cambridge University Press & Assessment, said: “Generative AI can enable new avenues of research and experimentation. Researchers have asked us for guidance to navigate its use. We believe academic authors, peer reviewers and editors should be free to use emerging technologies as they see fit within appropriate guidelines, just as they do with other research tools.

“Like our academic community, we are approaching this new technology with a spirit of critical engagement. In prioritising transparency, accountability, accuracy and originality, we see as much continuity as change in the use of generative AI for research. It’s obvious that tools like ChatGPT cannot and should not be treated as authors.

“We want our new policy to help the thousands of researchers we publish each year, and their many readers. We will continue to work with them as we navigate the potential biases, flaws and compelling opportunities of AI.”

R. Michael Alvarez, Professor of Political and Computational Social Science at the California Institute of Technology, said: “Generative AI introduces many issues for academic researchers and educators. As a series editor for Cambridge University Press, I appreciate the leadership the Press is taking to outline guidelines and policies for how we can use these new tools in our research and writing. I anticipate that we will be having this conversation about the opportunities and pitfalls presented by generative AI for academic publishing for many years to come.”

Professor Alvarez and his Caltech collaborators use AI, including LLMs, to detect online harassment, trolling and abusive behaviour on social media platforms and in video games such as Call of Duty, as well as to combat misinformation. Alvarez is co-editor of Quantitative and Computational Methods for Social Science, published by Cambridge University Press.

The Cambridge principles for generative AI in research publishing include that: 

  • AI use must be declared and clearly explained in publications such as research papers, just as scholars do with other software, tools and methodologies;
  • AI does not meet the Cambridge requirements for authorship, given the need for accountability. AI and LLM tools may not be listed as an author on any scholarly work published by Cambridge;
  • Any use of AI must not breach Cambridge's plagiarism policy. Scholarly works must be the author's own, and must not present others' ideas, data, words or other material without adequate citation and transparent referencing; and
  • Authors are accountable for the accuracy, integrity and originality of their research papers, including for any use of AI.
