Cambridge urges licensing safeguards for training LLMs

Cambridge University Press & Assessment has submitted new consultation responses to both the European Union and the Government of India in recent weeks, continuing its push for licensing-based approaches to the use of copyrighted works in training Large Language Models (LLMs).
The latest submissions follow a year of sustained engagement by Cambridge with governments consulting on how copyright should apply to generative AI. Over the past 12 months, the organisation has responded to consultations in India, the EU and the UK, arguing that existing licensing frameworks offer a proven, scalable way to support AI innovation while protecting the creative industries.
In its response to the European Union, Cambridge focused on the practical implementation of the 2024 EU Artificial Intelligence Act, widely regarded as the most comprehensive AI regulation to date. The submission examined current mechanisms that allow copyright owners to opt out of their works being used for AI training, raising concerns that these systems are not yet robust enough to reliably prevent the ingestion of protected content.
In parallel, Cambridge responded to the Government of India’s Working Paper on Generative AI and Copyright. The consultation proposes a ‘Hybrid Model’ under which AI developers would gain access to all copyrighted content through a blanket licence in exchange for a single flat fee, with rights holders receiving a share of the revenue. While the model is intended to simplify licensing, reduce transaction costs and support access to training data, Cambridge argued that it would fail to meet these aims and would undermine the foundations of copyright law, causing harm to India’s creative industries.
Instead, Cambridge advocated for the use of existing collective and individual licensing solutions that are already operating effectively in the market. The response positioned these approaches as better aligned with India’s ‘Orange Economy’ policy, adopted in 2025 to support the country’s creative and cultural sectors.
These recent submissions build on Cambridge’s response in February 2025 to a UK government proposal that would allow AI companies to use copyrighted works for training unless rights holders proactively opted out. Cambridge warned that such an approach would enable large-scale scraping of academic research and learning materials without adequate protection, despite the availability of established licensing models already being used by some AI companies.
Throughout these consultations, Cambridge and other industry bodies, including the Publishers Association, have called for greater transparency over the content used to train LLMs, fair remuneration for rights holders, and clear attribution of source material to support trust in AI-generated outputs.
Mandy Hill, Managing Director at Cambridge University Press and President of the Publishers Association, said: “Many countries around the world benefit enormously, both culturally and economically, from the output of their creative industries. But these industries are threatened by the refusal of big tech companies to adopt existing, effective licensing arrangements for content that has been or is being used to train Large Language Models.
“Cambridge supports innovations in AI, responsible licensing, and collaboration with the tech industry. Key to achieving this balance is for governments to uphold the legal foundations of copyright when developing AI policies to avoid damage to the creative industries.”
Arun Rajamani, Managing Director at Cambridge University Press & Assessment, South Asia, added: “Ethical and responsible AI development depends on ensuring that creators and rights holders are appropriately recognised and compensated when their works are used to train Large Language Models. As policy discussions continue, it is important to consider whether proposed approaches achieve this objective in a balanced and effective manner.
“Above all, efforts to support AI development should seek to preserve the rights of creators and rights holders, whose works are integral to the development of high-quality, reliable AI technologies.
“As an education provider and publisher with deep connections across India, Cambridge recognises the potential for responsibly developed AI systems to deliver meaningful benefits to society. Ensuring fair recognition and compensation for creators remains an important component of that process.”
