The medium is the model: AI as a new environment for research

Christopher Kenneally

Christopher Kenneally argues that AI is not just a tool, but a medium reshaping research culture, authority and communication

“The medium is the message.”

When the Canadian philosopher Marshall McLuhan coined this phrase in 1964, he reframed how societies understand technology. A professor of Communication and Culture at the University of Toronto, McLuhan argued that the most profound effects of media do not lie in their content, but in how they reshape perception, behaviour, and social organisation.

That insight has lost none of its force. In fact, it may be more relevant now than at any point since the rise of electronic media. Artificial intelligence is rapidly becoming embedded across the research lifecycle – from discovery and authorship to evaluation, dissemination, and impact assessment. Yet debate about AI in research still focuses overwhelmingly on outputs: accuracy, hallucinations, bias, productivity gains.

McLuhan would have warned us that this fixation on content is precisely the problem.

The invisible medium

McLuhan likened humanity’s relationship with media to a fish in water. The fish has no awareness of the water that sustains it. “We do not know who discovered water,” he said, “but we know it wasn’t the fish.”

AI is fast becoming our water. Once confined to specialist domains, AI systems now mediate everyday research practice. They summarise papers, draft grant applications, recommend citations, rank journals, assess impact, and increasingly shape what is visible – and invisible – in scholarly communication.

For McLuhan, a medium is a technological extension of the human body. Shoes extend the foot; roads extend movement. In this sense, generative AI is an extension of the human intellect. It does not merely assist thinking; it reorganises it.

From message to massage

McLuhan famously embraced ambiguity. His book The Medium Is the Massage owed its title to a typesetting error. Rather than correct it, he insisted it stay. The phrase captured a deeper truth: media work us over. They condition habits, pace, authority, and attention.

AI “massages” the research system in similar ways. It rewards statistical plausibility over originality. It shifts authority from individual expertise toward probabilistic consensus generated at scale.

These are not incidental side-effects. They are structural properties of the medium.

Media create environments

Every dominant medium creates a cultural environment. Print privileged linearity, sequential logic, and fixed points of view – conditions under which modern science flourished. Electronic media, McLuhan argued, reversed this dynamic, restoring simultaneity and participation while destabilising hierarchy.

AI introduces another rupture. It is not simply a faster tool layered onto existing workflows. It reconfigures authorship, originality, and evaluation – the core currencies of research culture.

When scientific articles and press releases are increasingly written to be parsed, ranked, and summarised by machines rather than read closely by humans, the environment of scholarship changes. Visibility, not validity, becomes decisive. Style converges. Novelty risks being statistically marginalised.

Innovation without understanding

McLuhan was often caricatured as a technological enthusiast. In reality, he was deeply cautious. Trained as a literary scholar, he warned against what he called “haphazard innovation” – change adopted without understanding its systemic consequences.

“Inventions,” he wrote, “have extended man’s physical powers rather than the powers of his mind.” That distinction no longer holds. AI directly extends cognitive capacity, and in doing so, challenges the assumptions underpinning peer review, attribution, accountability, and trust.

For McLuhan, understanding a medium functioned like a thermostat: not stopping change, but regulating it. Applied to AI, this suggests that governance focused solely on ethics checklists or model transparency is insufficient. What is required is medium-level literacy: an understanding of how AI reshapes the research environment itself.

Tools that reshape thought

Human evolution has always been shaped by tools. Early stone implements did more than enable survival; they altered cognition, social learning, and culture. As archaeologists have shown, toolmaking helped shape the structure of the human brain.

AI belongs in this lineage; like earlier technologies, it promises gains while introducing disruption. The question is not whether disruption will occur, but whether research institutions are prepared to recognise and manage it.

Today’s essential research device is the smartphone – a tool that fragments attention and compresses complexity. AI builds on this environment, favouring speed, scale, and synthesis over slow, contested understanding.

Writing for machines

By the middle of this decade, much scholarly communication is no longer being written primarily for human readers. It is written for indexing systems, ranking algorithms, and generative models that summarise research for downstream consumption.

This represents a profound shift. When machines become the primary interpreters of scholarship, the underlying structure of communication matters more than any individual text. What is rewarded, amplified, or ignored is determined by model architecture and training data – often opaque and external to the research community.

McLuhan would have recognised this moment. A new medium always produces “unseen environments” before societies learn how to perceive them.

Turning off the buttons

McLuhan once remarked that he opposed innovation not because he feared change, but because understanding was the only effective form of resistance. “The best way to oppose it is to understand it,” he said. “And then you know where to turn off the buttons.”

AI now confronts research with precisely this challenge. It is not enough to ask whether AI tools are accurate or efficient. The harder question is how this medium reshapes authority, incentives, and knowledge itself.

“It is the business of the future to be dangerous,” McLuhan observed. If that is so, then the responsibility of the research community is clear: to understand AI as a medium, not merely a tool, and to shape its environment before it shapes us.

Christopher Kenneally is the host and chief correspondent for The Spoken World, a podcast series covering audiobooks and audio publishing
