Measuring Usage in the Age of AI

Tasha Mellins-Cohen outlines COUNTER Metrics’ best practice guidance for usage metrics associated with generative and agentic AI
For more than two decades, the COUNTER Code of Practice has served as the global standard for measuring and reporting normalised content usage metrics. However, the rapid rise of generative and agentic artificial intelligence (AI) is fundamentally changing how researchers discover and consume information.
The data shift
Traditional engagement metrics are facing a structural shift. AI answer engines and summarisation tools can extract insights from peer-reviewed research without ever sending a user to a publisher website. This creates a value gap: published content creates significant value for the researcher, their institution, and their funder, but generates zero site visits for the publisher.
It wasn’t until libraries around the world started pulling their 2025 COUNTER reports that we had good evidence for the zero-click conundrum. Comparing 2025 with 2024, libraries reported major changes in usage patterns: even as publishers saw significant spikes in raw usage, both they and their library customers observed a noticeable decrease in human usage in COUNTER reports.
Rather than organic human engagement, we originally thought that the spikes in raw usage data might indicate agentic activity, whether from publishers’ own tools, third-party systems, or even researchers’ custom-built agents. Further investigation backs up that supposition. Libraries that licensed AI tools, like (but not limited to!) ProQuest’s Research Assistant or Google’s Gemini, seemed to be experiencing larger COUNTER usage drops than libraries that still encouraged students and faculty to use their licensed content directly.
Do we still need normalised metrics?
COUNTER’s mission is to bring the knowledge community together around a standard that ensures usage metrics are consistent, credible, and comparable across platforms. As an open standard, we’re signatories to the Principles of Open Scholarly Infrastructure. That means if we’re no longer needed, we will take responsible steps to transition or wind down our operations.
Within the context of AI-driven behavioural shifts, we needed to ask if COUNTER is still useful. Do libraries, consortia, publishers and technology providers need us in a zero-click environment? The answer was a resounding YES. If anything, in our new AI world there is an even greater need for normalised metrics.
Are publishers selling content, or services? Who are they selling to? Should a library invest in content when usage is mediated by an AI agent rather than a direct human visit? Should libraries redirect their investments into AI services instead? What’s the return on investment in those scenarios? How can we be sure we’re comparing AI usage fairly for different services and platforms? How can we prevent AI usage from swamping traditional human activity, making it impossible to distinguish genuine research engagement from automated processing? And of course, how can we protect academic freedom to research by separating user behaviours from usage reporting?
It turns out that COUNTER might be more necessary than ever before. But to address these new questions, we had to look again at the Code of Practice and its traditional human-centric usage metrics.
Developing AI usage guidelines
Release 5.1 of the COUNTER Code of Practice, which came into effect in January 2025, was approved for publication at the end of November 2022 – just weeks before ChatGPT hit the market. There was no way for us to predict how widespread the impacts would be on the scholarly communications industry, nor to know how they might be measured, so we proceeded with R5.1 as originally planned.
By spring 2025 most publishers had implemented the updated Code, and it was time to tackle the issue of AI usage. Our Advisory Committee spun up a working group to develop best practice guidelines, with representatives from publishers, technology providers, and libraries.
Following extensive community consultation on the draft version, including with big AI developers, COUNTER published new best practice guidelines on generative and agentic AI usage in April 2026. You can read them in full at Best Practice on Generative and Agentic AI usage metrics.
The best practice introduces several key extensions to the Code, focused in two areas.
- Defining the “Agent”
We’ve historically treated all bots the same way, requiring that all bot usage be excluded from COUNTER reports. That meant our first step had to be distinguishing between malicious bots and crawlers (which must still be excluded!), real humans, text and data mining, and AI systems. Under our new guidelines, AI usage can and should be reported using the new Access_Method “Agent”. This allows reports to separate AI usage from “Regular” human usage and from text and data mining activity (“TDM”).
- Dedicated AI Metric Types
To allay completely valid fears that AI usage could swamp or skew usage patterns, we introduced separate AI metrics. These new metrics reflect the idea that AI tools use Chunks (100-300 word text strings) rather than full Items (journal articles, for example). However, we’ve kept the distinction between access to metadata only and the higher value that can be derived from full-text content.
Just as Total and Unique Item Investigations track human usage of metadata describing a piece of content, Total and Unique AI Investigations count AI usage of metadata Chunks. And where Total and Unique Item Requests track human usage of full text content, Total and Unique AI Requests count when an AI tool is authorised to use the full text of a piece of content.
We also created a new AI Responses Generated metric to track the number of times an AI tool returns text in response to a user prompt. It’s effectively an AI variant of the existing Searches Platform metric.
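A minimal sketch of how these two extensions might fit together in a reporting pipeline, assuming hypothetical event logs: only the Access_Method values ("Regular", "TDM", "Agent") come from the best practice, while the field names, event structure, and the rule that a full-text AI Request also counts as an AI Investigation are assumptions borrowed from the pattern of the existing human metrics.

```python
from collections import Counter

# Hypothetical usage events. The "access_method" values follow the best
# practice ("Regular", "TDM", "Agent"); every other field name here is
# illustrative, not defined by the COUNTER standard.
events = [
    {"item": "article-1", "access_method": "Regular", "kind": "fulltext"},
    {"item": "article-1", "access_method": "Agent",   "kind": "metadata"},
    {"item": "article-1", "access_method": "Agent",   "kind": "fulltext"},
    {"item": "article-2", "access_method": "Agent",   "kind": "fulltext"},
    {"item": "article-2", "access_method": "TDM",     "kind": "fulltext"},
]

# Step 1: split usage by Access_Method so "Agent" traffic cannot
# inflate the "Regular" human counts (or the "TDM" counts).
by_method = Counter(e["access_method"] for e in events)

# Step 2: roll Agent events up into the new AI metric types. Following
# the pattern of the human metrics, a full-text Request is assumed to
# also count as an Investigation; session-level de-duplication for the
# Unique metrics is omitted for brevity.
ai = [e for e in events if e["access_method"] == "Agent"]
requests = [e for e in ai if e["kind"] == "fulltext"]

metrics = {
    "Total_AI_Investigations": len(ai),
    "Unique_AI_Investigations": len({e["item"] for e in ai}),
    "Total_AI_Requests": len(requests),
    "Unique_AI_Requests": len({e["item"] for e in requests}),
}
print(by_method, metrics)
```

The point of the separation in step 1 is comparability: a library comparing 2026 reports with 2024 reports can still read "Regular" as human engagement, with AI activity reported alongside it rather than mixed into it.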
Looking ahead
As pleased as we are to have published the best practice, the work doesn’t stop here. There are some caveats that you need to keep in mind:
- We don’t know how quickly publishers will be able to implement the new metrics, though it’s unlikely to be much before the end of 2026 at the earliest. If you’re a librarian looking for AI usage metrics to address the data shift, you’re going to have to be patient!
- There’s a high probability that as publishers implement the guidelines, we’re going to discover things that we got wrong. One of the reasons for creating a best practice, rather than going straight for a new release of the Code of Practice, is that best practices can be amended a little more flexibly than the Code itself.
- These initial guidelines primarily apply to AI tools embedded on publisher platforms. If you’re looking for usage metrics from third parties like Google Gemini, we can’t help you just yet.
So yes, work continues! Phase two will focus on third-party and off-site tools, and we’ve had great feedback and engagement from scholarly AI developers like Scite and Consensus, as well as intermediaries like Cashmere. Between the existing AI guidelines and our best practice on syndicated usage, we think we can develop practical reporting mechanisms for these newer players in scholarly communication. Hopefully phase two will move a little more quickly than phase one.
We’d love to hear from people who are working on AI usage. Whether you’re a librarian, a publisher, or a tech bro (or bro-ess), please get in touch.
Tasha Mellins-Cohen is Executive Director at COUNTER Metrics
