Have we reached the limits of altmetrics?

AI increases both the risk of deliberate metric manipulation and the overall noise in the scholarly publishing system, writes David Stuart

It has been fascinating to watch the growth of altmetrics and other web metrics over the past 20 years. They have gone from being an idle curiosity to being embedded in the web pages of some of the biggest universities and scholarly publishers around the world.

Technological developments, standardisation, and a growing interest in alternative metrics mean that information about the number of views or social media mentions a publication has received is often boldly heralded alongside other, more established metrics.

Past success is not an indicator of future growth, however, and the web is constantly changing. The biggest disruptor of the web today is undoubtedly artificial intelligence (AI), and while the full extent of its future impact is currently unknown, one question it does raise is whether we have reached the limits of altmetrics.

The allure of metrics

In a rapidly changing world, metrics offer an appeal to objectivity. Whether they are being used for evaluation, motivation, or celebration, they offer the promise of something more solid than a mere individual’s opinion. For a long time the dominant metrics in scholarly publishing have been based on citations, and while the limitations of citation analysis are widely recognised, it is nonetheless often felt that by aggregating and normalising citations in different ways they can indicate something of value.

The shift to online publishing, however, vastly increased the metrics that were available. The rich variety of real-time data on a publication’s use, and the rise of altmetrics from social media, promised to provide a faster and more nuanced understanding of the impact that research was having, beyond the academic realm and beyond formal publications.

Informal publications are, however, more open to manipulation than formal publications, and it seems that just as altmetrics are beginning to gain wider acceptance, their legitimacy could be undermined. Up to now the problems of manipulation have been minimal, but with AI the risk is that such informal metrics could quickly become meaningless.

The challenge of AI

AI increases both the risk of deliberate metric manipulation and the overall noise in the scholarly publishing system.

All metrics are open to manipulation, and there is an inevitability that individuals and organisations will be nudged to act in ways that create the most favourable impression. As Goodhart’s law puts it: “When a measure becomes a target, it ceases to be a good measure.” While the negative consequences of metrics in scholarly publishing may not be as immediately apparent as when a doctor only wants to take patients with favourable outcomes, or the police only record crimes that are easily solved, an overemphasis on metrics nonetheless risks undermining the system. The problem of paper mills churning out research articles that are later retracted has also been growing, and it is only likely to become greater as improvements to generative AI make such papers increasingly difficult to identify. Within scholarly publishing, however, there are still many human checks, few of which apply to altmetrics.

The history of web metrics has been one of increasing ease, both in how metrics can be gathered and in how they can be manipulated. When I first started counting objects on the web, almost 20 years ago, the unit of choice was the hyperlink. While anyone could have created multiple web sites with different domains to increase the impact of their online presence, the cost in time and money, and the limited interest in web metrics within the academic community, meant that most people’s time was better spent creating better scholarly publications. The rise of the big, standardised social media sites, however, both generated greater interest in web metrics and lowered the barrier to creating an online impact. It was no longer necessary to create multiple web sites, just different profiles on multiple platforms. The cost had, for the most part, been reduced to zero; all that was required was time. With AI, however, the time involved may also be reduced to zero, at least after the initial set-up.

It’s not hard to imagine a point in the near future when, if you want a hundred or even a thousand microblogging accounts to wax lyrical about the quality of your research, you can simply ask a generative AI program to create them on your behalf. It won’t be necessary to carefully curate each account’s image to distinguish it from a spambot; the content will be generated automatically. You may be slightly annoyed to find the artificial insights gain more followers than your carefully curated posts, but the rapidly increasing attention score will undoubtedly ease the pain.

Not all the noise will be deliberate, either. As content is increasingly created automatically, the idea that pieces of content can be counted as though each was an individual act by a human will become increasingly flawed. Already, an increasing proportion of the web comes with a warning that ‘this page has been created automatically with the help of AI’, and such sections will inevitably grow faster than the human-created portions. As content creation is increasingly based on content that has already been created, the advantage of being the first article on a subject to be mentioned will become increasingly difficult to overcome.

With an increasingly skewed distribution of attention, it also becomes ever more important to ensure that mentions are associated with the correct scholarly document. The misattribution of scholarly mentions, and the resulting fluctuations in metrics, are only likely to increase as more data is generated automatically.
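To make the attribution problem concrete, here is a minimal Python sketch of how an aggregator might try to link a mention to a publication: match on an explicit identifier such as a DOI where one is present, and fall back to fuzzy title matching where it is not. The function names, the regular expression, the toy publication index, and the similarity threshold are illustrative assumptions rather than a description of any particular provider’s pipeline; it is the fuzzy fallback step where misattribution typically creeps in.

```python
import re
from difflib import SequenceMatcher

# Loose DOI pattern (illustrative, not exhaustive).
DOI_PATTERN = re.compile(r"10\.\d{4,9}/[^\s\"<>]+", re.IGNORECASE)

# Hypothetical local index of known publications: DOI -> title.
PUBLICATIONS = {
    "10.1000/example.123": "Web metrics for library and information professionals",
    "10.1000/example.456": "Citation analysis and research evaluation",
}

def attribute_mention(mention_text: str, threshold: float = 0.8):
    """Return the DOI a mention most plausibly refers to, or None."""
    # 1. Prefer an explicit identifier embedded in the mention.
    match = DOI_PATTERN.search(mention_text)
    if match and match.group(0).rstrip(".,") in PUBLICATIONS:
        return match.group(0).rstrip(".,")

    # 2. Fall back to fuzzy title matching -- the step where
    #    misattribution is most likely to occur.
    best_doi, best_score = None, 0.0
    for doi, title in PUBLICATIONS.items():
        score = SequenceMatcher(None, mention_text.lower(), title.lower()).ratio()
        if score > best_score:
            best_doi, best_score = doi, score
    return best_doi if best_score >= threshold else None

print(attribute_mention("Great read: 10.1000/example.123 changed how I think about metrics"))
print(attribute_mention("Web metrics for library and information professionals is out now!"))
```

Real aggregators draw on many more signals (URLs, publisher landing pages, other identifiers), but the underlying tension between precise identifiers and noisy free text remains, and automatically generated content only adds to the noise.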

The end of altmetrics?

At first glance it would seem as though the rapidly increasing use of generative AI will inevitably herald the end of altmetrics, but that’s only if everything else remains the same. Undoubtedly it won’t.

Social media sites and services are also likely to change in many ways. It may be that free social network sites are eschewed in favour of subscription or distributed services, where there are greater restrictions on the generation of content, enabling them to form the basis of new and more robust metrics. Interest in scholarly impact on a generic service such as X may be replaced by interest in verified accounts alone, or in selected servers that form part of a distributed network.

The disruption also seems likely to broaden interest in web metrics beyond altmetrics. There is a wide range of insights that can be gathered from the web, or corners of it, from analysing what people’s search activities on Google Trends tell us about the state of society, to how linking between web sites provides insights into real-world relationships and the robustness of a local economy. Too often, however, these alternatives have been marginalised by a focus on large-scale evaluative metrics.
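As a small illustration of the kind of link analysis mentioned above, the sketch below counts which external domains a given web page links to, using only Python’s standard library. The example URL and the idea of treating outbound link counts as a rough proxy for relationships between sites are illustrative assumptions, not a prescribed method; a serious study would crawl many pages and respect robots.txt.

```python
from collections import Counter
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect the href value of every anchor tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def external_link_counts(page_url: str) -> Counter:
    """Count outbound links by target domain -- a crude relationship signal."""
    with urlopen(page_url) as response:
        html = response.read().decode("utf-8", errors="replace")

    parser = LinkExtractor()
    parser.feed(html)

    source_domain = urlparse(page_url).netloc
    counts = Counter()
    for href in parser.links:
        domain = urlparse(urljoin(page_url, href)).netloc
        if domain and domain != source_domain:
            counts[domain] += 1
    return counts

# Example (hypothetical URL): which sites does this page point to, and how often?
if __name__ == "__main__":
    for domain, count in external_link_counts("https://example.org/").most_common(10):
        print(domain, count)
```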

Conclusion

Web metrics have always had to adapt as new technologies have emerged, and while AI may well bring one era of altmetrics to an end, new areas of investigation will undoubtedly emerge. The inclusion of a host of grey literature and patent citations that would previously have been excluded from bibliometrics is likely to continue, but the idea that altmetrics can meaningfully capture informal content at scale beyond that may become increasingly dubious.

Web metrics have always had softer foundations than bibliometrics, more likely to elicit insights that are interesting rather than authoritative, and it may not be a bad thing if that is as far as they go. Evaluative metrics often have a negative impact, especially when they are given too much credibility, so reining in altmetrics just as they are gaining interest may be no bad thing.

• David Stuart is author of the recently published Web Metrics for Library and Information Professionals (2nd Edition), which shows that there is much more to web metrics than altmetrics. https://www.facetpublishing.co.uk/page/detail/web-metrics-for-library-and-information-professionals/?k=9781783305667