Embracing the alternative

David Stuart describes how mainstream altmetrics have become, and the challenges the concept still faces

The web is now awash with data suitable for new alternative metrics, or ‘altmetrics’. In an age when there is an increased emphasis on the need to demonstrate the value and impact of scientific research, changes in scholarly communication have led to data that can provide new insights into the influence of research faster than ever before.

Measuring the dissemination and impact of research is no longer limited to the slow emergence of formal citations in journals and books over a number of years, but can also be traced in near real-time as a far wider audience than ever before engages with publications online. Before a journal article is even formally published, the preprint can be placed in an online repository and start generating measurable interest. The repository may provide figures on views or downloads, the preprint may be discussed on various social network sites, bookmarked in online reference management software, or quickly form the basis of an experiment detailed in an open notebook.

Associated less formal publications may also be generating their own impact trails at the same time: white papers, blogs, microblogs, open notebooks, data, and computer code; all are now part of public scientific discourse, and each is generating a seemingly never-ending set of metrics.

The availability of altmetrics data has led to the development of a range of new tools and services; publishers are promoting altmetrics across the web; and the annual Altmetrics Conference attracts a wide range of international and cross-sectoral interest.

In more concrete terms, for Altmetric.com, one of the leading altmetric services, the growth in interest has translated into over 34 million API calls for its service each day, and it now has over 350 publisher, institutional and corporate subscribers. As Catherine Williams, chief operating officer at Altmetric.com, describes: 'Researchers and readers almost expect it as standard to see altmetrics on any journal articles, and we’re now seeing increasing adoption amongst book publishers looking to provide additional feedback to their authors.'

The Altmetric badge, or ‘donut’, is one of the most noticeable altmetrics features on the scholarly web, with its colourful breakdown of where attention is coming from and its central attention score. Altmetric is not alone in the provision of altmetrics data; there are now a range of other services designed to help researchers and publishers understand and demonstrate the impact of their output. Kudos (www.growkudos.com) combines data from Altmetric.com with publisher data about the clicks, views and shares of an article, and PlumX (https://plumanalytics.com) altmetrics data is incorporated into the Scopus citation index.
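
To make the kind of data these services expose more tangible, here is a minimal Python sketch that queries Altmetric.com's public v1 DOI endpoint for a single article. The endpoint URL, the example DOI and the JSON field names are assumptions based on the publicly documented API rather than anything described in this article.

```python
# Minimal sketch: look up Altmetric.com attention data for one DOI.
# The v1 endpoint and the response field names are assumptions based on
# Altmetric's public documentation, not details taken from this article.
import json
import urllib.error
import urllib.request


def fetch_altmetric(doi):
    """Return the Altmetric record for a DOI, or None if no attention is recorded."""
    url = "https://api.altmetric.com/v1/doi/" + doi
    try:
        with urllib.request.urlopen(url) as response:
            return json.load(response)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # Altmetric has no record for this DOI
            return None
        raise


record = fetch_altmetric("10.1038/nature12373")  # hypothetical example DOI
if record is not None:
    print(record.get("title"))
    print("Altmetric attention score:", record.get("score"))
    print("Tweeters:", record.get("cited_by_tweeters_count"))
else:
    print("No altmetric attention recorded for this DOI.")
```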

But while altmetrics are increasingly prevalent across the web, questions still remain about what they mean, the extent to which they have been embraced by the research community, and how open they are to potential misuse and abuse.

What do altmetrics mean?

Metrics are inherently fascinating: reduce a complex entity to a single number and suddenly we find sufficiently similar entities can be compared, ranked, and judged against one another.

While historians of physics may argue whether Stephen Hawking or Albert Einstein was the greatest scientist, if you accept the h-index as a valid indicator of research impact and Google Scholar as an appropriate citation database then the answer is simple: Einstein has an h-index of 112, compared to 129 for Hawking. Of course, understanding what it means to have an h-index of 112 or 129, or how much better 129 is than 112, is less clear. At the most simplistic level, an h-index of 129 means that Hawking has 129 documents indexed by Google Scholar, each of which has been cited at least 129 times, but these numbers also reflect the different publishing and citation cultures in different times and places, the length of scientists’ careers, and the time that has passed for the accumulation of citations.
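
That definition translates directly into a few lines of code. A minimal sketch, with an invented list of citation counts used purely for illustration:

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h


# Invented citation counts, for illustration only.
print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers each have at least 4 citations
```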

The difference in h-index between the two physicists is likely to have as much to do with the changing publishing environment as with anything intrinsic to the value of their work, and unnormalised comparisons between different disciplines are all but meaningless. After decades of research we are still discussing what a citation means and the factors that may influence the number of citations a document receives, so how much do we really understand about what different altmetrics in different environments mean?

For Jonathan Adams, director of the Institute for Scientific Information at Clarivate Analytics, recognising that we are in the very early stages of understanding what the numbers behind altmetrics mean is essential if they are to be applied properly.

As he explained: 'Prior to the development of the first citation index we had centuries of development of academic conventions around the nature of publication and referencing of prior intellectual property, it's a well-established pervasive global culture now, with a strong commonality between science, social science, and humanities, in terms of those underlying conventions. Altmetrics has none of those things. Not only have we got new data sources, but we have no well-established cultural criteria for the way in which data is created within those sources.'

Adams also raised the problem of weighting the different types of resources. How much is a tweet worth in comparison to a more thoughtful mention in a blog post or a reference on a question-and-answer website?

Weighting was an issue also raised by Andy Tattersall, an information specialist at the University of Sheffield and editor of the book Altmetrics: A practical guide for librarians, researchers and academics (Facet Publishing, 2016). For Tattersall, however, whilst the weighting question is important when the focus is on understanding the big numbers, the small numbers are equally important: 'The most important altmetric number is zero, because zero potentially implies there is no scholarly communication or engagement happening about your research, whereas if it’s 100 or 150 we have to then drill into it to see exactly what that means.'

It has always been necessary in bibliometrics to normalise the results according to recognised disciplinary differences and document type, but as Jonathan Adams pointed out, such normalisation is difficult with altmetrics as existing services evolve and new services emerge: 'It's still an evolving data environment, the analytic maturity is lagging even further behind, and we have no widely agreed conventions on how those data should best be treated.'

Professor Mike Thelwall, head of the Statistical Cybermetrics Research Group at the University of Wolverhampton, not only sees it as necessary to normalise indicators for things such as document, field and year differences, but also to be realistic about what can be shown with any metric: 'High scores do not necessarily point to excellent research and low scores do not prove that studies are poor. At the end of the day altmetrics should guide human judgement rather than replace it since they cannot capture all kinds of impacts.'
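
As a rough illustration of what such normalisation can involve (a simple sketch, not any particular group's method), one option is to divide each article's raw count by the mean count for articles in the same field and publication year; the fields and counts below are invented.

```python
from collections import defaultdict
from statistics import mean

# Invented (field, year, raw altmetric count) records, for illustration only.
records = [
    ("genomics",   2017, 120), ("genomics",   2017, 40), ("genomics",   2017, 20),
    ("philosophy", 2017,   6), ("philosophy", 2017,  2), ("philosophy", 2017,  1),
]

# Mean raw count per (field, year) group.
groups = defaultdict(list)
for field, year, count in records:
    groups[(field, year)].append(count)
baselines = {key: mean(counts) for key, counts in groups.items()}

# A normalised score of 1.0 means "average for its field and year".
for field, year, count in records:
    print(field, year, count, round(count / baselines[(field, year)], 2))
```

On these invented numbers, a philosophy article with six mentions sits as far above its field baseline as a genomics article with 120, which is exactly the kind of difference that raw counts hide.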

Whilst new methodologies and techniques, such as artificial intelligence, may go some way to narrowing the gap between the evolving data environment and analytic maturity, it is unclear whether such rapidly changing and potentially opaque processes would be readily embraced by the academic community.

Do researchers recognise altmetrics?

Whilst altmetrics are undoubtedly increasingly prevalent on the web, it is important that we don’t confuse institutional and individual use. Publishers and marketing departments may be promoting altmetrics, and researchers will undoubtedly be seeing them, but that does not mean that researchers are necessarily accepting them: 'Probably most academics ignore altmetrics at the moment and probably very few take them at face value,' Thelwall advises, while Tattersall emphasises that if they do see them they 'certainly don't know how altmetrics work'.

Tattersall sees this lack of engagement with altmetrics as part of a wider issue of researcher participation: 'I don't think the majority of academics are actively participating in most of the platforms that generate altmetrics. They're not communicating with the media: they're not using Twitter, they're not using Mendeley, they’re just not using the tools at this stage. Change is happening, it's all progressing that way, but it's not happening overnight, it's not a paradigm shift.'

As Tattersall also pointed out, part of the problem is one of time: 'Researchers have to do a tremendous amount of things now, and it's quite a concerning time for them, because the more things you throw in, the less likely they are to produce research, so it's really about getting the balance right.'

The acceptance of altmetrics by researchers, as well as institutions, is essential if they are to be more than a novelty and their potential to aid research management decisions is to be realised. But for this to happen, researchers will have to be sure they are not open to misuse and abuse.

Are altmetrics open to misuse and abuse?

If there is one thing that unites all metrics, it is adherence to Charles Goodhart's law: 'When a measure becomes a target, it ceases to be a good measure'. No metric is immune to abuse and misuse, not even in academia, where the misuse of citation metrics, and the Journal Impact Factor in particular, is a long-standing problem as people reach for things that can easily be counted (citations) as proxies for those that cannot (research performance). If altmetrics are to be applied within institutions, then an increasing number of questions inevitably arise about how open they are to abuse, especially when the traces being measured are so ephemeral and so easy to create. As Adams points out with reference to criticisms levelled at the REF: 'We've got such a critical subject group that's involved here that it makes it quite difficult for other analyses to proceed without going through very stringent testing.'

For those in the field that I spoke to, the current problem is less one of deliberate abuse, and more one of potential misuse: researchers or managers using the metrics in the wrong way rather than deliberately creating false traces online.

As Williams pointed out, the need for education about the responsible use of metrics is increasingly recognised, and we are seeing its promotion within academia with DORA (San Francisco Declaration on Research Assessment), the Leiden Manifesto, and the recent Turning the Tide event in London hosted by the Forum for Responsible Research Metrics. For Williams the key message is that, 'it’s really important to look beyond the numbers, to understand what is driving the attention and why', and that’s a message that is increasingly being shared.

As Thelwall suggests, 'there will always be a risk that altmetrics will be over-interpreted. As with citations, there will probably be a mix of sensible and silly uses indefinitely - but I hope that the balance shifts towards sensible uses.'

Conclusions

In many ways it would seem the prevalence of altmetrics across the scholarly web is a false dawn, belying the circumspect and conservative nature of the academic community. There are undoubtedly uses for altmetrics today. As Williams noted, 'they can help scholars grow their international networks, significantly increase our understanding of how to get a piece of research into the hands of the people it has relevance to, and enable authors and funders to explore a different side of the story'. But it would seem as though the community are also taking the results as Thelwall hopes, with 'a pinch of salt'.

Undoubtedly more research is needed before we start to understand what altmetrics in all their varying forms mean, and before the academic community fully embraces them. As Max Planck is often paraphrased as saying: 'Science advances one funeral at a time', and while this is often seen as being to the detriment of science, such conservatism has much to recommend it when it comes to altmetrics.