The rise and rise of altmetrics

Alternative metrics have moved the measure of impact from mere citation to mentions on social media, news outlets and more, writes Rebecca Pool. So where next for metrics?

Often described as tomorrow’s filters for today, altmetrics emerged at the turn of this decade to make sense of scholarly impact in a burgeoning research ecosystem.

Several years and many tools later, alternative metrics have broadened the measure of importance from citation to number of tweets, paper downloads, mentions in policy documents and more.

But whether you love or loathe these yardsticks of scholarly prowess, altmetrics are rising in stature. Indeed, Kathy Christian, chief executive of UK-based Altmetric, is certain altmetrics awareness in institutions is increasing.

Right now, Altmetric tracks mentions of scholarly works online, including social media sites, bookmarking services, news outlets, policy documents and others. As Christian asserts: ‘We definitely see an increase in awareness; our industry has had a big push on educating the market, and we continue to do that.’

Tool-wise, research organisations are widely adopting Altmetric badges: colour-coded donuts that give website visitors immediate access to a real-time record of online attention for each piece of research. At the same time, the company’s Explorer platform is proving popular with users wishing to monitor the online activity surrounding academic research and scholarly content on a larger scale.

‘Many librarians are actually adding our badges to their institutional repositories, which provides a view of which research people are engaging with,’ highlights Christian. ‘Librarians use these tools in addition to standard citation reports to add in a bit more context and provide an early indicator of what’s happening in that area of research.’

Altmetric also offers an API, which provides programmatic access to the data on articles and datasets so that users can surface this information in their own applications. Uptake of the API has not been as widespread as that of the off-the-shelf tools, but as Christian says: ‘The API enables users to really integrate our data with their internal systems but, as you can imagine, this relies heavily on someone having a more technical background, which may not always be there.’
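
For readers curious what that programmatic access might look like in practice, the short Python sketch below queries Altmetric’s public v1 details endpoint for a single DOI. It is a minimal illustration rather than a definitive integration: the endpoint path, the example DOI and the response field names shown are assumptions to be checked against the current API documentation, and anything beyond light use of the API requires a key.

```python
# Minimal sketch: fetch the online-attention record for one DOI from
# Altmetric's public v1 details endpoint. The field names used below
# (score, cited_by_tweeters_count, cited_by_msm_count) are illustrative
# of the kind of data returned; check the current API docs for exact
# names, and note that heavier or commercial use requires an API key.
import requests

ALTMETRIC_DOI_ENDPOINT = "https://api.altmetric.com/v1/doi/{doi}"


def fetch_attention(doi):
    """Return the attention record for a DOI, or None if it is not tracked."""
    response = requests.get(ALTMETRIC_DOI_ENDPOINT.format(doi=doi), timeout=10)
    if response.status_code == 404:  # no online attention recorded for this DOI
        return None
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    record = fetch_attention("10.1038/nature12373")  # example DOI only
    if record is None:
        print("No attention data recorded for this DOI.")
    else:
        print(record.get("title"))
        print("Attention score:", record.get("score"))
        print("Tweets:", record.get("cited_by_tweeters_count", 0))
        print("News stories:", record.get("cited_by_msm_count", 0))
```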

James Hardcastle, head of academic business development at Colwiz, Taylor & Francis, sees a similar picture. Taylor & Francis uses traditional and alternative journal and article metrics, including the Impact Factor, CiteScore, h-index and Altmetric attention scores.

According to Hardcastle, the Impact Factor of a journal is still the most requested metric at Taylor & Francis, but for individual articles the publisher provides a mix of downloads, citation counts from Web of Science, CrossRef and Scopus, and Altmetric attention scores. ‘For us, the provision of metrics has always been customer-led,’ he says. ‘We have followed what [the readers] want, and by using downloads, citation counts and attention scores, they can easily assess if an article has had attention and then check if it is of interest to them.’

Recently, the company also acquired University of Oxford spin-out Colwiz to support ‘the discoverability of content’. Colwiz has developed wizdom.ai, a ‘research knowledge graph’ that uses big data, machine learning and artificial intelligence to generate analytics on research developments. The tool can be used to quickly assess research performance and, crucially, can provide article-level data that can then be applied at journal level. ‘Traditional metrics provide you with an indication of the global citedness of a journal but they don’t tell you anything about journal use within your institution,’ he says. ‘Librarians want to know: which journals are my researchers citing, authoring in and also being cited in?’

The head of academic business development reckons tools such as wizdom.ai and Digital Science’s Dimensions plug this gap, and will help to provide institution-related information rather than information on a journal alone.

‘Five to 10 years ago, data just wasn’t available in the way that it is now, so creating these tools that map funding to institutions to publications and to authors would have been impossible,’ he says.

‘Now the data and computing power is available,’ he adds. ‘And given today’s tighter research budgets, these tools are important as they provide intelligence on what research is being used in an institution and what content is valuable to the researchers.’

Use, don’t abuse

But as the rise of myriad metrics continues, many in the industry are wary about how the data can be interpreted. Emmanuel Thiveaud, head of research analytics at Clarivate Analytics, is keen to highlight that metrics should be used wisely.

‘New metrics face the same challenges as established metrics; no one view can capture every part of impact and no one answer suits every question,’ he asserts. ‘[Users should] use a variety of indicators, carefully chosen to best answer specific questions being asked.’

Thiveaud also highlights that the value of metrics begins with the quality of the data on which they are based, which should be accurate and ‘necessary’ to the user. He also believes that, as the use of altmetrics continues to rise and these metrics are used alongside traditional metrics, a stable and inclusive set of data sources must be established.

‘Shares through established scholarly network sites are often an early step towards later citation, so also hold interest as an indicator of scholarly use,’ he says. ‘However, the most mature altmetrics now are the ones whose meaning is most clearly associated with an outcome, such as absorption of a work into public policy documents or promotion in mainstream news outlets.’

Like Thiveaud, Andy Tattersall, information specialist at the Information Resources Group, University of Sheffield, is very aware of the importance of quality data sources for altmetrics. As he puts it: ‘Altmetrics are only as good as the data and unfortunately right now, this can be patchy.’

According to Tattersall, a lot of content does not yet have altmetric data because it is either too niche or too dated to be communicated now, or it simply doesn’t have a trackable code when communicated.

‘It is still very early days for altmetrics and the providers’ platforms are still stabilising,’ he says. ‘However, this will change as the companies supplying the altmetrics are very, very keen to harvest as much data as possible.’

Importantly, Tattersall advises against taking altmetrics at face value right now, urging users to be critical of the data. ‘A piece of research might have an altmetrics score of 500 and have been cited by, say, 20 policy documents,’ he says. ‘But if you drill into the data, you might find it’s actually used in the executive summary of a policy document and has been released in different language versions of the same document.’

Likewise, for Hardcastle, issues also exist around how altmetrics are understood. He believes that altmetric users need reminding that this flavour of metric does not equal social media. ‘Twitter as an altmetric is the largest single source of data because there are an awful lot of tweets out there, but the information of interest tends to lie within news coverage and policy documents,’ he asserts.

And like many, he is also keen to point out that the Altmetric attention score doesn’t provide any indication of whether the attention is positive or negative, nor whether it reflects quality research.

‘The move to rebrand the Altmetric score as the Altmetric attention score is positive,’ he adds. ‘These metrics in general signify that an article might be worth looking at in more depth, but they are not an indication of impact nor an indication of quality.’

So how to tackle the misuse problem? Clearly, communication is key and, for Tattersall, this needs to start at home. ‘Communication has to work at a grass-roots level and come from the central libraries,’ he asserts. ‘Researchers no longer work in ivory towers and their research is going to be communicated intensively; altmetrics allow them to be part of that conversation.’

Altmetric’s Christian concurs. As she points out, many libraries already have educational programmes for their researchers, explaining the science behind attention scores and how to use altmetrics tools.

‘Yes, you can have a very high-scoring paper with lots of negative attention, and people do love a car crash, but we have always said that you need to take a closer look at individual pieces of information,’ she says. ‘We’ve spent a lot of time ensuring we’re collecting different research outputs and finding different attention sources. But people do need more education and we are really helping with that.’

Pleasingly, even more help is already at hand. Late last year, the National Information Standards Organisation (NISO) published recommended practice on altmetrics following its three-year Altmetrics Initiative to standardise this field. Its report provides guidance on using altmetrics as well as information on how altmetrics suppliers generate data.

Altmetric’s then chief executive, Euan Adie, described the results as ‘sensible and useful’, while Digital Science’s head of metrics development, Mike Taylor, highlighted how the project drew industry experts together for structured conversations on the issues.

Christian believes that the outputs have been very practical. ‘People have seen how the Impact Factor, h-index and citation counts have at times been used improperly, so this is the community really trying to get ahead of that,’ she says. ‘We want to make sure that altmetrics continue to be a useful tool and are not used in the wrong ways.’

Hardcastle also believes that initiatives such as NISO’s are important to standardise definitions and terminology within the altmetrics space, enabling users to compare data between different altmetrics providers.

But although Thiveaud is also certain that this early move by NISO to frame altmetrics standards is important, he cautions: ‘Because the field is so new and the platforms are in a rapid state of change, it seems we will need a constant re-examination of the standards.’

Brave new altmetrics

Still, as the field of metrics rapidly evolves, altmetrics providers are continually assessing the value of their attention sources. For example, Plum Analytics – now Elsevier-owned – tracks mentions, downloads and clicks on myriad research outputs, including pre-prints and raw datasets, while Impactstory delves into countless Wikipedia pages, blog posts and reviews on Faculty of 1000.

At the same time, Altmetric’s Christian is certain that policy documents will continue to provide valuable information as will clinical data resources. What’s more, she and colleagues are also looking at patents as an additional altmetrics source. ‘It’s all about working out what is important to users from different disciplines, finding the appropriate indicator of potential impact and seeing if we can then track that,’ she says.

According to the chief executive, Altmetric is keen to provide more tools for researchers from arts and humanities disciplines, so has been developing its ability to track more books and to capture more mentions of individual titles and chapters. As early as 2014, the company developed the Bookmetrix platform, providing metrics data for Springer Nature books. And now Taylor & Francis, Michigan Publishing and MIT Press, as well as Springer Nature, display Altmetric badges on book pages, illustrating online attention.

Altmetric also recently started tracking references and links to academic book records on Amazon.com. As Christian puts it: ‘We’re now trying to access and incorporate book reviews into our altmetrics; we really want a tool that provides the arts, humanities and social sciences with more data on their research.’

At the same time, in clinical sectors, Christian and colleagues have also noted that conference proceedings are rising in importance, especially in the pharmaceutical industry where conferences can provide a first glimpse of new research before publication. ‘Different research disciplines have different outcomes, so by identifying the metrics that represent this, we can help researchers to demonstrate the reach and influence of their work,’ she says.

As other data sources for altmetrics surface, it is also clear that, following the initial wave of excitement, altmetrics now sit alongside traditional metrics and are not destined to compete with them.

Christian is quick to describe altmetrics as ‘a definite complement’ to traditional metrics, adding: ‘We could even drop this traditional metrics versus altmetrics idea and simply use metrics and data.’

Meanwhile Hardcastle asserts: ‘Research and its outputs are increasingly diverse... and we’re going to see a demand for both traditional metrics and altmetrics in the foreseeable future.’

But for information specialist, Tattersall, the real excitement may be yet to come. ‘Tweeting research has been criticised but if researchers are tweeting about a dataset that nobody had realised existed then that has to have benefits,’ he says. ‘Peer review remains the gold standard for assessing the quality of research, but altmetrics really allows people to see the conversations that are going on and over time researchers will pay more attention to it.’

‘Some may see it as a Pandora’s Box, but it’s one that isn’t going away,’ he adds.