Making metrics more relevant

Many questions are being asked about the continued relevance of measuring the impact of research through citations of journal articles. David Stuart examines whether emerging alternative metrics could provide better insight

The web has spawned a host of new publishing formats and genres, and both researchers and publishers have had to adapt to the changing publishing environment. Researchers are not only expected to publish in traditional journals, but are increasingly expected to publish in a wide range of formal and informal outputs online, from humble tweets to massive data sets. At the same time, publishers are attempting to incorporate new web technologies into the traditional model to both enhance conventional journal articles and better engage with the research community.

These changes have undoubtedly created a far richer research environment, but they also raise new questions for researchers, publishers, and information professionals: How do researchers demonstrate the impact of their new publishing activities? How can publishers demonstrate the continued worth of traditional publications? How can information professionals filter the increased deluge of information? Increasingly, the answer to each of these questions is: altmetrics.

The term altmetrics was coined by a group of academics in response to a perceived need for better methods for measuring the impact of scholarship in a rapidly-changing environment. Rather than relying solely on traditional indicators of scholarly impact, they argued that the structured information found on burgeoning social networking sites could provide new indicators of impact.

Peer review is resource-intensive and captures only a limited number of opinions on the value of a piece of work, while citation-based metrics are limited to certain types of publication and can take years to demonstrate a work's impact. In contrast, altmetrics promise a battery of timely indicators from a wide range of online communities. Because many social network sites provide APIs (a simple way for a computer program to extract data from a service automatically) and third-party services gather and repackage these data for potential customers, a more nuanced picture of the impact of a wider range of scholarly work can quickly be discovered with a few clicks of the mouse or a few lines of code.
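
As a minimal sketch of what those few lines of code might look like, the example below queries Altmetric.com's public DOI lookup from Python. The endpoint and field names follow Altmetric's published v1 API but should be treated as assumptions and checked against the current documentation; the DOI used is a placeholder.

```python
import json
import urllib.request

# Minimal sketch: fetch altmetric indicators for one article by DOI.
# Endpoint and field names follow Altmetric.com's public v1 API and
# should be verified against the current docs before use.
doi = "10.1234/example.doi"  # placeholder; substitute a DOI of interest

url = f"https://api.altmetric.com/v1/doi/{doi}"
with urllib.request.urlopen(url) as response:
    data = json.load(response)

print("Altmetric score:", data.get("score"))
print("Twitter accounts citing it:", data.get("cited_by_tweeters_count"))
print("Blog posts citing it:", data.get("cited_by_feeds_count"))
```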

Altmetrics for journal articles

To date much of the attention has been on the potential of altmetrics to provide a more complete picture of the impact of traditional journal articles. Publishers such as IOP Publishing, Nature Publishing Group (NPG) and PLOS are now incorporating article views, as well as altmetrics from a wide range of sources, alongside the articles on their sites. Whilst IOP's recently-launched altmetric offering centres on well-established science-focused social media services such as CiteULike, Mendeley, and Connotea, as well as citation counts from CrossRef, PLOS has also included the less science-focused social networks of Facebook and Twitter, as well as the established citation providers Scopus and Web of Science. NPG's recent announcement in this area means that Nature.com users can view an article's citation data, page views, news mentions, blog posts and social shares, including those on Facebook and Twitter.
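
Citation counts of the kind these publishers aggregate can also be retrieved directly. The hedged sketch below uses CrossRef's public REST API; the 'is-referenced-by-count' field name is taken from CrossRef's public documentation, and the DOI is again a placeholder.

```python
import json
import urllib.request

# Sketch: look up a work's CrossRef citation count via the public
# REST API (api.crossref.org). Verify field names against the
# current CrossRef API documentation.
doi = "10.1234/example.doi"  # placeholder DOI

url = f"https://api.crossref.org/works/{doi}"
with urllib.request.urlopen(url) as response:
    record = json.load(response)["message"]

print("Title:", record["title"][0])
print("Cited by (CrossRef):", record["is-referenced-by-count"])
```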

As Graham McCann, head of product management and innovation at IOP Publishing, stated: ‘authors are increasingly under pressure to prove the impact of their research and by aggregating these data we are helping them get the recognition they need. We are committed to providing researchers with the best publishing and reading experience and look forward to adding more metrics over the coming year.’

In many ways journal article altmetrics may be considered quite conservative. After all, altmetrics offer the potential to provide indicators of a far wider range of scholarly activities. It may not be the research article that is making the most impact, but the blog post where the findings are summarised, or the ideas that are shared through a Twitter account. Alternatively, the paper itself may have garnered little interest, whilst other outcomes from a research project, such as the computer code or datasets, may underpin other people’s work which is of far greater influence.

Some third-party metric providers, however, have moved beyond the journal article and allow researchers to bring together metrics for a wide range of content. For example, as well as allowing journal articles to be added, ImpactStory lets researchers add web page URLs and usernames for Slideshare (a slide-sharing web site), Dryad (a repository for data in the biosciences), and GitHub (a host for software development projects).
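
Metrics for such non-article outputs can often be gathered in the same way. The sketch below pulls indicators for a software project from GitHub's public REST API; the field names come from GitHub's documentation, and the repository queried is GitHub's own demonstration repository rather than a real research project.

```python
import json
import urllib.request

# Sketch: gather impact-style indicators for a software project via
# GitHub's public REST API. Field names follow GitHub's docs; the
# repository here is GitHub's demo repo, used purely for illustration.
owner, repo = "octocat", "Hello-World"

url = f"https://api.github.com/repos/{owner}/{repo}"
with urllib.request.urlopen(url) as response:
    info = json.load(response)

print("Stars:", info["stargazers_count"])
print("Forks:", info["forks_count"])
print("Open issues:", info["open_issues_count"])
```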

The value of altmetrics

Despite the increasing accessibility of an ever-wider range of metrics, the question remains as to what this raft of new metrics means, and the extent to which they are of use as more than a mere curiosity. Although it is over 50 years since Eugene Garfield popularised the application of citation analysis, there continues to be disquiet amongst the research community when too much emphasis is placed on citation impact. Like the commonly-quoted Gresham's law that ‘bad money drives out good’, there is a fear that when both good and bad citations (i.e. those that occur naturally and those created purely to inflate a researcher's impact) are valued the same, the system is vulnerable to abuse. And while there are fears that traditional citations are too easily abused, the potential for abuse of altmetrics seems much greater.

[Image: journal articles on IOPscience (left) and Nature.com (right) are just some of the papers now sporting alternative ways of measuring impact]

Unlike traditional citations, it is possible to inflate altmetrics without having to get peer or editorial approval; there is no one policing mentions on Twitter or bookmarks on CiteULike. While it may seem unlikely that researchers would spend their time artificially inflating the metrics of their own content, with the impact of much of the content being relatively low it is not beyond reason that some may attempt it. Such tasks could easily be outsourced too: services such as Amazon's Mechanical Turk enable dozens of links and social network profiles to be created for little more than the price of a cup of coffee.

Such problems are not insurmountable, however, and, whilst some researchers may try to game the system, new indicators will inevitably be developed that are increasingly hard to manipulate. In the same way that Google improved its search rankings with PageRank, which stopped weighting all links to a web page equally, so too are indicators likely to recognise that not all tweets or mentions on a particular social media site are equal.

This is already recognised with regard to tweets on ImpactStory, where the site uses data derived from Topsy.com to categorise Twitter accounts according to influence. It should also be remembered, before people are too quick to dismiss altmetrics, that whilst commercial organisations have a financial incentive to try to exploit search engine rankings with little chance of punishment, for researchers the rewards are smaller and the penalties greater; after all, reputation is of prime importance.
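
To illustrate the principle, the toy sketch below weights mentions by the influence tier of the account that made them; the tiers and weights are invented for the example, not taken from ImpactStory or Topsy.

```python
# Toy illustration of influence-weighted mentions, in the spirit of
# ImpactStory's Topsy-based tiers. The tiers and weights here are
# invented for the example; real services calibrate them empirically.
TIER_WEIGHTS = {"highly_influential": 3.0, "influential": 1.5, "ordinary": 1.0}

def weighted_mention_score(mentions):
    """Sum tweets weighted by the influence tier of each account."""
    return sum(TIER_WEIGHTS[tier] for tier in mentions)

# Two papers with ten tweets each: the raw counts are equal, but the
# weighted score separates them.
paper_a = ["ordinary"] * 10
paper_b = ["highly_influential"] * 2 + ["influential"] * 3 + ["ordinary"] * 5

print(weighted_mention_score(paper_a))  # 10.0
print(weighted_mention_score(paper_b))  # 15.5
```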

Standardisation

Altmetrics is still very much in the early stages of its development, and much of the focus is currently on capturing the data that is available. However, in the same way that citation-based metrics have evolved, so too will altmetrics. Of particular importance is that altmetrics are developed in the open. This is not only so that weaknesses can be highlighted and improvements suggested, but also so that standards can emerge across sites and services.

Standardisation is particularly important if metrics are to be useful. Knowing that a paper has been bookmarked 20 times, or that a Twitter update has been retweeted 20 times, has little meaning unless there are additional metrics against which this can be compared. As such, if publishers state that a paper has had a certain number of downloads, it is important that they are clear about how such downloads are calculated, especially as there are conflicts of interest between those publishing the indicators and those who want to make use of them.

For example, a researcher who wants to demonstrate the value of their research is likely to want to show that their work has had a significant impact and has been downloaded many times; it is of little interest to them whether the work has been downloaded multiple times by the same person, by different people, or by the researcher themselves. At the same time, those who want to judge the influence of the work may consider the number of unique IP addresses that have downloaded it a better indicator. Publishers, for their part, will want to provide data that accurately reflect the relative impact of different works, but they also need to demonstrate the value of their publications in comparison with those of other publishers.
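
The difference such definitions make is easy to see in code. The sketch below counts the same invented access log twice, once per request and once per distinct IP address.

```python
# Sketch of why download definitions matter: the same access log
# yields very different numbers depending on whether every request,
# or only each distinct IP address, is counted. The log entries are
# invented for illustration.
access_log = [
    ("10.0.0.1", "article-123"),
    ("10.0.0.1", "article-123"),  # same reader reloading the PDF
    ("10.0.0.2", "article-123"),
    ("10.0.0.1", "article-123"),
]

total_downloads = len(access_log)
unique_ip_downloads = len({ip for ip, _ in access_log})

print("Total downloads:", total_downloads)          # 4
print("Unique-IP downloads:", unique_ip_downloads)  # 2
```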

Still a long way to go

There is a long way to go before altmetrics fully answer many of the questions that are being raised by the new research environment.

One question is whether altmetrics enable publishers to demonstrate the value of traditional publications. Undoubtedly, tools are increasingly available that demonstrate the continued value of the traditional research publication as part of an increasingly-complex research environment comprising both informal and formal types of publication. There is still a need, however, for metrics to be developed at the network level, with data collected from an increasing number of sources.

Another question is whether altmetrics have enabled researchers to demonstrate the impact of their new publishing activities. The answer is not yet, or rather, only to a limited extent. ImpactStory may allow researchers to investigate the impact of a richer research profile, but it barely scratches the surface of the resources a researcher may be making use of online. To a certain extent this is a limitation of the way information is currently published online. Research profiles will only truly reflect research practices when these activities are given semantic mark-up in a consistent manner and data can be collected automatically from across the web.
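
As one hedged illustration of what such consistent semantic mark-up might look like, the sketch below builds a JSON-LD description of a paper using schema.org's ScholarlyArticle type; the property choices are illustrative rather than an established standard for research profiles.

```python
import json

# Sketch of machine-readable semantic mark-up for a research output,
# expressed as JSON-LD with schema.org's ScholarlyArticle type. The
# properties chosen are illustrative; vocabularies and best practice
# in this area are still evolving.
record = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "name": "An example paper",  # hypothetical title
    "author": {"@type": "Person", "name": "A. Researcher"},
    "sameAs": "https://doi.org/10.1234/example.doi",  # placeholder DOI
}

print(json.dumps(record, indent=2))
```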

A third question is whether altmetrics have enabled information professionals to filter the increased deluge of information. The use of altmetrics to filter information may be limited so far, but as they become more established and the data more easily accessible it seems likely that a host of services will be built on top of the metrics, helping to push the most pertinent information to researchers.
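
A toy sketch of such a filtering service is given below: it ranks a handful of invented papers by a weighted combination of indicators and surfaces the most 'pertinent' first. The weights are arbitrary, chosen only to show the mechanism.

```python
# Toy sketch of a filtering service built on top of altmetrics:
# rank a stream of papers by a simple weighted combination of
# indicators and surface the top items. Data and weights are
# invented purely for illustration.
papers = [
    {"title": "Paper A", "tweets": 40, "bookmarks": 3, "citations": 1},
    {"title": "Paper B", "tweets": 2, "bookmarks": 25, "citations": 12},
    {"title": "Paper C", "tweets": 5, "bookmarks": 4, "citations": 0},
]

def relevance(paper):
    # Citations weighted most heavily, then bookmarks, then tweets.
    return paper["citations"] * 5 + paper["bookmarks"] * 2 + paper["tweets"]

for paper in sorted(papers, key=relevance, reverse=True):
    print(paper["title"], relevance(paper))
```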

In many ways, however, the fundamental question of what the different sorts of online impact actually mean, and how they relate to one another (for example, what is the value of a mention on Twitter in comparison with a traditional citation?), has still to be answered. What has been recognised is that the modern research landscape comprises much more than journal articles, and there is a lot of value to be created by measuring the connections between the new types of publication.

David Stuart is a research fellow at the Centre for e-Research, King’s College London, as well as an honorary research fellow in the Statistical Cybermetrics Research Group, University of Wolverhampton
