Rising metrics need to go deeper

Interest in new metrics to demonstrate the impact of research has grown rapidly over recent years, and shows no sign of slowing down, writes David Stuart

For a long time, measures of research impact have revolved around the citations that published works receive, a practice that has often drawn criticism for its bluntness and misuse (especially when measures such as the Journal Impact Factor have been used to evaluate individual researchers).

However, the publication of a wider range of research outputs on the web, and the growth of social networking sites, have created the possibility of alternative or complementary metrics based on how those resources are used and referenced online, and a wide range of new tools is being developed to capture them.

These tools have gained widespread interest in many sectors, but in conversation with Euan Adie (founder of Altmetric.com) and Mike Taylor (senior project manager for informetrics in Elsevier Research Metrics) the message is clear: there is still much work to be done in digging deeper into these metrics.

The rapid growth of social media metrics

There are two complementary areas of growth in publication metrics: the creation of standards for the usage of online resources, and the capturing of the referencing and conversations around those resources. Standards and definitions for online usage, developed through initiatives such as COUNTER (Counting Online Usage of NeTworked Electronic Resources, www.projectcounter.org), play an essential role in ensuring that article-level metrics can be compared across resources from different publishers. It has been the tapping into the conversations around research on social network sites such as Twitter, however, that has attracted the most interest, and this has been the focus of a number of altmetrics tools developed following the publication of the altmetrics manifesto in 2010 (http://altmetrics.org).

There are many examples. Plum Analytics (www.plumanalytics.com) captures a wide variety of metrics about a wide range of research outputs, from datasets and videos to journal articles and books. The metrics are categorised as usage (e.g. views, downloads), captures (bookmarks, favourites), mentions (reviews), social media (tweets), and citations (Scopus citation count).
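
To make these categories concrete, the sketch below groups raw event counts for a single fictional article into the five Plum-style categories listed above. The event names, the mapping, and the numbers are invented for illustration; they are not Plum Analytics' actual schema or API.

```python
from collections import defaultdict

# Hypothetical mapping from raw events to the five Plum-style categories
# described above; the event names are invented, not Plum Analytics' schema.
CATEGORY_OF_EVENT = {
    "view": "usage", "download": "usage",
    "bookmark": "captures", "favourite": "captures",
    "review": "mentions",
    "tweet": "social media",
    "scopus_citation": "citations",
}

def summarise(events):
    """Roll raw event counts up into category totals."""
    totals = defaultdict(int)
    for event, count in events.items():
        totals[CATEGORY_OF_EVENT[event]] += count
    return dict(totals)

# One fictional article's events
print(summarise({"view": 1200, "download": 310, "tweet": 45, "scopus_citation": 7}))
# {'usage': 1510, 'social media': 45, 'citations': 7}
```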

Altmetric (www.altmetric.com) also collects data on the impact of journal articles, books, and datasets, while ImpactStory (https://impactstory.org) enables users to create profiles based on their research outputs and shows the impact of those outputs.

The products have gained a lot of interest from large publishing and information companies: Plum Analytics was bought by EBSCO in January 2014 and is now a wholly owned subsidiary of EBSCO Information Services, and many publishers have begun displaying altmetric data on their sites. There is still much to be done, however, in turning the enthusiasm for these metrics into robust measures.

As Taylor sees it, alternative metrics are very pragmatic: ‘There are a whole bunch of things in there that are not the same things at all; we happen to package them together because they came out of things that happened at the same time (APIs, web 2.0, the open source movement), and we need to make some inroads into supporting people to do research in this area because at the moment we are just counting the things that are easy to count.’

Deepening our understanding

Both Adie and Taylor note that the big challenge within publication metrics at the moment revolves around deepening our understanding of the metrics.

Adie points to the fact that people like to have a single number: ‘Even though we all understand that it can be a very simple ranking, there is something very appealing to it. We have to balance ease of use and letting people understand the most important parts of metrics, then make sure they can go further to the underlying data and help them understand what it really means.’

As Taylor notes: ‘Ten or 20 years ago there was a real appetite to dig deeply into what citation rates were looking at; there doesn’t seem to have been this appetite to drive deeper into the new metrics. It’s almost as if there’s too much data for researchers to get to grips with, and they’re forever looking at new different data, rather than looking more deeply into the data. We need to be working harder to mine data, and learning more from computer science.’

He continues: ‘From my point of view, I think there are perhaps three different flavours, if you like, of particular metric that you should consider when you are developing metrics: the raw numbers, the nearest-neighbours’ benchmarking, and the field weighting. We want to know what the person the other side of the fence is doing and how we are comparing with them, but we also want to be able to position ourselves in global terms.’
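
As a rough illustration of those three flavours, the hypothetical Python sketch below presents the same citation count in three ways: as a raw number, as a percentile against a set of nearest-neighbour papers (for example, papers from the same journal and year), and as a field-weighted ratio against an expected citation rate. The neighbour counts and field baseline are invented purely for illustration.

```python
from statistics import mean

def raw(citations: int) -> int:
    """Flavour 1: the raw count, with no context."""
    return citations

def neighbour_percentile(citations: int, neighbour_counts: list[int]) -> float:
    """Flavour 2: how the paper ranks against its nearest neighbours
    (e.g. papers from the same journal and publication year)."""
    below = sum(1 for c in neighbour_counts if c < citations)
    return 100 * below / len(neighbour_counts)

def field_weighted(citations: int, field_average: float) -> float:
    """Flavour 3: citations relative to the field's expected citation rate
    (values above 1.0 mean more cited than the field average)."""
    return citations / field_average

# Invented numbers, purely for illustration
paper_citations = 18
neighbours = [2, 5, 7, 11, 12, 20, 34]   # hypothetical peer papers
field_avg = mean(neighbours)             # stand-in for a real field baseline

print(raw(paper_citations))                                          # 18
print(round(neighbour_percentile(paper_citations, neighbours), 1))   # 71.4
print(round(field_weighted(paper_citations, field_avg), 2))          # 1.38
```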

The future of metrics

Things seem to be changing, however, and the future promises more robust metrics in a wider range of areas.

For Adie there are three primary areas of growth within publication metrics: ‘One is deepening our understanding of the indicators we already collect – typing citations, for example. It might not be automated so we have very specific typing, but we can automate to the extent we can tell whether it’s a positive or a negative.

‘The second is around different contributions. It’s very easy to assume that everything has to link to a publication, but there are all sorts of contributions that are connected to research, but are not connected to a research output; it’s going to become more important to be able to prove you are participating in the scholarly system in some way.

‘The third thing is around different outputs; what can be measured and what can’t be measured about these newly accepted outputs? Software and datasets are the two big ones, and books are the other place where you’d think we’d all have it sorted out by now, but actually there’s a scarcity of metrics around books.’
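
As a toy illustration of the citation ‘typing’ Adie mentions, the sketch below labels the sentence surrounding a citation as positive, negative, or neutral using a handful of cue words. Real systems would rely on trained text-classification models; the cue lists here are invented purely for illustration and describe no existing product.

```python
# Toy sketch of citation typing: label a citation context as positive,
# negative, or neutral from simple cue words. Purely illustrative.
POSITIVE_CUES = {"confirms", "supports", "extends", "builds on", "consistent with"}
NEGATIVE_CUES = {"contradicts", "fails to", "refutes", "overestimates", "flawed"}

def type_citation(context: str) -> str:
    text = context.lower()
    pos = sum(cue in text for cue in POSITIVE_CUES)
    neg = sum(cue in text for cue in NEGATIVE_CUES)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(type_citation("Our results are consistent with Smith et al. [12]."))  # positive
print(type_citation("This estimate contradicts the model of Jones [3]."))   # negative
```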

Taylor also considers data to be one of the most important areas of metrics over the coming months, although it is not without its own set of difficulties: ‘One of the first problems is we have this conversation about ‘data’, when there are probably four different kinds of data (streaming data, data articles, ancillary data, and uncurated data), and these different types of data will each have very different uses and very different metrics. They can have very different patterns of citation, and so forth. The problem is that by having a conversation about data in a very monolithic way, we run foul of the problem that the definitions are too loose to cover all the bases and they’re not tight enough to be useful.’

Conclusion

The year 2015 marks 60 years since Eugene Garfield first referred to the bibliometric ‘impact factor’ in print, and bibliometrics is obviously in robust health even if some of the metrics aren’t always as robust. Part of the interest in recent years has undoubtedly been associated with the fashionable nature of the social network sites on which conversations are occurring, but as both Adie and Taylor pointed out, we are moving away from things that are easy to count and towards things that are more difficult.

For Adie it is about answering questions: ‘Is your work affecting policy? Has your work reached a particular audience? These are all things where we have raw data and we can again infer reasonable things from that data, but it’s a bit more complicated than counting up all the links to a paper in different places.’

For Taylor it’s important to reflect on the data that is being collected, and make sure it’s ‘simple and easy to use so people can understand their personal positions, and that third parties can make valid comparisons without the current nonsense that people sometimes come out with when they equate the number of tweets somebody gets with the amount of social impact they get.’

The recent publication of the Leiden Manifesto in Nature, and Elsevier’s own earlier metrics manifesto (http://bit.ly/hefceresponse), hopefully demonstrate a growing recognition of some of the issues around the use and abuse of publication metrics.

These manifestos emphasise the importance of openness, of understanding the differences between fields, of accounting for systemic effects, and of combining the qualitative with the quantitative when considering metrics.

Hopefully, especially with the publication of the Leiden Manifesto, this message is reaching beyond the usual bibliometric audience. In the first week alone it achieved an Altmetric score of 696, was mentioned by 1,000 tweeters and appeared on 36 Facebook pages. But the take-away message from this article is that we have to dig much further into those numbers if we are to understand what they really mean.

About the author

David Stuart is a research fellow at the Centre for e-Research, King’s College London