A place for citation data


Any discussion about measuring the impact of scholarly research, especially when it takes place on Twitter, quickly becomes heated. A picture is often painted of Thomson Reuters’ Journal Impact Factor (JIF) on one side, measuring the impact of journals based on citations, as it has done for many years. On the other side is a collection of trendy alternative metrics that look at individual articles and consider things like social media engagement and coverage in the mainstream press.
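
For context, the JIF is conventionally calculated over a two-year window. In its standard published form (given here as general background rather than as a formula quoted by either interviewee), the JIF of a journal for year $Y$ is:

\[
\mathrm{JIF}_{Y} = \frac{\text{citations received in year } Y \text{ to items the journal published in } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in } Y-1 \text{ and } Y-2}
\]

A JIF of 3 for a given year therefore means that items the journal published in the previous two years were cited, on average, three times each during that year.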

Presentations and coffee-break discussions at the ALPSP conference, held near London in September, highlighted some of the causes of tension. Some open-access publishers have told Research Information of their frustrations in getting JIFs for new open-access titles, and of a sense that new subscription titles from long-established publishers face an easier process.

Meanwhile, in research assessment and funding decisions, the JIF is often used as a target for researchers. This is particularly true in China and other parts of East Asia, as Joyce Li, associate publisher at BioMed Central, described in discussions following her ALPSP presentation on open access in China.

But what about the thoughts of those who produce the metrics? Discussions with Patricia Brennan, VP of research analytics at Thomson Reuters, and Euan Adie, founder of Altmetric, suggest that the picture is less polarised.

As Adie pointed out: ‘Altmetrics is a terrible name. It implies a replacement.’ In his view it is not a choice between looking at citations and looking at social media and news mentions. ‘Citations are always going to be a good indicator of scientific activity. They are a good dataset. In the same way, the JIF is a good indicator too. You just can’t use it exclusively,’ he said. ‘I don’t have a problem with the IF as a broad indicator of the quality of a journal. The problem comes when it is used as a target.’

And Thomson Reuters is not just about the Journal Impact Factor, according to Brennan.
‘Impact factors, particularly the JIF, which has been around for a very long time, are widely used and appropriate for measuring journals,’ she said, before adding: ‘What we’re seeing is that there’s a need for an array of metrics and ways to measure the array of activity. When you are measuring an article it’s appropriate to look at article-level metrics. When you are looking at people then it’s appropriate to look at those people in the context of their fields.’

Article-level metrics are something that Thomson Reuters is already providing, she said. ‘The Journal Citation Reports (JCR) is known for the IF but in fact there’s a whole range of metrics,’ she explained. ‘Metrics and measures are no longer “one size fits all”.’

Recently, Thomson Reuters added detailed information from its latest JCR into InCites, the company’s platform for analysing scholarly research fields and areas of expertise. ‘The InCites platform is really a way for anyone in scholarly publishing to go beyond journal-level information and look at article-level indicators.’

She noted that some of the use cases of more granular metrics are in benchmarking institutions, countries or regions: ‘A question might be asked like which of the G20 countries produced the most research in clinical medicine in the last five years or which Latin American country collaborated most with China last year. The questions are endless.’

She added that users can link from InCites to tables of contents and, if the articles are in open-access journals, they can also link to the articles themselves. In addition, there are links to news mentions from the open web. ‘We are making this very rich array of data much more transparent and usable for an institutional view,’ she said.

Getting the user view

Of course, the companies that calculate metrics are only part of the story. How they are used and what happens next is up to the researchers, funders, university administrators and others.

Euan Adie noted that awareness of altmetrics among users has grown, probably because these metrics now appear on more articles and the tools behind them have matured.

With these issues in mind, Altmetric is running a conference on altmetrics in late September, just as this issue of Research Information goes to press. Part of the aim, according to Adie, is to find out what people are doing with altmetrics and where they see limitations. ‘We need people to say “this is what we do” or “this won’t work because you’ve forgotten x”. It’s about exposing people to what’s possible.

‘There is far more research into altmetrics now; all the big bibliometrics research centres have groups studying this,’ he observed. ‘The first papers tended to link to citations and look for correlation. There is a weak correlation but I don’t think it’s a real one. Recently, research has shifted more into case studies following researchers, asking questions like “Why do you do this?” or “Why tweet that?”’

He noted that some standards are emerging for altmetrics, especially in the USA where NISO is driving this. He also referred to the work by a group of UK universities to agree standards on benchmarking so that everybody’s numbers add up when they come to research assessment; this year, said Adie, was the first year that this ‘recipe book’ included standards for reporting altmetrics.

‘One of our concerns as a community is that standards have been driven by us, not by users,’ he commented.

Such research and standards are important because any measure has strengths and weaknesses. One of the big concerns with traditional impact factors – and with altmetrics – is the possibility of ‘gaming’ them: artificially inflating the numbers for your own journal or paper through excessive citation of, or linking to, your own work.

Brennan observed that, in itself, self-citation is not a negative. ‘It’s natural and good that one would cite one’s prior work. The whole research process is about building on prior work. Where it becomes problematic is where it becomes inordinate self-citation.’

She continued: ‘Unfortunately in any system there will be some tendencies to not play fairly in that system. At Thomson Reuters, we take the approach of education and use the data to show where we see this happening.’ However, the risks may not be as severe as sometimes anticipated: ‘When we started InCites we’d publish metrics adjusted for self-citation. We didn’t see much difference,’ she explained.
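
As a purely hypothetical illustration of what such an adjustment involves (the numbers below are invented for illustration, not Thomson Reuters data), consider a journal with 200 citable items and 500 citations in the JIF window, 40 of them self-citations:

\[
\mathrm{JIF} = \frac{500}{200} = 2.5, \qquad \mathrm{JIF}_{\text{excluding self-citations}} = \frac{500 - 40}{200} = 2.3
\]

A gap of that modest size is consistent with Brennan’s observation that excluding self-citations shifts the figure without making much difference to the overall picture.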

The complex picture of metrics and rising engagement ties in with another danger: the rise of bogus metrics, an issue that US librarian and open-access watcher Jeffrey Beall has lately begun to highlight in his Scholarly Open Access blog. He points to dubious journals that quote numbers designed to look like Thomson Reuters’ JIF but are not.

‘It’s troubling, of course, especially for participants who are victims of rogue journals,’ commented Brennan. ‘It leads back to the importance of education and trying to list journals. We publish the list of journals we include on our website for everyone to see. The trend is more to do with the explosion in growth of science output.’

Understanding the meaning and appropriateness of the figures is a challenge for any metric. With altmetrics there is an extra challenge because of the range of things measured and the changing nature of engagement with research publications.

‘We do have this single score in Altmetric. The reason we do it is because people want it and it is interesting but the downside is the same as with IFs. We try to put scores in context, based on time etc. We need to expose more of that data,’ said Adie.  

‘We know that some alternative metrics are more scholarly, some are more policy, some more public and that these are very different types of impact. The accepted wisdom – much of which came out of work in Elsevier Research Labs – is now to have different buckets. At Altmetric, we use public engagement, researcher use, policy and practice but you can break it down further.’

Another challenge is that altmetrics is a young field and that the things it measures are often also young. The growth of Twitter, for example, has been so recent and so big that the year of publication can make a massive difference to the level of Twitter engagement with a paper.

‘Maybe articles a few years ago had a lower altmetric score just because Twitter wasn’t around then,’ Adie noted. ‘There are a lot of challenges. We have to really understand the data.’

Ultimately, though, Brennan still sees assessment based on citations as the best bet today.

‘As scholarly publishing evolves, it doesn’t happen by Twitter or blogs. It is important to understand what happens outside formal communication channels but in terms of understanding what impact these have I think citation counts are the most effective way to assess to date.

‘We have large stores of well-curated publication data and the richness of the data is fantastic. You can look at collaboration and look into the dataset and really understand what’s happening across the globe, across time.’

But that may not be the situation forever. As Adie observed, ‘several big portfolios like Wiley and Springer have some type of altmetrics.’ He also noted the recent agreements between Altmetric and platform providers Silverchair and HighWire. ‘We like that kind of thing; it shows that altmetrics is becoming part of the infrastructure,’ he explained. ‘It’s moving from something you do because it is novel to something you do because it’s expected.’
