
Guidelines needed to prevent impact-factor abuse


Citations play a big part in assessing a journal's quality but what happens when many of those citations come from papers authored by that journal's editorial board? Paul Peters considers the need to establish guidelines for appropriate citation practices

Although numerous style guides provide clear instructions for formatting the references in a scholarly article, there are no equivalent guidelines for determining what should be included in the list of references in the first place. Given how important citation metrics have become in determining the success of academic researchers and scholarly journals, it is concerning to realise how little has been done to enforce, or even to define, appropriate citation practices.

For citation metrics, whether journal-level, author-level, or article-level, to be useful as an indicator of scholarly impact or quality, there must be a common understanding of what it means for one article to cite another. As simple as this may sound, there are in fact many reasons why one article may cite another, and some of them become rather problematic when one considers their effect on citation-based metrics.

Earlier this year we became aware of an unfortunate case involving a journal that we had recently acquired, TheScientificWorldJOURNAL, in which two articles had been published with an excessive number of citations to another journal, Cell Transplantation. After becoming aware of these two articles we conducted an investigation and found that their authors, as well as the Editorial Board Member who had recommended their publication, were editors of Cell Transplantation. We subsequently retracted both articles for violating our policy against citation manipulation. During our investigation into these two cases, we were surprised to learn how little has been done, either by the publishing industry or by research funders and university administrators, to define appropriate citation practices.

Over the following weeks we decided to investigate the extent of this problem by trying to identify articles that have a high concentration of citations to a single journal during the previous two years, which is the time window used in calculating the Impact Factor. Although our analysis was limited to articles from 2010 and 2011 that were indexed in the Web of Science, we were able to identify dozens of articles with more than 100 citations to articles published in a single journal during the previous two years. In some of the more extreme cases, a handful of articles written by the journal’s editors had accounted for a significant fraction of the journal’s total Impact Factor, and several of these cases were in journals that are among the most highly-ranked titles in their field.
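To make the arithmetic behind this distortion concrete, here is a minimal sketch of the two-year Impact Factor calculation described above. The journal sizes and citation counts are invented for illustration; the point is only that a single heavily self-citing article can move the score by a large margin.

```python
# Sketch of the two-year Impact Factor: citations received in a given year
# to items published in the previous two years, divided by the number of
# citable items published in those two years. All numbers are hypothetical.

def impact_factor(citations_to_window, citable_items_in_window):
    """Citations to the two-year window / citable items in that window."""
    return citations_to_window / citable_items_in_window

# A journal with 200 citable items that attracted 300 ordinary citations:
baseline = impact_factor(300, 200)        # 1.5

# One editor-written review adding 100 extra citations to the same window:
inflated = impact_factor(300 + 100, 200)  # 2.0

print(baseline, inflated)
```

With these invented numbers, one article raises the journal's score by a third, which is why a handful of such articles can account for a significant fraction of a journal's total Impact Factor.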

Most of the articles that we identified could fit into the following three categories: a review article or editorial written by the journal’s editors which reviews the previous volumes of the journal; a bibliometric study based entirely on articles published in the journal during the past two years; and a review of a conference whose proceedings were published in the journal during the previous year.

We were surprised to see that in most cases the authors of these articles, who were typically members of the journal’s Editorial Board as well, made no attempt to conceal the large number of citations to their journal, which would suggest that they didn’t consider these citations to be inappropriate. In fact, if one were to look at these articles without considering the distorting effect that they have on citation-based metrics, they would appear to be completely harmless, and may even be of some interest to the journal’s readers.

Regardless of whether it was the intention of these authors to artificially inflate the Impact Factor of the journal being cited, the end result is that these kinds of articles have a significant distorting effect on the ranking of journals within a subject category. Since many researchers are under increasing pressure to publish in journals with the highest possible Impact Factor, the publication of such manuscripts is likely to increase, posing a real challenge to the Impact Factor as well as other citation-based metrics.

While it may seem that this problem only affects journal-level metrics, like the Impact Factor, it is very possible that the problem will become much worse if we move towards author-level metrics, like the h-index or g-index. Nearly all of the articles that we identified as having an excessive number of citations to a given journal were written by the editors of the journal being cited, which makes sense given that these editors have an interest in increasing their journal's Impact Factor. If there is a future shift away from journal-level metrics towards author-level metrics, nearly every scientific researcher will have an incentive to increase the number of citations to their own work. As a result, I would expect the amount of citation manipulation to be orders of magnitude higher than what we see in the current system. Moreover, this kind of manipulation would be very difficult to detect, since there are many perfectly valid reasons why an author might frequently cite their own work or that of close colleagues, making it hard to determine when inappropriate citation manipulation has taken place.
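The h-index mentioned above is the largest number h such that an author has h papers with at least h citations each, which shows directly why self-citation is effective at that level. A minimal sketch, with invented citation counts:

```python
# h-index: the largest h such that the author has h papers with at
# least h citations each. Citation counts here are hypothetical.

def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

papers = [25, 8, 5, 4, 3, 1]   # citations per paper
print(h_index(papers))          # 4: four papers with at least 4 citations

# Two extra (self-)citations, one each to the two weakest qualifying
# papers, are enough to lift the index:
print(h_index([25, 8, 5, 5, 5, 1]))  # 5
```

Because a small, targeted number of citations at the margin changes the index, author-level gaming needs far fewer citations than inflating a whole journal's Impact Factor.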

I do not believe that there is going to be an easy solution to the problem of citation manipulation overall, although there are relatively straightforward ways to identify the most extreme cases. Nevertheless, given how important citations are within the scholarly record, I think that publishers, research funders, and university administrators need to start working to find ways of addressing this problem.

Paul Peters is chief strategy officer at Hindawi Publishing. Readers may contact him at paul.peters@hindawi.com for examples of the cases described in this article