Publication metrics in a changing landscape

David Stuart discusses the opportunities and limitations of various ways of measuring the impact of scientific results and discussion

Scientific discourse is in a process of rapid transition. Journal articles have not only been made available electronically, but are often available in different versions. Pre-prints in institutional repositories are indexed alongside final versions on journal websites and preliminary versions presented at conferences. The narrative form has been joined by the publishing of datasets and computer code in an attempt to encourage transparency and reuse, while the discussions around research take place on multiple online platforms as well as within traditional journals. This increasingly complex landscape requires increasingly complex metrics, both to aid with the filtering of content and to ensure that researchers receive the credit that they deserve. This offers both challenges and opportunities to the information professional.

A wealth of metrics
Scientometrics, the quantitative measurement and analysis of science, has traditionally made use of the citations from one article to another to investigate the impact of research. It has been used to measure the impact of both individual articles and aggregations of articles, for example, all those articles published by an individual, a journal, a research group, or even a country.
This is not because citations are the only sort of impact that matters, or necessarily even the most important type of impact, but rather because they have been one of the few measurements available. With the transfer of scientific discourse to the web, however, there is now a seemingly endless variety of metrics available for measuring the impact of research. The problem is no longer one of finding data, but rather one of selecting the best data and using it in the most appropriate manner.

The publication metrics that have gained the most interest are those from publishers’ websites and large social network sites. Journals and repositories are collecting and sharing a wide range of article-level metrics that provide insights into how users are accessing the research that is published. Views and downloads can now supplement citation data, and COUNTER, an international initiative to establish standards for online usage statistics, has proposed usage factors to complement the traditional journal impact factor.
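To make the idea concrete, the sketch below shows one way a usage factor might be calculated. It assumes, as the COUNTER-backed proposals have suggested, that a median of per-article usage counts is taken across a journal's output in a measurement window, by analogy with the impact factor's average of citation counts; the figures and the exact definition are illustrative rather than a published standard.

```python
from statistics import median

def usage_factor(downloads_per_article):
    """A minimal sketch of a journal usage factor.

    Assumes the usage factor is the median of per-article usage counts
    (views or downloads) for items published in the measurement window.
    """
    if not downloads_per_article:
        return 0
    return median(downloads_per_article)

# Hypothetical download counts for the articles published in the window
print(usage_factor([120, 45, 300, 87, 15]))  # -> 87
```

Using a median rather than a mean makes the indicator less sensitive to a handful of very heavily downloaded articles, which is one of the criticisms levelled at the citation-based impact factor.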

In addition, the sharing and mentioning of scientific articles on social network sites such as Twitter and Facebook forms the basis of altmetrics, alternative metrics making use of the large quantities of structured data that these sites provide to demonstrate impact beyond the traditional confines of scientific discourse. Publishers are increasingly publishing altmetrics alongside usage figures, and a number of tools now exist for measuring the impact of work across multiple journals and multiple sites: Altmetric, ImpactStory, and Plum Analytics.
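For readers curious about how such figures are gathered, the sketch below queries Altmetric's free public API for a single DOI. It is an assumption-laden illustration: the endpoint and field names are taken from Altmetric's v1 API as commonly documented, may change, and may require a key for heavier use; the DOI shown is hypothetical.

```python
import json
from urllib.request import urlopen

def altmetric_counts(doi):
    """Fetch attention counts for a DOI from Altmetric's public API (sketch).

    Endpoint and field names are assumptions based on the free v1 API;
    a missing record returns HTTP 404, which this sketch does not handle.
    """
    url = f"https://api.altmetric.com/v1/doi/{doi}"
    with urlopen(url) as response:
        data = json.load(response)
    return {
        "twitter": data.get("cited_by_tweeters_count", 0),
        "facebook": data.get("cited_by_fbwalls_count", 0),
        "score": data.get("score", 0),
    }

# Example call (hypothetical DOI):
# print(altmetric_counts("10.1000/example.doi"))
```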
There are also new research outcomes that offer the potential of new publication metrics. Datasets are increasingly packaged as distinct research outcomes. They are not only being deposited in data archives but also being associated with new data-centric publications such as Scientific Data from the Nature Publishing Group and Wiley’s Geoscience Data Journal. The computer code that has been used to collect and model this data is also increasingly made publicly available for reuse and development.

These new types of publication require new metrics. It is important to know not only how many times a data publication has been cited, but also how well integrated the dataset is into the semantic web. Similarly, it is not just a question of how many times the computer code has been reused, but of how many times it has been independently developed and those new versions used.
Realising this potential wealth of metrics still requires a lot of work: standardisation in the way data is collected and reported by individual sites, and in the way data is aggregated across multiple sites. For example, how do we combine the impact of multiple versions of the same document, and how do we aggregate the impact of a single version across multiple social networks? A small sketch of the aggregation problem follows below.
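The sketch assumes each version of a work (a preprint, the publisher's version of record) can be mapped to one canonical identifier, and simply sums per-platform counts across versions. The identifiers, mapping, and counts are all hypothetical.

```python
from collections import defaultdict

# Hypothetical mapping from version identifiers to one canonical work
CANONICAL = {
    "10.48550/arXiv.1234.5678": "work-001",   # preprint
    "10.1000/journal.2014.001": "work-001",   # version of record
}

def aggregate(mentions):
    """Sum per-platform counts across all versions of the same work.

    `mentions` is an iterable of (version_id, platform, count) tuples
    harvested from different sites; totals are keyed by canonical work.
    """
    totals = defaultdict(lambda: defaultdict(int))
    for version_id, platform, count in mentions:
        work = CANONICAL.get(version_id, version_id)
        totals[work][platform] += count
    return totals

mentions = [
    ("10.48550/arXiv.1234.5678", "twitter", 12),
    ("10.1000/journal.2014.001", "twitter", 30),
    ("10.1000/journal.2014.001", "facebook", 5),
]
print(dict(aggregate(mentions)["work-001"]))  # {'twitter': 42, 'facebook': 5}
```

Whether counts should simply be summed, weighted by platform, or deduplicated where the same person mentions several versions is precisely the standardisation question raised above.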

These metrics offer the potential for increased scientometric services within information services. However, these services may not always have a human face. Information professionals may have an increased role in helping researchers to demonstrate the impact of their work but the filtering and pushing of content to users is likely to become increasingly automated.

Filtering and credit
Traditionally, citation analysis has primarily been used for evaluating the impact of research rather than as a tool for filtering articles. Although a library may have used journal impact factors to identify the key journals within a field, researchers would nonetheless be expected to carry out a comprehensive search of the literature related to their research. But as the amount of content created continues to gather pace, automatic filtering becomes increasingly important if researchers are to keep at least a passing understanding of the important issues across their field as a whole, rather than only the increasingly narrow part they are investigating.

Such filtering has barely begun; it seems inevitable that it will eventually move beyond specialised services such as Altmetric and be incorporated into more open and accessible services such as Google Scholar. These products are likely to require active rather than passive engagement if the subtleties in information behaviour and practices between different fields are to feed into the filtering process, as the sketch below illustrates. But if information is increasingly filtered and pushed to researchers, there will inevitably be less of a role for the information professional.
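One hypothetical form that such active engagement might take is a researcher supplying field-specific weights for the metrics they trust, with new articles ranked accordingly. The weights, metric names, and figures below are invented purely for illustration.

```python
# A minimal sketch of field-weighted filtering: each article's metrics are
# combined using weights a researcher might set for their own field.
# All names, weights, and figures are hypothetical.

def score(article, weights):
    """Weighted sum of an article's metric values."""
    return sum(weights.get(metric, 0) * value
               for metric, value in article["metrics"].items())

weights = {"citations": 1.0, "tweets": 0.1, "downloads": 0.01}

articles = [
    {"title": "Paper A", "metrics": {"citations": 4, "tweets": 80, "downloads": 900}},
    {"title": "Paper B", "metrics": {"citations": 12, "tweets": 5, "downloads": 200}},
]

for article in sorted(articles, key=lambda a: score(a, weights), reverse=True):
    print(article["title"], round(score(article, weights), 2))
```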
Publication metrics will nonetheless continue to have an important role in the assignment of credit, albeit of a more nuanced type than before. New metrics can potentially demonstrate impact in communities that would not have been represented in traditional scientific discourse (for example, the public) and demonstrate the value of products that would not have been captured previously (such as data collections). If filtering offers a challenge to the traditional role of the library and information professional, then the issue of credit provides a more obvious opportunity. As demonstrating impact becomes more important, and the landscape of publication metrics more complex, the traditional bibliometric competences will undoubtedly become more important.

A cautionary future
There seems to be an inevitability to publication metrics taking an increasingly important role within information services, both for filtering and for the attribution of credit. Nonetheless, caution is needed when applying these metrics. It will always be important to look beyond the filters, and credit can never be reduced to quantitative indicators alone. Part of this is attributable to what has been termed Goodhart’s Law: ‘When a measure becomes a target, it ceases to be a good measure.’

When publication metrics such as the number of Twitter mentions or downloads become part of the way people find research, or how credit is assigned, then there is an inevitability that some people will try to abuse the system and achieve greater impact than the research deserves.
Part of the problem in identifying potential abuse of the system lies in the cultural norms of what is and isn’t acceptable scientific practice, and these norms are changing along with the publishing landscape. Whereas once academic search engine optimisation may have seemed inconceivable, maximising the impact and visibility of research is now a recognised part of the publishing process, and a service such as Kudos appeals partly through the promise of levelling the playing field. There will always be practices that are frowned upon by the scientific establishment, but as the scientific community sees itself more and more as a marketplace, the practices deemed disreputable will become fewer and fewer.

Even if publication metrics are not manipulated, they can only ever tell part of the story about the value of research to different users. The idea of reducing impact to a single metric, whether an H-Index or a journal impact factor, is understandably appealing as it allows for the simple comparison of similar aggregations. Nevertheless it is also fundamentally flawed: people and their work are multi-faceted, and so is their impact. That a new battery of metrics is now available is undoubtedly an improvement on the existing limited measures of impact, but any metric is nevertheless a simplification of actual impact.
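The H-Index itself shows how much a single number compresses: it is the largest h such that h of a researcher's papers each have at least h citations. The short sketch below computes it from a hypothetical set of citation counts.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers
print(h_index([25, 8, 5, 4, 3, 1]))  # -> 4: four papers have at least 4 citations each
```

Two researchers with very different publication profiles can share the same h-index, which is exactly the kind of nuance a single figure cannot capture.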

All publication metrics are limited, but it is through understanding their limitations that some of them can be useful. The wide variety of publication metrics now available undoubtedly has a role to play in the future of information services. However, it is important that their limitations are recognised if they are not to be abused in the way that citation metrics have been. This is a role for the library and information professional. Many will be sceptical about the value of the new metrics, both for the filtering of content and the assigning of credit, and such scepticism is essential in making sure the most appropriate metrics are identified and that they are only used where applicable.