Current research assessment could miss the big picture

Traditional methods of research assessment could be failing those researchers who are fully embracing the possibilities of Web 2.0, argues David Stuart

Web 2.0 technologies have provided a host of new ways for researchers to publish, share, and discuss research in their field. The formal publication and discussion of research that has mainly taken place within journals, conference papers, and other traditional forms of publication can now be supplemented by a range of less formal publications. Blogs provide a forum for ongoing discussion during the research process. Wikis allow numerous contributors to work on a document at the same time, while social bookmarking and microblogging services allow researchers to instantly highlight documents they deem worthy of comment.

These new means of communication potentially allow for the faster dissemination of ideas and feedback, not only among researchers within traditional research institutions, but also beyond the walls of academia, where the massification of higher education has produced an ever more highly educated workforce. If the use of such technologies is to be encouraged, it is important that we have appropriate ways of measuring the impact of the new methods of publication. Attention has been described as the currency of academia, but at the moment many of the most innovative researchers, who are embracing the opportunities the new technologies offer, are being short-changed by the ways research is judged.

Estimating attention

In the traditional publishing model it is possible, at least theoretically, to estimate the attention that a researcher has received according to how many times their work has been cited in comparison to other researchers in the field. Citation analysis is based on the idea that, when researchers publish their research, they will cite those sources that have influenced their work. By counting up the citations we can determine the impact of the different contributions. As such, citation-based metrics have been used to inform decisions on hiring researchers, offering tenure, and allocating research funding.
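
To make the counting mechanism concrete, the sketch below computes the h-index, one widely used citation-based metric (a researcher has an h-index of h if h of their papers have each been cited at least h times). The citation counts are hypothetical, purely for illustration.

```python
def h_index(citation_counts):
    """Return the largest h such that the researcher has at least
    h papers with at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for the papers of two researchers
researcher_a = [25, 18, 12, 7, 5, 3, 1]   # steady, broad influence
researcher_b = [40, 2, 1, 0, 0]           # one highly cited paper

print(h_index(researcher_a))  # 5
print(h_index(researcher_b))  # 2
```

Note how the single number hides the shape of each record: researcher B's one influential paper barely registers.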

In reality, however, estimating the attention that a work has received is not so simple. Citation indexes only include a limited number of publications and a researcher’s contribution to science cannot easily be reduced to a single metric. In addition, the publication process means that such metrics are necessarily slow and the focus on a single metric increases the chance that it will be open to abuse. That is not to say that citations cannot provide useful insights, merely that these insights are limited.

In contrast, Web 2.0 technologies provide the opportunity for research assessment to be based on a far larger corpus of documents, allow a far wider range of research activity to be captured more quickly, and are potentially more difficult to abuse.

It has always been the case that a significant proportion of publications have been excluded from citation indexes. Whole formats have either been ignored or vastly underrepresented, from monographs and conference proceedings to the grey literature that comes from think tanks, commercial research reports, and government organisations.

And although Google Scholar incorporates a far greater variety of documents than has traditionally been included in citation indexes, this has come at the price of lower reliability in the results. The automatic collection of data from across the web means that there will necessarily be mistakes in the indexing, and as such the traditional, primarily journal-focused, citation indexes continue to be considered the indexes of authority and are those predominantly used in research assessment. However, while a more inclusive citation index of traditional publications would be more useful, it would nevertheless fail to take into consideration the recent changes in the publishing process.

Lack of innovation incentive

Today researchers are encouraged to publish research in new and innovative ways, engaging in online conversations with other researchers as well as the public through blogs, videos, wikis, and the opening up of datasets. However, unless metrics are established that take into account these new technologies, and are incorporated into research assessment, more often than not the technologies will only be used half-heartedly, or not at all.

The half-hearted approach to new publishing opportunities is exemplified by the many research project blogs that litter the web. In theory, the blog is an ideal medium for documenting a research project, and a project blog has become a near-obligatory component of research proposals. It not only allows methods and findings to be shared during the research process, rather than only once the research is finished, but also provides a forum for external parties to give feedback and contribute to the project.

In reality, however, many project blogs are merely going through the motions, created and updated with one or two posts so that the relevant box can be ticked. When, from the perspective of research assessment, there is no ostensible difference between a blog that is highly cited and has tens of thousands of hits a month and a blog that no one visits, there is little incentive for researchers to create a vibrant, worthwhile blog. Rather than pushing the boundaries of what the new technologies enable researchers to do, most researchers’ limited time is likely to be spent focusing on those aspects that can contribute to their research profile, i.e., the publication of articles in traditional peer-reviewed journals.

The potential for citation analysis to suffer from the information equivalent of Gresham's law, 'bad money drives out good', has long been recognised: when citations made according to the norms of science are given the same value as those coined simply to inflate one's own perceived worth, the 'good' citations may be driven out. The continued emphasis on traditional citations in research assessment, however, means that it is not only bad citations devaluing good ones; traditional publishing may also be driving out the forms of publishing best suited to the research.

Call for change

The changing publishing environment has led to the call for alternative metrics (altmetrics.org/manifesto) that better reflect today's research practices and take advantage of current social media technologies. For such metrics to have an impact, however, they need to be widely accepted by the research community and be seen to reflect real engagement rather than the buzz that surrounds newsworthy, but not necessarily academically significant, research.

‘The response among academics in general is broadly speaking non-existent. Researchers as a group are not particularly interested in measurement of research and are often dismissive of new measures of any type,’ observed Cameron Neylon, one of the researchers behind the call for altmetrics. In addition, he said that ‘broadly speaking, researchers are fairly contemptuous of anything that happens on the web that includes comments from the wider public.’

Not all researchers are indifferent to the idea though: ‘Among the small subgroup of researchers and others who are interested in measurement there has been a very positive response. The term itself is now being widely used in that subset of the research community that is online and interacting with the wider world and it is getting wider attention from funders, service providers, and other interested parties,’ continued Neylon.

Gaining acceptance

Nonetheless, he believes that two things need to happen before any measurement will be accepted by the wider research community. Firstly, he said, the measure has to be seen to matter, which means it needs to be accepted as evidence by funders, or by promotion and employment panels. Secondly, it has to be seen to measure something that has value to the community.

‘For the first, there is some interest and presenting alt-metrics as evidence of wider impacts is a plausible route forward as this can provide quantifiable evidence of changing views or awareness,’ said Neylon. ‘For the second, I think the first places that such measures will be accepted is in the same kind of areas, engagement, education, demonstrating public and social impact. Further down the track we will start to see these kinds of measures more widely used as evidence of research impact, of re-use and interest in papers and traditional outputs, but that is likely to take longer.’

Challenges of the changing web

There is no doubt that there is a need for improved metrics that reflect the increasingly complex information landscape of today's researchers, and that encourage them to take full advantage of the technologies available. However, establishing these metrics will not be easy. Social media sites go in and out of fashion, leaving little time for any one particular metric to gain widespread acceptance. The impact of a researcher's work on Twitter may seem relevant today; in two years' time it could be deemed as relevant as a researcher's impact on MySpace.

It is also possible that the increased interest in alternative metrics will see the emergence of the academic equivalent of link-farms, with organisations offering services to help increase a researcher’s online impact in the same way that search engine optimisers offer to increase a website’s ranking on Google. However, such problems are not insurmountable: metrics can be established that transcend any single site, and algorithms developed to help identify those who are trying to artificially inflate their impact.
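
As one illustration of how such gaming might be detected, the sketch below flags a researcher whose online mentions are concentrated in a handful of accounts, on the assumption that organic attention tends to come from many independent sources. The threshold, the data shape, and the heuristic itself are assumptions for illustration, not an established detection algorithm.

```python
from collections import Counter

def looks_inflated(mention_sources, top_n=3, threshold=0.8):
    """Flag a mention stream if the top_n most active sources
    account for more than `threshold` of all mentions."""
    if not mention_sources:
        return False
    counts = Counter(mention_sources)
    top_share = sum(c for _, c in counts.most_common(top_n)) / len(mention_sources)
    return top_share > threshold

organic = ["a", "b", "c", "d", "e", "f", "g", "h", "a", "b"]
farmed = ["bot1"] * 50 + ["bot2"] * 40 + ["other"] * 5

print(looks_inflated(organic))  # False: attention spread across many sources
print(looks_inflated(farmed))   # True: two accounts dominate the mentions
```

Real systems would need to combine many such signals, since any single heuristic can itself be gamed.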

In the Web 2.0 world there are a host of different metrics that may be brought forward to represent a person’s online impact, and there is a lot of research to be carried out before we establish which are the most appropriate.
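
By way of illustration, one naive way to combine several Web 2.0 signals into a single figure is a weighted sum, as sketched below. The signal names and weights are entirely hypothetical; choosing, normalising, and validating such components is precisely the research that remains to be done.

```python
from dataclasses import dataclass

@dataclass
class OnlineActivity:
    blog_citations: int       # links to the researcher's blog posts
    bookmarks: int            # social-bookmarking saves of their papers
    microblog_mentions: int   # e.g. tweets linking to their work

# Illustrative weights only; real metrics would need normalisation
# by field, audience size, and time.
WEIGHTS = {"blog_citations": 3.0, "bookmarks": 1.5, "microblog_mentions": 0.5}

def composite_score(a: OnlineActivity) -> float:
    """Naive weighted sum of hypothetical online-impact signals."""
    return (WEIGHTS["blog_citations"] * a.blog_citations
            + WEIGHTS["bookmarks"] * a.bookmarks
            + WEIGHTS["microblog_mentions"] * a.microblog_mentions)

print(composite_score(OnlineActivity(12, 40, 150)))  # 171.0
```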

David Stuart is a research associate at the Centre for e-Research, King’s College London, as well as an honorary research fellow in the Statistical Cybermetrics Research Group, University of Wolverhampton