Measuring in context

As scholarly publishing communities search for the meaning behind metrics, the need for context is becoming crystal clear, reports Rebecca Pool

At a time when much of the world was waking up to coronavirus, and China was emerging from its worst, the Chinese science and education ministries released guidelines regulating the use of the Science Citation Index (SCI) in research institutions. In short, Chinese institutions were told to stop evaluating, promoting or recruiting researchers based on their numbers of published papers and citations. At the same time, payments for publishing in journals were to end.

In China, the metrics collected by Clarivate Analytics' SCI, which covers around 9,000 journals, have been a key indicator used to evaluate researchers, allocate funding and rank institutions. This practice helped China become second only to the US for publishing research papers in international journals, but it also raised concerns that some researchers were prioritising research quantity over quality.

Indeed, the metrics-focused approach is considered to have led to some researchers submitting plagiarised papers, excessively citing their own articles and even hijacking peer review processes to review their own research. As the Chinese ministries' statement laid out: ‘It is inappropriate for higher education institutions to set paper publication requirements... a sound assessment system should be developed, in which different weight of paper publication is put on the evaluation of different types of scientific research work.’

Clearly, the latest move from China will have a profound effect on the nation's academics. The new requirements could well lead to a decrease in international publications, with some universities falling in global higher education rankings.

But the latest move will also help the government to realise its desire to have the nation develop its own academic standards while stepping away from the over-use of single-point metrics. And importantly for many, it aligns well with global movements, such as the San Francisco Declaration on Research Assessment (DORA) and Leiden Manifesto, that aim to move away from single-point metrics to broader measures of research performance.

Indeed, for Martin Szomszor, director of the Institute for Scientific Information and head of research analytics at Clarivate, the latest move from China represents success. ‘This is a clear move away from using single-point metrics to evaluate institutions and people,’ he says.

‘We've been engaged with various bodies in China over the last couple of years and have watched them evolve their thinking very rapidly towards something more in line with European and North American research evaluation.’

Early last year, Szomszor and colleagues from ISI released the report 'Profiles, not metrics', which highlighted the critical detail that is lost when data on researchers and institutions are distilled into a single-point metric or league table. The report set out alternatives to academia's well-used Journal Impact Factor, h-index and average citation impact. For example, it illustrated how an impact profile, which shows the real spread of citations, could be used to demonstrate an institution's performance instead of an isolated Average Citation Impact.
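
The impact profiles described in the report are, in essence, citation distributions. As a purely illustrative sketch (the citation counts and bands below are invented, and this is not ISI's methodology), the contrast between a single average and a profile might look like this:

```python
from collections import Counter

# Hypothetical citation counts for ten papers from one institution
citations = [0, 0, 1, 2, 2, 3, 5, 8, 40, 120]

# A single-point metric: the mean, heavily skewed by two highly cited papers
average_citation_impact = sum(citations) / len(citations)
print(f"Average Citation Impact: {average_citation_impact:.1f}")  # 18.1

# An impact profile: how many papers fall into each citation band
bands = [(0, 0), (1, 4), (5, 9), (10, 49), (50, float("inf"))]
profile = Counter()
for c in citations:
    for low, high in bands:
        if low <= c <= high:
            profile[(low, high)] += 1
            break

for (low, high), n in sorted(profile.items()):
    label = f"{low}+" if high == float("inf") else f"{low}-{high}"
    print(f"{label:>6} citations: {n} papers")
```

The average suggests a uniformly well-cited portfolio, while the profile makes it obvious that most papers are rarely cited and two outliers carry the number.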

‘[The report] has become a really useful tool, particularly around the customer-facing part of the business,’ says Szomszor. ‘In the last few years, the search for other types of metrics and indicators has been growing steadily... and what is happening in China now is very positive.’

Daniel Hook, chief executive of Digital Science, has also been eyeing China's move away from a single-point metrics-focused evaluation system with interest. ‘We are seeing unsettled times for metrics in China,’ he says. ‘The government has effectively [asked] each institution to locally define the metrics that are important to it, and that it would like to work on, and so create a new norm for China from the ground up.’

Like many, Hook is not a fan of single-point metrics and ranking. His company invested in the non-traditional bibliometrics company Altmetric as early as 2012, and introduced its Dimensions database in 2017. The research database links many types of data, including Altmetric data, awarded grants, patents and, more recently, datasets, with a view to moving research evaluation practices beyond basic indicators. Digital Science also joined DORA in 2018.

‘I have given public talks where I've said that the h-index should be abominable to an academic – it's the reduction of your life's work into a single number,’ says Hook. ‘And if you agree that the h-index is bad, then you should view ranking as cataclysmic as it's not just reducing your work to a single number, it's reducing everybody's work in your institution to a single number in one go.’
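
For readers unfamiliar with the measure Hook is criticising, the h-index is simple to compute: it is the largest h such that h of a researcher's papers each have at least h citations. A minimal sketch, with made-up citation counts, shows how very different bodies of work collapse to the same single number:

```python
def h_index(citations):
    """The largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Two very different bodies of work reduce to the same single number
print(h_index([10, 9, 8, 7, 6, 6]))      # 6
print(h_index([500, 300, 6, 6, 6, 6]))   # 6
```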

Instead, Hook believes that the time has come to move to 'more subtle metrics', as well as metrics that draw on a greater diversity of data. And this, of course, is echoed in China's move away from the citations-based incentives that have inevitably led to questionable research practices.
‘For me, all of this is about context,’ says Hook. ‘And I think we are now entering the ‘Age of Context’ in that we're moving out of a time when any metric is good enough and into an age where context is critical.’

Stacy Konkiel, director of research relations at Altmetric, firmly believes that more and more players across the scholarly publishing community are exploring the context behind metrics: ‘I see this trend of users, evaluators and publishers looking at normalised metrics, and not just the numbers, or even better, they're looking at the data that underlies the numbers. Users are also getting really good at using and interpreting the data with a mind towards application. For example, a humanities researcher might say, “What can I be doing to help shape the public discourse around my research areas?”’
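
Field normalisation is one common way of adding the context Konkiel describes: a paper's citations are compared with the average for papers of the same field and year, so a value of 1.0 means 'about average for comparable work'. As a sketch only, with invented baselines standing in for the figures a real bibliometric database would supply:

```python
# Invented field/year baselines: average citations per paper for comparable publications.
# Real baselines would come from a bibliometric database, not hard-coded values.
field_year_baseline = {
    ("oncology", 2018): 14.2,
    ("history", 2018): 1.6,
}

def normalised_citation_impact(citations, field, year):
    """Citations relative to the expected count for the paper's field and year.
    A value of 1.0 means the paper is cited about as often as comparable work."""
    return citations / field_year_baseline[(field, year)]

# The same raw count of 14 citations means very different things in different fields
print(round(normalised_citation_impact(14, "oncology", 2018), 2))  # 0.99: about average
print(round(normalised_citation_impact(14, "history", 2018), 2))   # 8.75: far above average
```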

As Konkiel points out, 2020 marks a decade since the Altmetrics manifesto was published, and much has changed. Her recent observations on what could be called a more thoughtful use of metrics go hand-in-hand with the community-wide move towards more responsible metrics, as clearly evidenced by the recent China developments.

‘I think researchers and other players within scholarly communication are all a bit burnt out on this idea that everything needs to be quantified,’ says Konkiel. ‘We've seen some of the corrosive effects of quantification and many of us recognise that we can't say we'll never use metrics, so instead we're taking a step back and thinking “How can I use metrics more mindfully and how can I use other kinds of data to help me understand”.’

And without a doubt, altmetrics are no longer a controversial topic, having become more and more normalised over time. Perhaps surprisingly, Konkiel also highlights how funders have embraced altmetrics as a means to help them understand the research that is funded. As she puts it: ‘Some funders are even venturing into developing new metrics, dashboards and other tools that are really creative and informative. We're definitely seeing more people championing the use of altmetrics, and even more so with coronavirus. More researchers are having to communicate research online so there's been a lot more interest in altmetrics to help them understand how their research is being received.’

Taking a step back, Konkiel also sees institutions, worldwide, using metrics and altmetrics in new ways, including to manage reputations. ‘I think institutions are recognising the importance of tracking ongoing conversations around research, in terms of the profile of their overall research, or “university brand”,’ she says. ‘Also, instead of just using metrics to talk about performance, I see these institutions feeding that [information] into strategies around communications and other activities.’

New avenues

The United Nations Sustainable Development Goals (SDGs) are, increasingly, a hive of activity for metrics and altmetrics. Launched in 2015, the 17 goals aim to promote a fairer and more sustainable world by 2030, by tackling issues including poverty, health, hunger, education and gender equality.

Research into these areas is considered to be crucial to help transform the world. And, as Konkiel says: ‘[Many researchers and organisations] are looking for an alternative to your typical league tables and rankings and want to show impact and influence... these sustainable development goals are really valuable in terms of that.’

Indeed, many researchers, funders, publishers and institutions have been looking at how their research can accelerate progress towards the goals. A case in point is Springer Nature and the Association of Universities in the Netherlands, which joined forces late last year to provide data and tools to, for example, help researchers address the societal relevance of their work. As part of this, Digital Science worked with the partners to categorise all Dutch scholarly output from the last decade into five of the sustainable development goals.

What's more, the company recently added new category filters for the 17 goals in Dimensions, so users can filter for research relevant to them. Analyses revealed that, of the 109 million articles in Dimensions, more than five million fell into one or more of the 17 UN goals. The company now intends to apply its categorisation technology across grants, patents and other data in Dimensions.
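
As a purely hypothetical illustration of what SDG tagging and filtering involves (the keywords, goal mappings and records below are invented, and this is not Digital Science's actual classification technology), a toy version might look like this:

```python
# Invented keyword lists and records; real SDG classification in Dimensions is far more
# sophisticated than simple keyword matching on titles.
SDG_KEYWORDS = {
    "SDG 3: Good Health and Well-being": {"malaria", "vaccine", "mortality"},
    "SDG 7: Affordable and Clean Energy": {"solar", "photovoltaic", "wind turbine"},
    "SDG 13: Climate Action": {"climate", "carbon", "emissions"},
}

articles = [
    {"title": "Solar photovoltaic efficiency under cloud cover"},
    {"title": "Carbon emissions and urban climate policy"},
    {"title": "A history of medieval trade routes"},
]

def tag_sdgs(article):
    """Return the goals whose keywords appear in the article title."""
    text = article["title"].lower()
    return [goal for goal, words in SDG_KEYWORDS.items()
            if any(word in text for word in words)]

# Keep only research relevant to one or more goals, as a category filter would
relevant = [(a["title"], tag_sdgs(a)) for a in articles if tag_sdgs(a)]
for title, goals in relevant:
    print(title, "->", goals)
```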

‘We're really trying to understand the level at which the university sector is engaging with the sustainable development goals,’ says Hook. ‘I believe that profiling institutions, understanding what contribution they are giving to an area, understanding the strengths... is going to be super-important [in the future].’

Szomszor concurs: ‘The 17 goals are framing the wider impact of research according to a shared agenda.’

‘We've been seeing lots of progress in reporting, for example, university contribution according to these goals and there will be a continuing focus on developing metrics here,’ he adds. ‘There is an ambition to report on the sustainable development goals at a country level, an institutional level and also with the researchers themselves.’

Looking beyond the UN's all-important markers for development, Szomszor also points to a rising interest in the socio-economic impact of research, particularly in the social sciences. At the same time, he sees growing interest in knowledge exchange and commercialisation; that is, better understanding the collaborations between universities, businesses and the public sector.
 
‘For example, we want to understand how you can measure the success of commercial engagement for, say, a university, or the outcomes of research that a funder has awarded,’ he says. ‘We're going through this period of trying to understand the data and figure out how we might be able to come up with the metrics.’

But what about the ever-thorny issue of reproducibility? The ability to replicate the findings of a research publication is fundamental to the scientific method, yet in recent years reproducibility problems have surfaced again and again across many journals. As such, many organisations have been experimenting with metrics to measure this all-important tenet of scientific research.

One key player, Ripeta – a US start-up that joined Digital Science last year – has developed tools that use natural language processing to search research manuscripts for key reproducibility criteria. These include the presence of a data availability statement, data location, code availability and the presence of a study purpose or objective.

As Ripeta chief executive officer, Leslie McIntosh, explains: ‘What we're really looking at is the “hygiene” of a paper as that forms the foundation of reproducibility. Has the researcher shared their code and their data? With this we can then drill down into more granular areas such as is there enough information to reproduce the bench science or reproduce [the research] computationally.’
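
As a rough illustration of the kind of 'hygiene' checks McIntosh describes (this keyword-and-regex sketch is not Ripeta's method, and the patterns and sample text below are invented), a screening pass over a manuscript might look like this:

```python
import re

# Invented patterns and sample text; Ripeta's production tools use natural language
# processing rather than simple keyword matching.
CHECKS = {
    "data availability statement": re.compile(
        r"data availability|data (are|is) available|available from the corresponding author", re.I),
    "code availability": re.compile(
        r"code (is |are )?available|github\.com|source code", re.I),
    "study purpose": re.compile(
        r"the (aim|purpose|objective) of this study", re.I),
}

def screen_manuscript(text):
    """Report which reproducibility criteria appear to be present in the manuscript text."""
    return {name: bool(pattern.search(text)) for name, pattern in CHECKS.items()}

manuscript = (
    "The aim of this study was to model transmission dynamics. "
    "Data are available in a public repository, and the source code is on github.com."
)
print(screen_manuscript(manuscript))
# {'data availability statement': True, 'code availability': True, 'study purpose': True}
```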

According to McIntosh, towards the beginning of the coronavirus pandemic her company used its tools to analyse 535 pre-prints on COVID-19 from the medRxiv and bioRxiv servers: ‘To review these manually would have taken about 43 hours but we managed to do this in less than 30 minutes.’

McIntosh is also seeing her company's tools being widely used at the pre-print stage, which, as she says, gives authors an opportunity to improve their manuscripts. Funders are also using the tools post-publication to assess how funds have been used and how a publication is progressing.

Still, as the use of her reproducibility tools, and of metrics in general, rises, McIntosh is, like her peers, keen to see all-important context, and she emphasises that users need to understand the processes behind the science. ‘I want metrics to enhance science rather than derail it,’ she says. ‘We need balance and we also need to be careful because we want metrics to help to inform the science rather than drive the science.’