Moving towards the big picture

As metrics continue to condense research, data-industry players are seeking a broader view, reports Rebecca Pool

Earlier this year, the UK-based Institute for Scientific Information (ISI), part of Clarivate Analytics, published a report called Profiles, not Metrics.

Highlighting the critical detail lost when data on researchers and institutions are condensed into a simplified metric or league table, it laid out alternatives to academia’s familiar h-index, Journal Impact Factor, average citation counts and university rankings.

The angst around single-point metrics is hardly new, but as report author and head of analysis at ISI, David Pendlebury, puts it: ‘This is a topic of constant concern to us. There’s nothing necessarily wrong with any measure, but a single measure is a limited measure, and it’s a failure, even before you begin, to expect that measure to give you an adequate representation of performance.

‘With this report we wanted to take well-used indicators and show how you can go beyond these easy-to-obtain measures and get a deeper understanding with a little more work.’

A case in point is the beam-plot, described in the report as an alternative to the h-index. While the h-index reduces a list of publications and citation counts to a single number, the beam-plot provides more context. Here, each paper’s citation count is normalised against the average for its journal category and publication year, and that value is converted to a percentile for straightforward comparison with other papers. Importantly, the beam-plot provides a single picture of a researcher’s output and impact, showing how it varies within a year and evolves over time.
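
To make the percentile step concrete, here is a minimal Python sketch using made-up citation counts and category/year baselines; it is illustrative only and does not reproduce Clarivate’s exact beam-plot calculation.

```python
# Minimal sketch of the beam-plot idea: each paper's citation count is turned
# into a percentile relative to a baseline of papers from the same category
# and publication year. Data below are hypothetical.
from statistics import mean

def citation_percentile(citations, baseline):
    """Percentile rank of a paper's citation count within its baseline
    (0 = least cited, 100 = most cited)."""
    below = sum(1 for c in baseline if c < citations)
    equal = sum(1 for c in baseline if c == citations)
    return 100.0 * (below + 0.5 * equal) / len(baseline)

# Hypothetical papers for one researcher, each with a category/year baseline.
papers = [
    {"year": 2016, "citations": 12, "baseline": [0, 1, 3, 5, 8, 12, 20, 40]},
    {"year": 2016, "citations": 2,  "baseline": [0, 1, 3, 5, 8, 12, 20, 40]},
    {"year": 2017, "citations": 7,  "baseline": [0, 2, 4, 7, 9, 15, 30]},
]

# A beam plot shows one point per paper plus a yearly summary,
# rather than collapsing everything into a single number.
for p in papers:
    p["percentile"] = citation_percentile(p["citations"], p["baseline"])
    print(p["year"], round(p["percentile"], 1))

for year in sorted({p["year"] for p in papers}):
    year_pcts = [p["percentile"] for p in papers if p["year"] == year]
    print(year, "mean percentile:", round(mean(year_pcts), 1))
```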

‘Two researchers can have, for example, the same h-index, yet their publication and citation history can be very different,’ says Pendlebury. ‘But [with our alternatives] you typically have a graphical representation of a variety of data points, which gives you a much better understanding of the meaning than a single-point measure.

‘Unless you look at the underlying research papers, how often these are cited and how many there actually are, you can be misled... this is a big part of the message we’re trying to get across.’

Metrics experience

Pendlebury has been using Clarivate-owned Web of Science data to study the structure and dynamics of research since the early 1980s. He worked for many years with Eugene Garfield, the godfather of the Journal Impact Factor, and was instrumental in developing Essential Science Indicators, a tool to identify the top-performing research in the Web of Science Core Collection. As he jokes: ‘I’m ancient, but I’m not that ancient’.

Yet throughout his years of experience, concerns have remained. ‘There are many different questions and answers that can be derived from our data, but the important part is to derive an answer using the right data that is appropriate to the question,’ he says.

‘It is a constant educational process with customers and other players that use our data from time to time,’ he adds. ‘You really have to understand how the data are collected, what they mean, and whether they mean something in a particular context.’

What’s more, the rising use of metrics to assess research impact doesn’t sit easily with the ISI head of analysis. He emphasises that the prime aim of ISI’s database and citation data has been to retrieve information, not to assess research impact, as is increasingly the case. ‘Very often these single-point metrics are used for evaluation and consequently [academic] reward, appointment, promotion and grant support... but citation should follow good research and not the other way around.

‘While much of our activity is associated with quantitative evaluation, and data, papers, patents, citations... we have never asserted that this gives a complete picture. This is a picture of one aspect of an academic’s life.’

Mike Taylor, head of metrics development at Digital Science, echoes Pendlebury’s sentiments. Pointing to the ‘basket of metrics’ approach recommended by the Leiden Manifesto in 2015, he says: ‘This is an ongoing conversation and it’s really hard to think of a single number that could answer all the questions you would want to know.

‘The sorts of questions that you would ask an experienced researcher are very different to the kinds of questions you’d like to ask an early-career researcher... we shouldn’t be using the same hammer to crack several different nuts.’

However, both Pendlebury and Taylor are seeing change. Also pointing to the Leiden Manifesto, Pendlebury notes how an increasing dependence on measures, coupled with less reliance on peer review, prompted scientometric researchers to offer advice on best practice in research evaluation: ‘It is the responsibility of those who evaluate [the research] to change what they look at and even what they ask for.

‘Instead of asking for an h-index, the academic administrators could ask for a range of publications and citations that take into account the stage that a researcher is at,’ he adds. ‘This is where we would hope to see some more change.’

Meanwhile, Taylor reports having increasingly sophisticated conversations with grant providers and funders: ‘These conversations are no longer just about citation rates, we are now talking about the broader impact.’

Liz Allen, director of strategic initiatives at open research publisher F1000, concurs. As she points out, F1000 is also beginning to work with funders – in this case, on how it can provide services to effectively publish funded research: ‘In the past, publishers have not worked with the funders, and funders haven’t really worked with industry, so historically the way we have all carried out science has been quite fragmented. But this definitely is changing.

‘For example, with the advent of [systems] such as the REF [Research Excellence Framework], we have recognised the importance of collaboration between researchers and funders,’ she adds. ‘So publishers are now also working more closely with the funding agencies, to make sure they can support what these institutions actually want.’

Collaboration aside, Taylor is certain that knowledge of metrics is broadening across the board, from funding provider to researcher to publisher. Immediately following the release of the Leiden Manifesto and the San Francisco Declaration on Research Assessment (DORA), he believes the wider scholarly community wasn’t fully engaged with these warnings over metrics use.

But fast-forward several years and he sees a difference. ‘Now, there is much more acceptance that the use of metrics is a lot more complicated than originally thought, and I see this as a “coming of age” for responsible metrics,’ he says.

Indeed, awareness of the need for responsible metrics use has been rising in recent years. And for Taylor, responsibility has to be exercised throughout the entire scholarly community.

‘Responsibility starts with the person asking the question and ends with the person who answers that question,’ he says. ‘We have to make sure that we have that context to communicate this all the way along the pipeline, and in this way metrics are being used to enrich our world, rather than simplify it.’

For its part, Digital Science provides detailed information, examples and references for the algorithms it uses to calculate its metrics. What’s more, API access to its products, Dimensions and Altmetric, is given to accredited scientometric researchers via researcher access programmes.

‘We do this to support researchers to understand what the data is all about, so when people use our data, they really have confidence in its citability,’ says Taylor.

According to the Digital Science head of metrics development, as well as an increasing sense of responsibility, the rise of metrics is bringing about a new group of professionals whose careers are based on using metrics.

‘These are not the researchers that have PhDs in scientometrics, but they are practitioners that sort through and understand the numbers to, for example, carry out comparative analysis,’ he says. ‘In short, metrics is becoming a profession, and we also need to take responsibility for the ongoing education of this group of professionals.’

Help is already at hand following the development of The Metrics Toolkit. Designed to help researchers and evaluators understand and use citations, web metrics and altmetrics responsibly when evaluating research, the resource was put together by Stacy Konkiel of Digital Science, Robin Champieux of Oregon Health and Science University, and Heather Coates of the IUPUI University Center for Digital Scholarship.

Crucially, the online resource is home to explanations of metrics, such as the h-index, and details how these measures should, and should not, be applied in evaluation scenarios. Examples of how to use metrics in, say, grant applications, are also provided.

‘This is designed to be a classic resource for metrics users finding their feet, but it’s been one of those developments that everyone has been talking about,’ says Taylor. ‘We saw this take place in the field of data preservation with its “data management 101” courses, and I think that metrics is now part of the same phenomenon.

‘This is a real example of the community coming together to collaborate and donate time to support the development of metrics professionals,’ he adds. ‘I do think that in the coming years we will see more of this, and perhaps have more practitioner-level degree qualifications being offered here, as every education institution is going to have [employees] working at this level.’

But as recommendations, manifestos and attitudes to metrics mature, is the scholarly community seeing the bigger scientific research picture? Just as ISI and Pendlebury believe there is room for looking beyond, say, the h-index and JIF, Allen thinks industry should move toward a metrics system that provides a wider view.

Historically, researchers have published their knowledge in journals, spawning the rise of journal-based metrics that are now instrumental to much scholarly decision-making. However, the advent of alternative outputs, such as datasets, blogs, white papers and policy documents published outside of journals, changes this.

‘We just don’t have to fixate on journal-based metrics any more,’ says Allen. ‘Yet many altmetrics indicators are generally still based on what’s published in journals... and we’ve lagged behind in developing an infrastructure that can really aggregate this [alternative] information.’

Still, as Allen says, many in the community are creating databases for a wider range of research outputs, while more and more pre-prints are published. ‘People are waking up to the fact that where there is a digital footprint [for a result or research], if we are savvy, we can start to build a system that looks at research output more holistically,’ she says. ‘And we also need to ask, how can we make sure that what’s good for science is also good for scientists?’ 

What about measuring reproducibility?

The ability to replicate the findings of a research publication is fundamental to the scientific method, and as Digital Science chief executive Daniel Hook recently stated: ‘If research is to remain trusted by the population who fund it, then researchers must ensure that their research can be reproduced by others.’

Indeed, in recent years, reproducibility issues have surfaced across several scientific journals, and as such, metrics to measure this all-important tenet of scientific research are emerging. Digital Science recently invested in two US-based start-ups that are developing systems relating to the reproducibility of scientific methods. Ripeta focuses on assessing the quality of reporting and robustness of a scientific method, and is developing tools that use machine learning and natural language processing algorithms to extract reproducibility elements from research articles.
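
As a purely hypothetical illustration of what extracting ‘reproducibility elements’ from an article’s text could look like, the rule-based sketch below flags a few common statements; Ripeta’s actual tools, as noted above, rely on machine learning and natural language processing rather than simple patterns like these.

```python
# Hypothetical, rule-based sketch of screening an article's text for
# reproducibility-related statements. This is illustrative only and is not
# Ripeta's approach, which uses machine learning and NLP.
import re

CHECKS = {
    "data_availability": r"data (are|is) available|data availability statement",
    "code_availability": r"code (is|are) available|source code|github\.com",
    "funding_statement": r"this work was (funded|supported) by",
}

def screen_article(text: str) -> dict:
    """Return which reproducibility-related statements appear in the text."""
    lowered = text.lower()
    return {name: bool(re.search(pattern, lowered))
            for name, pattern in CHECKS.items()}

sample = ("The data are available from the corresponding author. "
          "This work was supported by a hypothetical grant.")
print(screen_article(sample))
# {'data_availability': True, 'code_availability': False, 'funding_statement': True}
```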

Meanwhile, Gigantum is building a platform that allows researchers to keep track of the data objects in their research papers, so anyone can repeat their data processing and analysis.

‘We have lacked the technology to create elegant infrastructures to support reproducibility,’ highlights Hook. ‘But with Gigantum and Ripeta we see two technologies that cover completely different aspects of reproducibility, in a way that supports, rather than encumbers, researchers’ work.

‘While measurement can be important, establishing new behaviours around it can also be detrimental,’ he adds. ‘We need to be quite careful how we think about measurements in reproducibility.’

Links:

ISI Report: https://clarivate.com/g/profiles-not-metrics/ 
Metrics Toolkit: http://www.metrics-toolkit.org