A global shift in priorities affects how publishers must think about the structure of academic research, says Martin Szomszor
Every stakeholder in the scholarly information supply chain has an interest in understanding the research landscape.
Governments and funders want to maximise the economic impact of their research strategies. Research institutions want to draw on their existing strengths and align their investments with funder priorities. Publishers are looking for the next emerging field that might support a journal or book or identify titles for consolidation. Researchers themselves need to know which fields are bearing fruit, and everybody wants to know which researchers and institutions are leading now and who the likely leaders of the future will be.
The challenge is that the way research is done is changing, creating something of a moving target for classification and assessment.
The traditional way to measure the importance of research has been through citations. The argument has been that citations indicate the degree to which other academics have made use of a piece of work, which ought to indicate its quality or impact.
Over the past decade or so, a more nuanced view has emerged of how to assess research. Within the broader community, a consensus is forming that quality, attention, and impact are different things, even if we can’t quite agree on what impact actually is. Endeavours such as HEFCE’s The Metric Tide report and the Leiden Manifesto demonstrate an appetite to avoid the mistakes of the past and to use metrics and indicators responsibly and in context to support strategic decision-making, rather than ceding judgment to a number.
There has been a lot of progress towards using non-citation-based indicators for research assessment. In the UK, as part of what’s known as the Impact Agenda, the Research Excellence Framework (REF) is conducted by the Higher Education Funding Council for England (HEFCE). It uses impact statements as a basis for awarding block grants to universities in the UK. Impact statements do not rely on citations or the publication of papers in prestigious journals to establish the value of research. Instead, they attempt to capture the economic and societal benefits of research in as direct a way as possible. In Australia, the Excellence in Research for Australia (ERA) incorporates a similar approach; in the Netherlands, there is the Standard Evaluation Protocol (SEP); and numerous other approaches are in various states of maturity worldwide.
The rise of the Impact Agenda in the UK, and similar movements globally, has added impetus to a structural change that is happening in global research. Researchers and institutions are increasingly focusing on specific problems that societies face (such as dementia prevention or global climate change) in order to be more impactful. As a result, interdisciplinary and multidisciplinary projects are gaining prominence, moving from what might have been an interesting side project to a central strategic objective. Adding to the complexity of the challenge is a blurring of the distinction between academic disciplines and research topics. A perfect example is neuroscience, which draws on cell biology, molecular biology, microscopy, electrophysiology and a host of other disciplines. At the same time, you can now study it as a specific degree at many institutions, and numerous journals support the field.
This change in research structure creates a challenge for classifying research for assessment and for identifying emerging research fields for strategic purposes, such as spotting a new field that would support a journal, a book, or even a new research centre.
Traditionally, research is thought of as fitting into a specific disciplinary category, such as physical chemistry or atomic physics, and exclusive taxonomies have been constructed to classify it, like the Australian and New Zealand Standard Research Classification (ANZSRC) Fields of Research (FoR) codes. When projects, and even research centres, are oriented around specific problems requiring a multidisciplinary approach, which category should they be put in? One answer might simply be to apply more than one code, but then the overriding theme of the research is lost, particularly if it’s a new topic of interest that is not already in a taxonomy. To put it another way, it’s hard to identify a new research topic for a research centre or journal title if everything has to be classified under old topic names.
For this reason, we must not only rethink how we assess research, but also how we classify it. Traditional taxonomies were very powerful and appropriate when research neatly fitted into particular disciplines that were reasonably static, but today, a more nuanced approach is needed. This new approach must account for disciplines, ideas, problems, techniques and themes on an equal footing. What is more, it must be flexible enough to allow for new ideas to emerge, rather than trying to impose a pre-existing structure.
Martin Szomszor is a consultant data scientist at Digital Science.