The challenge of measuring scholarly communication


There have been huge changes in scholarly communication in recent decades with significant implications for how it is measured. 

Not only has the digital transformation of communications enabled a rapid expansion in the amount of data that can be collected, but there have also been changes in the research itself. 

As Jonathan Adams, chief scientist at the Institute for Scientific Information (ISI), noted, there are now many more countries publishing in Anglophone journals; new journals, fields and subfields have emerged; and there is more international collaboration and a shift towards a North American research culture, with shorter journal articles focused on a single key element. All of this has implications for measuring research.

The big change, however, has been a widening in what people want to measure. There has been a move from research excellence in purely academic terms, towards research achievement in more societal and economic terms as well. 

As Tony Roche, CEO of Emerald Publishing, puts it: ‘Funders and policy makers are putting more than academic influence on the table. They want to see more from research than great research. Great research is absolutely critical and fundamental, but researchers also need to demonstrate how their work moves beyond the academy.’

Beyond bibliometrics and citation analysis

The desire to measure the impact of research beyond academia is perfectly understandable. Governments and taxpayers want to know that money is being spent wisely, and in the best way possible. 

The problem, however, as Adams explained, is that there are no good indicators for measuring societal impact analogous to citations, and the timescale for impact is much longer: ‘There really is a paucity of appropriate indicators, and part of that is the nature of research projects themselves. Each project is individual in the way in which it delivers outcomes. What research does conveniently is it gets reported by one academic to other academics in the form of papers, and those papers if used get referenced and cited, and so that’s been a very convenient currency for decades. 

‘Many policy makers have assumed that a similar currency can be found in other areas of research activity, but we don’t have the history of evaluation through those other routes that have established a proper grounding for what the indicators might be. If you consider the pipeline between research and its broader outcomes, then it can be a very long one, and very variable. When we look at the initial research associated with the impact case studies reported in the 2014 REF, some of that research was 20 years old.’

Of course, that is not to say that there have not been useful and interesting metric developments beyond traditional bibliometrics; it is just that they are a long way from providing the sort of robust indicators that governments, policy makers and institutions increasingly want for evaluative purposes. 

As Roche explained, there is no one-size-fits-all solution; instead, the right metrics need to be found for each particular research community: ‘Publishers are now reporting on speed of publication, usage of content, social media attention, and more recently a number of service providers and publishers are looking at policy implications and uptake of research. 

‘All of these things are useful, and as a sector we are supporting our customers, our authors, and our researchers with more tools and more services. The key point is that we need to work very closely with the research community itself. It’s all about creating the right resources for the right domains, because unlike the impact factor which is this generic one size fits all, a lot of these research metrics need to be honed in their appropriate domain. 

‘Unfortunately, there isn’t a simple formula to measure the reach, influence, mobilisation of knowledge and impact in society. The journey to impact is far more heterogeneous than something measured by an impact factor calculation.

‘We’re all looking for the same sort of solution that we got with bibliometrics, and I don’t think it exists. We need a far more nuanced and sophisticated approach using both quantitative and qualitative approaches. A combination of the classic quantitative measures, combined with storytelling, narratives around how research makes that journey to broader society. That can’t be done in a number.’ 

Unrealistic expectations

The problem, of course, is not just that there are things metrics cannot tell us, but that there are a lot of unrealistic expectations about what metrics should be able to show and how quickly solutions can be found. 

It is not only that governments want to find ways of improving the socio-economic return on investment from taxpayer-funded research; this is part of a wider shift in the use of metrics across society, with league tables promising to show how an ever-increasing number of different types of organisation compare. In such an environment, the notion that there may be limits to what can be meaningfully quantified, or to how quickly solutions can be found, is incomprehensible. 

Inevitably, this can also lead to a lack of investment, and to dismay when reliable figures do not come easily. As Adams noted: ‘There was an expectation, or wishful thinking, on the part of some people that given time and pressure on academics and others that some ready answers would be produced, but really there has been very little serious investment in developing indicators.

‘Government is in no position to think about how such indicators might be developed, and how they might vary, and what’s realistic and appropriate. It must come back to the research community itself, but they are not going to do it unless there’s engagement with the government side, and a dialogue with how this might work, what is significant, and what the benefits are to the community. But no such engagement exists. 

‘Instead we just have this hand waving from the government side, about “academics should be doing something about this, we’re looking for indicators, and people are expected to produce these things”, but that won’t wash. It’s got to be a much more engaged process before we get anywhere with this. It’s got to be recognised that there’s going to be differences, it’s not going to be one simple catchall indicator that gets everybody.’

Adams continued: ‘People look for simple quick answers, and there is an important issue to tackle here about what research investment actually delivers for the tax payer, and the only way of assessing that properly is to put time and effort and resources in to actually develop a proper picture of the landscape and how it works. There is no quick fix. Governments aren’t in place long enough to take that seriously.’

Of course, as soon as a metric is introduced, it can very quickly cease to be a useful metric. Time is needed to see how the research community responds to new metrics, not least because different institutions have very different budgets available for maximising any new metrics that might be introduced. 

As Roche noted: ‘If you take, for example, metrics and social reach and influence, the cynic might say the researchers with access to the best marketing budgets can optimise the chances of maximising reach. Does that really speak to the wider influence of the research, or does it speak to the better funded researchers in the global north at research intensive institutions having more resources that they can put at their disposal so they can help tell the story?’

Moving on from simple numbers

Although there are undoubtedly unrealistic expectations, there are nonetheless steps that can be taken, and are being taken, to gain a better understanding of scholarly communications. As its response to the numerous changes of the last thirty years has shown, the scholarly publishing community can be highly responsive to the needs of the research community. 

As Roche points out, while there are currently unrealistic expectations around research metrics, there is a brilliant opportunity, if the sector works together, to develop metrics in a much more sophisticated way: ‘Our whole mission is around equity and a fairer and more equitable environment so that all researchers can participate and generate impact, and create change as a result of their work, rather than just get published and then move on. 

‘We created an impact manifesto back in 2018 and the key thread of that is championing alternatives to traditional academic metrics and traditional rewards. Earlier this year we relaunched this manifesto to pledge six new commitments for change that will advance ongoing debates around academic culture, incentives and research evaluation, and to call for sector wide action through our “Are you in?” campaign. 

‘We signed DORA in 2018 and we support a number of initiatives that really promote fair, holistic evaluation. Initiatives like The Hidden REF, which recognises all research outputs, and the fact that far more people contribute than the people you see listed on a paper. We have an awards programme that really showcases and rewards innovative research that is not just publishing highly cited articles, but research that can effect positive change beyond the academy.

‘Our platform is designed to carry lots more than research papers, but the research community are still rewarded on the paper or the book. If they were to utilise video assets to generate uptake with particular stakeholder groups, that allowed application and real change, that’s where research metrics could get really interesting. Technology is part of that solution, but it only works if your communities engage with you on it. 

‘We’re also working with other publishers. It is a competitive environment, but we do need to work together because we are serving the same research community at the end of the day, so if we can help to develop standards in these areas, that’s a win for everybody and will make adoption by researchers easier.’

As more data is collected, and we more fully understand the complexities of scholarly communications, there is inevitably a need to find new ways to represent that data and complexity. 

As Adams notes: ‘There’s a shift away from looking at just tables of numbers, towards graphical analyses, which provide much more information about what’s happening. The only way to get people to appreciate the complexity of the research environment is to see a picture that involves multiple indicators in one go.’

Conclusion

The broadening from wanting to understand the academic impact of research to also wanting to understand its socio-economic impact is to be welcomed, but while we undoubtedly have access to more data than ever before, it is not clear that we are any closer to getting a true picture of scholarly communications. 

The temptation to overly reduce the complexity of the research ecosystem will always be there for the purposes of policy making, management decisions, or the increasingly ubiquitous ranking tables, but there are no quick fixes to the problem of measuring research impact. 

Meaningful metrics have to reflect the complicated and diverse nature of the research, and they will only emerge when all the different parties work together to discover what is important to their particular community. 

We are a long way from getting simple answers, but we are increasingly able to see the complexity of the problem. 
