Spotlight on regional research assessment

Clarivate, a global leader in providing trusted information and insights to accelerate the pace of innovation, today released an in-depth look at formal regional research assessment, co-authored by the Institute for Scientific Information at Clarivate together with esteemed industry partners: Kate Williams, University of Melbourne; Jonathan Grant, Different Angles; Lutz Bornmann, Max Planck Institute; and Martin Szomszor, Electric Data Solutions.

'Research assessment: origins, evolution, outcomes' examines the origins of research assessment and how it works in different regions, via the approaches of Australia, Canada, Germany, Hong Kong, New Zealand and the United Kingdom. It also considers the future of research assessment exercises and examines the potential of artificial intelligence (AI) to replace traditional peer review. Despite differences in their approaches to research assessment, variation in their links to funding incentives and disparities in timing between similar systems, all the regions examined improved in comparative research performance, as measured by bibliometric indicators. There is, however, no clear universal verdict on whether research assessment is a necessary or facilitating agent.

Jonathan Adams, Chief Scientist at the Institute for Scientific Information at Clarivate, explains: “Research assessment has had major effects on institutional structures. It has unquestionably had pervasive effects on researcher behaviour: demonstrable in the UK and widely reported elsewhere. The most important feature of any assessment system should arguably be the extent to which it attracts and retains the confidence of the researchers.”

The global research report, 'Research assessment: origins, evolution, outcomes' finds that: 

  • Australia has a comprehensive research assessment system, seeking to measure both academic impact and wider societal benefit. Australian methodology distinguishes engagement from impact, in contrast to other research impact evaluations around the world, such as the United Kingdom's REF; however, the exercise does not directly influence research funding and may be unconnected to citation-indexed research performance (Kate Williams, University of Melbourne).
  • Canada has a long history and culture of integrating knowledge mobilisation and evaluation across the research life cycle and focuses on 'knowledge mobilisation' in specific research areas rather than assessing general research outcomes. (Jonathan Grant, Different Angles). 
  • Germany has promoted its research status using 'Excellence Initiative' block funding to research organisations without regular nationwide evaluations (Lutz Bornmann, Max Planck Institute). 
  • While Hong Kong’s research assessment system is similar to the UK model, it draws on a distinctive conception of scholarship and on socio-economic benefit as well as excellence. 
  • The introduction of New Zealand’s performance-based research fund can be associated with a marked improvement in its internationally comparative research performance.
  • The United Kingdom set the first model for regular research assessment, which has had pervasive effects on institutional management and on researcher behaviour. 

There have always been demands for technical solutions to reduce perceived assessment bureaucracy. The report acknowledges that AI has a profound impact on research, but machine-learning solutions to assessment burdens may propagate existing biases. Models of assessment outcomes reveal that apparently important predictors may link to factors unrelated to research impact.

Martin Szomszor, founder of Electric Data Solutions, adds: “What this debate has made clear is that both the research system and the data we collect about it capture many forms of prejudice relating to gender, ethnicity, nationality, sexuality, age and more. Without proper consideration of these, machine-learning solutions will only propagate these existing biases. This is a problem that is already familiar to those who make use of bibliometric indicators and an issue that has been at the forefront of the responsible metrics agenda.”

Jonathan Adams concludes: “Our report demonstrates there are many challenges common to many regions. Research is a very long game, so assessment stability has great merit and, whatever the criticisms, the RAE/REF remains much as it was thirty years ago, with impact case studies added on.”