Making decisions with confidence: the importance of editorial integrity



Nandita Quaderi, editor-in-chief of the Web of Science at Clarivate, explains what editorial integrity means for citation data and journal evaluation

The research landscape grows ever more complicated, year on year. According to the latest data from the 2018 STM report, some three million papers were published worldwide across 42,500 scholarly journals. The use of preprint servers has exploded during the COVID-19 pandemic. There are 1,059 open access mandates. Last, but by no means least, fraud in academic publishing, from plagiarism to image manipulation and citation distortion, has sadly grown more sophisticated. 

All of this makes it harder for the research community, which includes researchers, editors, librarians, institutions, societies, publishers and funders, to get the information they need to make better-informed decisions with confidence. This is particularly relevant when it comes to selecting journals and evaluating their performance. 

At the Web of Science, we apply three levels of curation to the huge volume of data we process, so that users can rely on the datasets and metrics we produce. First, a global team of in-house, publisher-neutral editors with deep category knowledge applies our rigorous evaluation process to select, and continuously curate, what goes into our journal collections. At a minimum, journals must pass our 24 quality criteria, which select for editorial rigour and best practice; journals that don’t pass are not indexed, even if they are highly cited. Second, our dedicated content team, experienced in journal entity management, curates the journal records and the data within them that are crucial for accurate journal citation attribution. Finally, our bibliometric experts safeguard confidence in our metrics by alerting the community where anomalies appear in the citation data and, if necessary, suppressing journals from the Web of Science Journal Citation Reports (JCR).

The JCR provides a thorough, multifaceted view of journal performance, showcasing the world’s highest-quality scientific and scholarly literature. 

Each journal profile in the JCR provides a rich array of metrics and descriptive data. This, of course, includes the Web of Science Journal Impact Factor™ (JIF), the Immediacy Index and the journal’s rank in category, to name just a few. This year we have introduced descriptive data that reveals the extent to which gold open access content contributes to a journal’s overall volume and citations. 
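For context, the JIF for a census year Y is, in essence, a ratio of recent citations to recent citable items:

\[
\mathrm{JIF}_Y = \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}},
\]

where \(C_Y(y)\) is the number of citations received in year \(Y\) by items the journal published in year \(y\), and \(N_y\) is the number of citable items (articles and reviews) it published in year \(y\).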

Although you might be accustomed to thinking of editorial integrity in terms of corrections and retractions of academic articles, we are laser-focused on applying editorial rigour when it comes to selecting our content and creating our citation data: you cannot make decisions with confidence if you believe the system has been 'gamed', whether inadvertently or intentionally. Citation distortion harms the scholarly record because it gives an inaccurate picture of the connections between articles and their contribution to the scholarly network. We monitor, and suppress the data for, journals that demonstrate anomalous citation behaviour, including where there is evidence of excessive journal self-citation or citation stacking. We do not assume motive; our approach is to investigate the data, engage with publishers, and present the results to the community in the annual JCR release.
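To illustrate the principle (this is not Clarivate's actual methodology, whose parameters are not published in this form), a self-citation screen might compute, for each journal, the fraction of the citations it receives that originate from its own pages, and flag outliers for investigation. A minimal sketch, with invented data and an illustrative threshold:

```python
from collections import Counter

# Hypothetical citation counts: (citing_journal, cited_journal) -> citations.
# Figures are invented for illustration; they are not real JCR data.
citations = Counter({
    ("J. Alpha", "J. Alpha"): 480,  # journal self-citations
    ("J. Alpha", "J. Beta"): 120,
    ("J. Beta", "J. Alpha"): 300,
    ("J. Beta", "J. Beta"): 40,
})

SELF_CITATION_THRESHOLD = 0.50  # illustrative cut-off, not an actual JCR parameter

def self_citation_rate(journal: str) -> float:
    """Fraction of all citations received by `journal` that come from itself."""
    received = sum(n for (_, cited), n in citations.items() if cited == journal)
    return citations[(journal, journal)] / received if received else 0.0

for journal in sorted({cited for (_, cited) in citations}):
    rate = self_citation_rate(journal)
    note = "  <- anomalous; investigate before publishing metrics" if rate > SELF_CITATION_THRESHOLD else ""
    print(f"{journal}: self-citation rate {rate:.0%}{note}")
```

A real screen would, of course, operate on the full citation network and judge each journal against the norms of its category rather than a single fixed threshold.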

Working with our colleagues across the Institute for Scientific Information, we have updated the methodology and parameters we use to detect journal self-citation. This year, we have suppressed 33 journals (which represents just 0.27 per cent of the journals listed) to support the integrity of the Web of Science JCR metrics.

An ‘editorial expression of concern’ has also been issued for 15 journals where we identified one or more published items containing an unusually high number of citations that make a disproportionately large contribution to the journal’s JIF. This activity is not caught by our journal self-citation screen, because that screen is based on overall levels of journal self-citation and does not look directly at the degree to which self-citations are concentrated in the JIF numerator (a distinction the sketch below illustrates). We will continue to review content of this type, with the goal of developing additional screening for distortions of the citation network and the JIF. And we continue, at all times, to stress that the metrics contained within the JCR should be used in context, as part of a broader understanding of a journal’s profile, and never misused to assess researcher performance.
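To make the distinction concrete: rather than an overall rate, a concentration screen asks how much of the JIF numerator any single published item supplies. A hypothetical sketch, again with invented figures and an illustrative threshold:

```python
# Citations in the JIF census year to each item published in the two prior
# years (i.e. each item's contribution to the JIF numerator). Invented figures.
item_citations = {
    "review-001": 950,
    "article-002": 12,
    "article-003": 7,
    "article-004": 3,
}

CONCENTRATION_THRESHOLD = 0.25  # illustrative: flag items supplying >25% of the numerator

numerator = sum(item_citations.values())
for item, cites in item_citations.items():
    share = cites / numerator
    if share > CONCENTRATION_THRESHOLD:
        print(f"{item} supplies {share:.0%} of the JIF numerator "
              f"-> candidate for an editorial expression of concern")
```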

As the founder of the Web of Science, Dr. Eugene Garfield, said in his preface to the Journal Citation Reports in 1975: 'Like any other tool, the JCR cannot be used indiscriminately. It is a source of highly valuable information, but that information must be used within a total framework proper to the decision to be made, the hypothesis to be examined, and rarely in isolation without consideration of other factors, objective and subjective.'

Forty-five years later, we’re still working to provide the research community with the high-quality data and metrics it needs to make confident, data-driven decisions that will accelerate innovation and improve our collective futures.