Evolving to protect the integrity of the scholarly record

Nandita Quaderi celebrates 50 years of Journal Citation Reports, and digs deep into the newly launched 2025 edition
1975 was a big year for technology and culture. Microsoft was founded; Jaws was the highest-grossing film at the box office; and the first digital camera was invented. That year also saw the first edition of the Journal Citation Reports (JCR).
It’s fair to say that the world looked very different 50 years ago, as did scholarly publishing. That first edition of the JCR contained just a handful of metrics using data from only 2,630 journals.
Over the years the JCR has moved from print, to CD, to its current incarnation as a digital journal intelligence platform. Its comprehensive profiles contain a wealth of metrics, visualisations and descriptive data for 22,249 journals.
However, even 2,630 journals was more than the community could easily keep up with in the pre-digital age, and our founder Eugene Garfield, along with his colleague Irving Sher, recognised the importance of developing a metric to measure the impact of scholarly journals. The Journal Impact Factor (JIF) was created in the 1960s to help select journals for the Science Citation Index. After its inclusion in the JCR, the JIF – alongside other JCR data – was adopted by librarians to support collection management, by publishers to aid journal development, and by researchers to help decide where to publish.
A changing scholarly publishing landscape
In some areas, the misuse of bibliometrics has created a culture in which increasing the number of publications and citations has become a goal in itself, as quantity is rewarded over quality.
This has fuelled a rise in fraudulent behaviour and the proliferation of entities such as papermills that exploit the pressure to publish and be cited. This isn’t limited to so-called predatory journals; papermills also target reputable journals with compromised content.
This means that sadly, no journal can be completely safe from research integrity issues, and it is harder for research funders, librarians and researchers to know which journals they can trust for quality content.
The terms ‘high impact’ and ‘high quality’ are often used interchangeably, and while this may have been justifiable in the 1970s, these two terms are no longer synonymous; it is our view that high quality should now be associated with trustworthiness instead of high impact.
From impact to trust
As the landscape has changed, the Web of Science and JCR have introduced a series of new policies and practices to help keep compromised content out of our products, preserve the integrity of our metrics, and help the community identify which journals they can trust.
Recent developments include:
- 2018: we introduced more transparency around the Web of Science selection process, selection policies and criteria.
- 2021: we included AHCI and ESCI journals in the JCR and introduced the field-normalised Journal Citation Indicator (JCI) to allow responsible comparison of journals across disciplines, including the arts and humanities.
- 2023: we developed new AI tools to help our editors identify which indexed journals need to be prioritised for re-evaluation and possible de-listing, and introduced transparency around which journals are de-listed.
- 2023: we extended the JIF to all journals that have passed our quality criteria and are indexed in the Web of Science Core Collection. In doing so we evolved the JIF from a journal-level marker of scholarly impact in the sciences and social sciences to a marker of trust and scholarly impact across all disciplines.
- 2024: we introduced new unified rankings across our science and social science subject categories, sending another strong signal that journal quality should be equated with trustworthiness rather than high citation impact.
Balancing innovation and research integrity
Traditionally, peer review is followed by an editorial decision to accept or reject a manuscript for publication – only manuscripts that are ‘validated’ by peer review are published. Our long-standing policy is to index journals following this model from cover-to-cover, enabling us to calculate journal-level metrics.
However, new models have emerged where all content that has been subject to peer review is published alongside the peer review reports, even if reviewers express serious concerns regarding the validity of the content. We welcome experimentation, but we need to balance our support of innovation with our commitment to providing our users with trustworthy content and reliable metrics.
We have therefore introduced a new policy – itself an innovation – to partially index journals that de-couple publication from peer review. For these journals, evaluation and coverage (subject to passing our quality criteria) will be limited to the subset of published articles where the content has been validated by peer review.
However, it would be irresponsible to attempt impact evaluation, or provide journal-level metrics, based on a subset of a journal’s publications. As a result, partially indexed journals are not eligible for impact evaluation – and therefore not eligible for coverage in SCIE, SSCI or AHCI – and do not receive a JIF or any other journal-level citation metrics.
Treating retractions with caution
Articles are retracted for many reasons – ranging from honest mistakes to intentional manipulation – but whatever the reason, retractions play a crucial role in maintaining the integrity of the scholarly record and should be applauded and encouraged. They serve as a mechanism for self-correction, ensuring that erroneous or fraudulent results or conclusions do not mislead the public, policymakers or the scientific community, or create an unsound foundation for future behaviour, policies and discoveries. But citations to and from retracted content naturally need to be treated with caution.
Marking the 50th year of the JCR, we continue our commitment to maintaining the integrity of our metrics by introducing a small but important policy change.
This year’s release excludes citations to and from retracted content when calculating the JIF numerator, ensuring that citations from retracted articles do not contribute to the numerical value of the JIF. Retracted articles will still be included in the article count (JIF denominator), maintaining transparency and accountability.
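To illustrate the change, here is a minimal sketch based on the standard two-year JIF definition, with this year's adjustment applied; the precise counting rules and data curation behind the published figures are Clarivate's own:

\[
\mathrm{JIF}_{Y} \;=\;
\frac{\text{citations received in year } Y \text{ by items published in } Y\!-\!1 \text{ and } Y\!-\!2,\ \text{excluding citations to or from retracted content}}
     {\text{citable items published in } Y\!-\!1 \text{ and } Y\!-\!2,\ \text{with retracted articles still counted}}
\]

Keeping retracted articles in the denominator means a journal remains accountable for everything it has published, while the numerator no longer benefits from citations linked to retracted content.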
Because the current volume of retractions is low in comparison to the overall volume of publication output, this change affects the JIF of only 1% of journals. However, the increasing number of retractions prompted us to act early and introduce this policy before citations associated with retracted content could have more widespread effects.
Looking forward
The evolution of the Journal Citation Reports over the past 50 years reflects the dynamic nature of research and academic publishing, and all members of the scholarly community have a role to play in safeguarding research integrity while fostering a culture of trust and innovation in academic publishing.
As we move forward, we’re doing our part to uphold the integrity of the scholarly record through rigorous selection and data curation, embracing transparency and innovating to accommodate new publication models.
Nandita Quaderi is Senior Vice President and Editor-in-Chief at Clarivate Web of Science