Reviewing peer review

JISC’s executive secretary, Malcolm Read, comments on the recent UK report into peer review and the role of data in evaluating research quality

Last month I was asked to submit evidence to the UK parliament's science and technology committee for its report on peer review. Although most researchers agree with the principles of peer review, many feel there is room to improve how the process is implemented. JISC is already exploring additions to the current system, such as open peer review, and is funding universities to develop open-access academic journals compiled from other openly-available material.

The committee’s report, now available online, describes access to data as ‘fundamental’ for researchers to reproduce, verify and build on each other’s results. It highlights that reproducibility should be the gold standard that all peer reviewers and editors aim for when assessing a manuscript. This spirit of openness is something that JISC supports, through its work with the UK Research Councils.

But researchers will also be aware of the challenges of making their data sets available to reviewers. In the longer term there are technical and economic challenges too, especially as different types of data require different treatment. Much of the social data created through longitudinal studies makes sense only if the entire length of the study is available. Other data, such as environmental data, must be kept permanently available; you can't repeat an earthquake.

Keeping data like this available in perpetuity is expensive and could end up as a cost that universities across the world simply cannot afford. JISC is funding Dryad-UK, which mirrors a US initiative to help researchers control who sees their data, and when. Researchers can deposit their data in a secure environment, grant privileged access to reviewers, for example, and then make the data more widely available once the article is published.
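As an illustration only (this is a hypothetical sketch of that embargo lifecycle, not Dryad's actual interface), the access rules can be thought of as a small state machine: private on deposit, shared with named reviewers during peer review, and public on publication.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Visibility(Enum):
    PRIVATE = auto()  # only the depositing researcher
    REVIEW = auto()   # researcher plus named reviewers
    PUBLIC = auto()   # anyone, once the article is published

@dataclass
class Deposit:
    owner: str
    visibility: Visibility = Visibility.PRIVATE
    reviewers: set[str] = field(default_factory=set)

    def grant_review_access(self, reviewer: str) -> None:
        """Give a named reviewer privileged access during peer review."""
        self.reviewers.add(reviewer)
        self.visibility = Visibility.REVIEW

    def publish(self) -> None:
        """Open the data set to everyone once the article is published."""
        self.visibility = Visibility.PUBLIC

    def can_read(self, user: str) -> bool:
        if self.visibility is Visibility.PUBLIC:
            return True
        if self.visibility is Visibility.REVIEW:
            return user == self.owner or user in self.reviewers
        return user == self.owner

# A deposit starts private, is opened to a reviewer, then made public.
d = Deposit(owner="dr_smith")
d.grant_review_access("reviewer_2")
assert d.can_read("reviewer_2") and not d.can_read("public_reader")
d.publish()
assert d.can_read("public_reader")
```

The point of the design is that openness is staged rather than all-or-nothing: reviewers get the access they need before publication without the data being exposed prematurely.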

The committee's report rightly highlights that we need to encourage researchers to deposit their data rather than compel them. Researchers deserve credit and recognition for working in an open way. It's a complex situation, so a blanket mandate on open data might not be feasible, but we can create the right conditions to make it easier for researchers to make their data openly available.

However, in some areas of science huge amounts of data are produced by instruments and programs. For example, storing the mass outputs of Large Hadron Collider experiments indefinitely is unfeasible, so we have to make it possible for people to re-run an experiment instead. Understanding the data then requires knowing quite a lot about the program and the environment it ran in. As long as the researcher defines the input conditions and what the program is designed to do, you should be able to trust the outputs. And if we ensure that authors make clear the nature of the program they are running, and its algorithms, then people in the future should be able to re-run the program to regenerate the data. I'm hinting, of course, that beyond the software or statistical test is something much more fundamental: the accuracy of the data itself.
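To make that concrete, here is a minimal sketch (my own illustration, with hypothetical function names, not any specific repository's tooling) of recording the input conditions and environment of a deterministic run, so that the outputs can be regenerated and verified later rather than archived:

```python
import hashlib
import json
import platform
import random

def run_simulation(seed: int, n_samples: int) -> list[float]:
    """A stand-in for an expensive experiment: deterministic given its inputs."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n_samples)]

def provenance_record(seed: int, n_samples: int, outputs: list[float]) -> dict:
    """Capture what a future reviewer needs to re-run and verify the run."""
    return {
        "program": "run_simulation",                       # what was run
        "inputs": {"seed": seed, "n_samples": n_samples},  # input conditions
        "environment": {                                   # where it ran
            "python": platform.python_version(),
            "platform": platform.platform(),
        },
        # A digest of the outputs lets anyone confirm that a re-run matches,
        # without the archive having to store the outputs themselves.
        "output_sha256": hashlib.sha256(
            json.dumps(outputs).encode()
        ).hexdigest(),
    }

if __name__ == "__main__":
    outputs = run_simulation(seed=42, n_samples=1000)
    record = provenance_record(42, 1000, outputs)
    print(json.dumps(record, indent=2))

    # A reviewer re-runs with the recorded inputs and checks the digest.
    rerun = run_simulation(**record["inputs"])
    assert hashlib.sha256(
        json.dumps(rerun).encode()
    ).hexdigest() == record["output_sha256"]
```

The key design choice is that the record stores only the inputs, the environment and a digest of the outputs; anyone who re-runs the program under the same conditions can check their results against the digest, so the bulky outputs themselves never need to be kept.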

As part of the report's consultation process I was asked to comment on whether, in some cases, peer review might encourage a tendency towards conservative judgements among reviewers. But peer review in one form or another has underpinned research since before journals as we know them existed. As the report highlights, journal editors are well placed to find the right experts to review unusual research, so the process itself cuts against conservatism. Online publication also affords us a great deal more flexibility in how the peer-review process works. In the past, two different forms of assessment have happened as part of a single process: both the technical assessment and the impact assessment have taken place before publication of a research paper. But separating the two is important, because judging impact takes much longer than judging technical soundness, so it's encouraging that the committee's report recommends a pre-publication technical assessment.

Another criticism sometimes levelled at peer review is that it is a burden on academics, who have to do this work in their own time. But few researchers feel they have a nine-to-five existence anyway; in my experience, reviewing manuscripts outside a regular working day is part of the mentality of the research community. The situation would only become worrying if the time researchers spend on peer review grew disproportionately relative to the time spent on their own research. Greater transparency in the process might also help peer reviewers garner more recognition, by ensuring that an individual reviewer's work is known to their peers.

Malcolm Read is executive secretary of JISC