What do you see as the biggest challenge in scholarly publishing today?
The internet and digitalisation have fundamentally changed the equation, in ways that we are only now beginning to understand and that will affect us for decades to come. All the pressures on the system go straight back to the internet: open access as a business model, rethinking the role of the library, the move towards greater consolidation. We’re having to come up with a new model in the digital space, and that’s the biggest challenge. Machine learning and artificial intelligence will drive these changes to the next level.
Whatever models we come up with, we have to take seriously the shift from a journal economy to an article economy, and that really has consequences. Open access has come to be seen as the predominant business model at the article level, capturing the imagination of a lot of parties, but there are still a lot of questions about whether the APC model is the only, or the best, model for that.
One of the solutions is interoperability, and people are beginning to see this as one of the ways forward: we have to work better together, we have to work smarter together, we have to work more interoperably together, because it’s what customers are demanding. People are used to going online and getting a unified interface. As a discovery platform, ScienceOpen is working at this juncture, so we are exploring new information architectures for our digital present and future.
What can the library sector do to help?
Libraries naturally think very much in terms of access, and they have been one of the biggest forces behind the shift towards open access – in many places starting or supporting open access journals and managing APCs. But there’s a strange schism in libraries: they promote open access, yet often don’t actively curate it. Libraries often struggle to say how much open access content is even produced in their institutions, and to ensure that open access content is included in their catalogues.
Institutions and libraries can play a really important role in guaranteeing quality. More and more we see highly cited articles showing up outside the top journals, and it becomes increasingly difficult to vet quality. For the last 200 years this has essentially been a publisher’s role, organised via peer review, but as we move into a more diffuse situation libraries can be instrumental in pulling together articles by their researchers, or within a certain topic, so that nothing gets missed and poor or predatory research is not included. I’d love to see libraries work closely with publishers and research communities to explore new modes of curation.
What can researchers do to help with the challenges of scholarly publishing?
Researchers can be more savvy about things like interoperability. They can get an ORCID iD; they can use Think Check Submit (https://thinkchecksubmit.org) to be smart about finding publishing outlets; they can be smart about the kind of metadata their potential publisher is depositing with CrossRef. They need to develop an awareness of the major frameworks of scholarly communication so that they don’t miss out on opportunities, because increasingly the most important readers of their work are computers.
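Checking what a publisher actually deposits is quite concrete. Below is a minimal sketch in Python that inspects a CrossRef-style JSON work record for interoperability-relevant fields (DOI, license, abstract, references, author ORCIDs). The field names follow the public CrossRef REST API, but the sample record here is purely illustrative, not a real deposit:

```python
import json

# Illustrative sample only: the structure mimics a CrossRef work record
# (as returned by the public CrossRef REST API), but the values are made up.
SAMPLE_WORK = {
    "DOI": "10.1234/example.5678",
    "title": ["An Example Article"],
    "author": [
        {"given": "Ada", "family": "Lovelace",
         "ORCID": "http://orcid.org/0000-0002-1825-0097"},
        {"given": "Charles", "family": "Babbage"},  # no ORCID deposited
    ],
    "license": [{"URL": "https://creativecommons.org/licenses/by/4.0/"}],
    # "abstract" and "reference" are deliberately absent from this record
}

def metadata_report(work):
    """Summarise which interoperability-relevant fields a record carries."""
    authors = work.get("author", [])
    return {
        "has_doi": "DOI" in work,
        "has_license": bool(work.get("license")),
        "has_abstract": "abstract" in work,
        "has_references": bool(work.get("reference")),
        "authors_with_orcid": sum(1 for a in authors if "ORCID" in a),
        "authors_total": len(authors),
    }

report = metadata_report(SAMPLE_WORK)
print(json.dumps(report, indent=2))
```

In practice a researcher could fetch a real record for one of a publisher’s recent DOIs and run the same check; gaps such as missing licenses or ORCIDs are exactly the metadata shortfalls the interview is pointing at.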
In some ways researchers feel like this is all lots of extra work, and they are being asked to open up in ways that can feel a little bit threatening. But it all serves the interoperability that is essential for a rapid flow of information on the internet. Researchers should see openness and interoperability less as helping out their competition and more as a matter of discoverability – of having more people find, access, and build upon their research.
What can publishers do to help overcome challenges in scholarly publishing?
Publishers have been working really hard, and have done the most to address some of these things – creating CrossRef and ORCID, and thinking about sharing and interoperable standards – even if they sometimes stand in the way with restrictive access policies.
Access management will also be one of the next really big areas in which publishers can work together. Researchers want a seamless way to get content, and funding bodies want to make sure that taxpayer-funded research actually gets into the hands of taxpayers. At the moment there has been relatively little experimentation in making a more seamless access experience for researchers, but I think that will change.
All the pieces that we’ve been working with up until now are going to be taken up a notch in the coming years with AI, with computers doing a lot of these things, like curation and quality control. But the basis for that is still the same: metadata is still extremely important. The Metadata2020 project is hugely important in this respect, helping publishers, researchers and institutions understand the importance of rich metadata and persistent identifiers for interoperability. Publishers will really need to start thinking about computers as one of the important consumers of their content, and about what the implications of that are for their business.
Stephanie Dawson is CEO at ScienceOpen