OA interviews: Nicola Gulley, IOP Publishing


With the raft of policies and mandates that impact researchers and their institutions, Sian Harris asks a range of publishers and publishing-services companies about their approaches to open access.

Nicola Gulley, editorial director, IOP Publishing

IOP is involved in three pilots – at a country level in Austria, at a subject level with SCOAP3 and at an institution level with 21 institutions in the UK. Running them together enables us to see how we balance offsets for the authors’ institutions with the requirement to offset for all customers. With hybrids we have to be very clear to ensure that we are not double-dipping. The pilots also help us learn how libraries manage OA, how they track it, and who within institutions it applies to. This helps us find out how to solve practical challenges.

The UK institution pilot evolved from work with institutions, and we were approached by Research Libraries UK (RLUK). We are looking to offset 90 per cent of the APC against subscriptions for the institutions involved and 10 per cent against global subscription prices. We are trying to balance this.

We collect data for the people who are part of the pilots. SCOAP3 has a very specific requirement for what has to be fed back, and we have designed ways to capture this information.

Libraries have different processes in place in different universities. There’s a call from libraries for publishers to get engaged in the conversations and look at solutions.

At the moment our default licence for OA is CC BY, but we offer a choice. Around 15 to 18 per cent of our content is OA if you remove the conference series.

We are seeing an increase in interest in our OA hybrid model. In 2011 we published four such articles, in 2012 it was 23 and in 2013 it was 90 to 100.

One concern we hear from researchers is that if funders mandate one thing or another it takes away some of their flexibility. It is hard to monitor green uptake; there’s a myriad of repositories, and new ones are being built all the time.

The main take-home message of the PEER project [looking at repository use in Europe] was that authors weren’t posting to repositories. There need to be other mechanisms.

What researchers are really looking for is to get their article published in the best place. If there are different mandates then it gets complicated. Publishers need to ensure that we help authors to meet their funding mandates.

We’ve seen little interest in text and data mining (TDM). Our policy enables us to help people who want to do TDM. People can also mine non-OA content. We want to ensure that TDM doesn’t impact other users, so we ask people to notify us and we can allocate time to ensure that not too many are doing it at the same time.

We want to ensure that people doing TDM are not just rejected as robots and we want to understand the impact on our usage statistics.

I think there are some challenges in deciding what is meant by open data. How useful is raw data? Do you need to provide the data together with its context? What is the context? Software? What about experimental parameters?

I don’t think there is a clear way forward yet. Researchers don’t see that as publishers’ responsibility, but maybe in the future they will want us to link to it.