Rethinking publishing

Siân Harris reports back from the UKSG meeting, held in Harrogate in April, on researchers’ expectations of the role of publishers

‘My wish is for an interaction with the “publisher” that doesn’t begin when the science ends,’ said Philip Bourne at the recent UKSG meeting in Harrogate, UK. ‘Formulating results as research papers loses a lot of valuable information. I want to keep papers but not lose the other things,’ he added.

Bourne is a computational biologist at the University of California San Diego. He is also an advocate of open access and is involved in maintaining an important biological data resource, the Protein Data Bank (PDB).

His vision, which he set out in his conference talk, is to bring publishers into the research process earlier, perhaps at the experimental stage, so that more use can be made of the valuable insight generated during the research process.

Currently, information collected during the experimentation stage of research is spread across a wide range of resources – such as slides, posters, data, lab notebooks, collaborations and journal clubs – and little of this is accessible or usable by others. ‘I confess that I am not good at managing data in my lab. Most intellectual memory is in email folders and that is not healthy,’ he explained. And a similar picture is true of much of the open software developed by researchers. ‘Software is open, but where is it?’ he pointed out. ‘As scientists we really should be doing a better job.’

Bourne also suggested that better use could be made of rich media to help capture these types of resources. ‘Publishers are including video at the moment but I’m not seeing it embedded where I need it,’ he said, suggesting that a video of an experiment is most useful alongside the experimental detail in the text of a paper.

‘The incentive is not there for authors to do this at the moment. Even if it got more citations, the barrier to do it is too high,’ he said. He believes that integrating rich media is more appealing for researchers to do at an early stage – for example, integrating video with posters – because at that stage researchers still want to promote their research.

In addition, he commented that the technological barriers do not have to be very high. ‘We pretty much have all the technology we need in our pockets with smartphones,’ he said. ‘You can record a podcast on your smartphone, use a sync file to synchronise this with your slides and load it into a Web 2.0 environment easily. And surveys have shown that users are not particularly concerned about the quality of the video recording.’

Another approach is to use Skype. Bourne gave the example of a researcher who chatted about her research with a friend via Skype, recorded the conversation and then posted the recording online.

Bourne would also like to see better use made of research data, and sees this as something publishers can help with. ‘We are starting to have interaction between publishers and database providers, for example through ScienceDirect, but it comes back as raw data. I want to have the metadata used to create the figure in the paper too,’ he said.

There are several potential benefits of better access to data and other information that is generated during the research process, he continued. Semantic linking of data can lead to new knowledge discovery. For example, it can be used to identify when two genes are mentioned in the same paper and to map the overlap between research areas.
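
Bourne did not go into implementation detail, but the gene example can be pictured as a simple co-occurrence scan over the literature. The following minimal sketch (in Python, with made-up gene names and abstracts, and naive substring matching rather than real text mining or semantic markup) illustrates the idea of surfacing pairs of genes mentioned in the same paper:

from itertools import combinations
from collections import Counter

# Hypothetical gene names and paper abstracts, purely for illustration
gene_names = {"BRCA1", "TP53", "EGFR"}
abstracts = {
    "paper-1": "Mutations in BRCA1 and TP53 were observed across the cohort.",
    "paper-2": "EGFR and TP53 status predicted response to treatment.",
    "paper-3": "EGFR expression alone correlated with outcome.",
}

# Count how often each pair of genes is mentioned in the same abstract
co_mentions = Counter()
for paper_id, text in abstracts.items():
    found = {gene for gene in gene_names if gene in text}
    for pair in combinations(sorted(found), 2):
        co_mentions[pair] += 1

for (gene_a, gene_b), count in co_mentions.most_common():
    print(f"{gene_a} and {gene_b} co-occur in {count} paper(s)")

A real system would work with structured, semantically annotated records rather than raw text; the toy example only illustrates the co-occurrence idea that underpins this kind of knowledge discovery.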

More details about experimental processes can also help with reproducibility. ‘When we publish papers we believe the research is reproducible. However, when we come to repeat an experiment, we usually need to go back to the author for more information or to get the data – which might not be there anymore,’ he noted.

And then there is a preservation issue that could benefit from publisher involvement: ‘Most of the preservation activity has gone into a few big datasets but there’s a huge long tail of data,’ he said.

Negative results

There is also the issue of negative data – the results of experiments that didn’t work, or that didn’t produce what was expected because of impurities, for example. As Bourne said, ‘Negative data is very important but for the most part it’s lost.’

Cameron Neylon, of the Science and Technology Facilities Council, echoed this point in his talk, noting that there are plenty of examples of papers in peer-reviewed journals that were published and then removed from the publishers’ sites when their conclusions were disproved. He sees this as a failure of the idea of publishers being the definitive filter of information. He also highlighted the concern that ‘wrong’ papers that are removed from journal sites could still contain useful information to inform further research.

Neylon’s talk argued the case for removing filters. He challenged the much-cited remark by Clay Shirky about information overload and the need for good filters. ‘I disagree,’ he said. ‘Filters block stuff. They block light, limiting the ability to explore.’ He questioned whether the filters are right for each user: ‘On Google you can set the filter you want; in journals that filter is hidden. Ranganathan’s third law of libraries says “every book its reader”; in other words, potentially every book is useful to someone. When you talk to people about the scope of their resources they get very uncomfortable about what they might be missing. The web is awash with junk but the web is also a pretty good place to find information.’

He believes that the role of the ‘gatekeeper’, which is played by publishers, was needed in a broadcast world. ‘Somebody had to choose what to collect,’ he explained. ‘Now though, content comes from all directions and that’s a good thing.’

Search goes some way to solving the personalised filter challenge, he continued, but he sees search as a blunt instrument. Instead, he championed the role of tools like FriendFeed, where people recommend resources to each other. ‘I can use other people to help me discover the information that is useful to me,’ he explained. ‘This helps me discover and enables social annotation.’

‘It’s not about centralising, nor about providing filters anymore. We are all the gatekeepers,’ he summarised. ‘Don’t think about filtering, enable discovery.’

Social tools

The use of social media was highlighted in another talk, by Bill Russell of Emerald, who presented results of a recent global survey, conducted with UCL in London, into the social media preferences of 2,414 researchers. The study found, he said, quite a big gap between use and awareness. Perhaps surprisingly, the most used web-based social tools were not Facebook or any other social networking site but Skype, followed by Wikipedia and Google Docs – and 63.4 per cent of researchers active in social media use just one or two types of social tool.

There were plenty of reasons for researchers to use social tools, according to the study, but the most popular were identifying research opportunities, finding collaborators, disseminating findings and communicating internationally.

However, ‘academic journals are where people want to disseminate their content because there it will be most read,’ said Russell, who added that social media outlets were seen not as competition for traditional ways of publicising research but as a way of amplifying dissemination.

This observation echoes findings from a recent study by the Research Information Network (RIN). According to Michael Jubb of RIN, ‘Researchers use tools like Google Docs in the research process but, while they say they use social tools for dissemination, they do not actually do so.’

Philip Bourne agreed: ‘As a researcher I use social tools for collaboration but not for dissemination. I believe this has to change because the process is broken. We need more quantitative measures’ – a view echoed by Cameron Neylon, who concluded, ‘Researchers very much trust traditional processes. The problem is that peer review doesn’t actually work. We need to develop systems to evaluate information better.’