Fittest will survive in a semantic world

The survival of traditional publishers in a rapidly evolving environment will depend on their ability to identify what their customers most need, and to keep delivering it, writes Melinda Kenneway of TBI Communications

Little did the poor dodo know, as the first Europeans dropped anchor off the shores of Mauritius, that its days were seriously numbered. It happily went about its business, wondering who these new island visitors were, only towards the end perhaps realising the real threat they posed. But by then it was too late.

The potential imminent demise of the STM industry has been talked about for the past decade and more, but still it hasn’t happened. Not yet anyway. However, the world today is not what it was 10 years ago. There have been many new initiatives in recent months that perhaps begin to signal the real likelihood of major and wide-scale changes within our industry. New tools like Mendeley, for storing and sharing articles; SciVee, a video-sharing site for scientists; and services like FriendFeed, Ripple6 and WordPress make easy work of collaboration and self-publication. All these things create a very different world today. So, clearly, this is not a time for complacency for established publishers.

The lesson that evolution teaches us is that it is not enough just to be good at what we do currently. We have to be alert to what’s changing in our environment, and agile enough to adapt quickly to the new landscape.

This ability to adapt is critical, and yet surprisingly difficult to achieve. Established businesses are often at a disadvantage when it comes to being truly innovative. As Harvard business professor and author Clayton Christensen highlights in his book ‘The Innovator’s Dilemma’, traditional industries tend to be wiped out by disruptive technologies: innovations that improve products or services, but that come from a direction the market doesn’t expect. New organisations with new ideas, unburdened by existing business models and revenue streams to protect, can creep up overnight. They can eradicate whole industries, not because those industries are particularly stupid or insufficiently forward-looking, but because they can’t change far or fast enough to compete with a radical new approach. They’ve simply got too much invested in the status quo.

This is the dilemma for the STM industry. We are inevitably tied to our past. We know what has worked before and it remains reasonably profitable for us. The question is how much longer will this remain the case, and are we prepared for the potential speed of decline that may well be coming our way if we can’t adapt sufficiently to the new environment we find ourselves in? The changes that are most likely to finish us off will happen quickly, and probably from an unexpected direction. So we need to remain alert to that, and equip ourselves to be able to adapt rapidly and radically.

An evolving process

Trying to predict what the STM publishing industry might look like in 10 or even just five years’ time is difficult. However, in an age when just about anyone can publish, our role of selecting and making content available is likely to become obsolete quickly. Perhaps instead we should increasingly focus our efforts on helping researchers navigate the mass of content available, using our brands to signal quality, and developing tools that personalise and make sense of information.

Researchers have a clear and growing problem. The ability of the human brain to keep up with newly published information is limited: we can only find and digest so much in a day. And yet the amount of information relevant to any one individual is growing at an exponential rate. This is the problem that publishers should perhaps concentrate on helping to solve, and it is also the promise of the semantic web: using technology to draw out new knowledge from content, make it relevant to the interests of each individual, and help them share and build on that knowledge.
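
To make that promise more concrete, the semantic web works by attaching machine-readable statements, known as triples, to content, which software can then query on a reader’s behalf. The sketch below is purely illustrative, using the open-source rdflib library; the URIs, topics and relationships are invented for this example rather than drawn from any publisher’s actual data model.

```python
# Illustrative only: invented example.org URIs, not any publisher's
# actual data model. Requires the open-source rdflib package.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
g = Graph()

# Machine-readable statements ('triples') about two hypothetical articles
article1 = URIRef("http://example.org/articles/1")
article2 = URIRef("http://example.org/articles/2")
g.add((article1, RDF.type, EX.Article))
g.add((article1, EX.hasTopic, EX.Asthma))
g.add((article2, RDF.type, EX.Article))
g.add((article2, EX.hasTopic, EX.GeneTherapy))
g.add((article2, EX.cites, article1))

# Software acting for a reader interested in asthma can pull out
# everything on that topic, plus anything that cites it
query = """
    PREFIX ex: <http://example.org/>
    SELECT ?a WHERE {
        { ?a ex:hasTopic ex:Asthma }
        UNION
        { ?a ex:cites ?b . ?b ex:hasTopic ex:Asthma }
    }
"""
for row in g.query(query):
    print(row.a)  # prints both articles' URIs
```

Even this toy example shows the key shift: once connections are made explicit, relevance can be computed rather than browsed for.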

Many publishers are already pushing forward in this direction, standing back to look at the whole process of scholarly communications and re-imagining how they might fit into these evolving processes. The obvious starting point for most seems to be to attempt a replication of the success of social networking sites such as Facebook and LinkedIn. There has been a rush from publishers over the last couple of years to launch similar services for the scholarly community. Examples include doc2doc from BMJ Group, Nature Network from Nature Publishing Group and UniPHY from the American Institute of Physics.

The Breathing Space project aims to discover how semantic and networking technologies are influencing the usefulness of research content

There are mixed views on the likely uptake of these kinds of services. Some of the larger services have up to 200,000 registered users, but we still seem a long way from providing the ‘killer app’ with the productivity benefits that will be needed to attract researchers in their millions. So why isn’t the academic community biting our hand off for these services? Perhaps it’s a question of time and market maturity. Perhaps, though, it’s something more fundamental to do with the nature of scholarly communications.

What do researchers really need?

If we look at the way researchers work, we begin to see problems with the idea of collaboration. Networking is generally a periodic requirement, not a constant one. Some fields are fiercely competitive: in gene therapy, for example, researchers are racing to secure patents. For many researchers, too, sharing their early thought processes in public would not necessarily be a good or useful thing. That said, different communities may respond in different ways. There are striking examples of wide-scale collaboration in computer science and mathematics, disciplines where collaboration is often required to analyse large datasets and the ‘workings out’ are themselves a valuable output. So, for publishers, the ‘one size fits all’ model has certainly had its day. We need to offer services with major productivity benefits for the academic community if we are to tempt researchers away from the ‘good enough’ tools of today, and we must adjust what we offer to the precise needs of individual communities.

For publishers, there is a problem in knowing where to invest when it comes to creating a framework for communities through which scientific communication can take place. Very little information is available on how researchers currently use these networks and the value they derive from them. We need to do more research and analysis here to understand what researchers really need. This was the driver behind an initiative we have launched at TBI Communications to gather real data on how researchers use semantic technologies and community collaboration tools, and on the impact these have on the usage of content. Our experiment is called Project Breathing Space.

Evaluating usefulness

During 2009, TBI issued a press release seeking a society partner for an experiment to measure the real impact of semantic and networking technologies on the usefulness of research content. We had a huge amount of interest in the project and eventually selected a combined application from the European Respiratory Society (ERS) and the American Thoracic Society (ATS).

We chose to work with the ERS and ATS because together their publications represent some 30 per cent of their community’s cited output. This gives a critical mass of content, an important factor in making semantic technologies and community tools useful for a particular discipline. Together they also have a large and easily reachable researcher base of more than 24,000 members. We further partnered with Publishing Technology to provide the technology to support the functionality that we wanted to introduce.

Through this project we have taken back files of articles from four journals published by the ERS and ATS. This content has then been data mined for connections, looking at authors, topics, institutional affiliations, concepts and citation relationships. An interface has been developed that overlays this data, enabling users of the site to explore the content in a much more dynamic manner.
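
The project’s actual pipeline isn’t described here, but the shape of the task can be sketched in a few lines. A hypothetical first pass over minimal article records might build co-authorship, topic and citation indexes like this (all records and field names are invented; a real back file would arrive as structured XML with far richer metadata):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical minimal records standing in for a mined back file
articles = [
    {"id": "A1", "authors": ["Smith", "Jones"], "topics": ["asthma"],
     "cites": []},
    {"id": "A2", "authors": ["Jones", "Patel"], "topics": ["asthma", "copd"],
     "cites": ["A1"]},
    {"id": "A3", "authors": ["Patel"], "topics": ["copd"], "cites": ["A2"]},
]

# Co-authorship: count how often each pair of authors appears together
coauthors = defaultdict(int)
for art in articles:
    for pair in combinations(sorted(art["authors"]), 2):
        coauthors[pair] += 1

# Topic index: which articles discuss each concept
by_topic = defaultdict(list)
for art in articles:
    for topic in art["topics"]:
        by_topic[topic].append(art["id"])

# Citation links, useful for 'related article' navigation
cited_by = defaultdict(list)
for art in articles:
    for target in art["cites"]:
        cited_by[target].append(art["id"])

print(dict(coauthors))  # {('Jones', 'Smith'): 1, ('Jones', 'Patel'): 1}
print(dict(by_topic))   # {'asthma': ['A1', 'A2'], 'copd': ['A2', 'A3']}
print(dict(cited_by))   # {'A1': ['A2'], 'A2': ['A3']}
```

It is exactly these kinds of indexes, layered behind an interface, that let a reader move from one article to its authors, its concepts and the papers that build on it.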

This trial site is currently being tested by the respiratory medicine community. We are running an accompanying research programme to assess researchers’ needs and expectations, and have begun to gather real data on the impact these technologies have on usage and customer satisfaction. The data from this experiment will be published in the summer. We hope this will stimulate more discussion and experimentation about how publishers can best serve the research community with such value-added services.

These kinds of services may or may not ensure publishers’ place in the world for the future. It’s unlikely that any one publisher will be successful in owning the entire research community. Instead each publisher has to find their niche and then ensure that the services they provide are completely interoperable with the many others that are likely to be developed and become successful. We will face continued intensive competition for the loyalty of each community and may find that we rise and fall in their favour much more quickly than we have done in the past. We will need to be agile and keep adapting if we are to stand the best chance of long-term survival.

We have the opportunity to transform our role from passive provider of content to one that inspires new ideas and helps accelerate knowledge. To do this we need to add value to information and integrate it seamlessly into researchers’ workflows. We should become more imaginative with the tools we develop to identify connections and themes across our content, and visualisation techniques can help us expand beyond the traditional boundaries of our printed past. By developing clever ‘intelligent agents’ (in the words of Tim Berners-Lee) we can help individual researchers identify and gain knowledge from the huge mass of content that continues to grow at an exponential rate. Analysis of that information gives us the opportunity to layer our brands selectively on top. This should ensure that we continue to be associated not just with content, but with high-quality content and with helping researchers derive much greater value from it.
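
As a hint of what such an agent might do at its very simplest, the sketch below ranks incoming articles against a researcher’s interest profile by weighted keyword overlap. The profile, weights and article records are all invented for illustration; a real agent would learn the profile from reading, sharing and citation behaviour rather than hard-coding it:

```python
# A deliberately simple 'agent': rank new articles by how well their
# keywords overlap a researcher's (invented) interest profile
interests = {"asthma": 3.0, "inflammation": 2.0, "genomics": 1.0}

new_articles = [
    {"title": "Airway inflammation in severe asthma",
     "keywords": {"asthma", "inflammation"}},
    {"title": "Genomic markers in COPD",
     "keywords": {"genomics", "copd"}},
    {"title": "Imaging techniques in cardiology",
     "keywords": {"imaging", "cardiology"}},
]

def relevance(article, profile):
    """Sum the profile weights of every matching keyword."""
    return sum(profile.get(k, 0.0) for k in article["keywords"])

# Present the best matches first; a production agent would also
# update the profile from what the researcher actually reads
for art in sorted(new_articles, key=lambda a: relevance(a, interests),
                  reverse=True):
    print(f"{relevance(art, interests):4.1f}  {art['title']}")
```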

Melinda Kenneway is director of TBI Communications