
Web evolution changes information access


The changes that the internet has brought to libraries have been dramatic. Now, the web itself is changing. Sian Harris reports back from Online Information about the implications of this for libraries and for the information they provide

‘Libraries can’t manage knowledge but they can manage the context in which users access it,’ said Stephen Abram, vice-president of innovation at Sirsi Dynix and chief strategist of the Sirsi Dynix Institute. He was speaking at the Online Information conference held recently in London, where one of the biggest topics was the so-called Web 2.0 phenomenon.

Over the past year or so, ideas that were initially the preserve of techy web personnel have crept into mainstream culture. Social networking via websites such as Facebook and MySpace is now a common way for many people to keep in touch with friends, make new friends and share interests. Blogs, where people post news and comments for others to read, and wikis, where people collectively build up resources such as the well-known collaborative online encyclopaedia Wikipedia, are also very popular Web 2.0 ideas.

Such technologies seem far removed from the traditional idea of libraries, especially research libraries, where people go quietly to the journal article or book they want, find the information they need and then leave again, quietly. In this traditional stereotype the only interactions going on might be to ask the librarian for help in finding a resource. What’s more, moving the information online seemed initially to have reduced interactions between people still further by enabling users to steer clear of the physical library building entirely.

Such were the concerns about the perceived role of libraries a few years ago but all this could be changing with the advent of a new buzzword: Library 2.0. As Abram pointed out to delegates, ‘Relationships trump content every time.’ He went on to explain the benefits of libraries aligning themselves with readers to create an effective user experience. And libraries are starting to turn to Web 2.0 ideas, such as social networking, to do just that. ‘Web 2.0 helps us to solve real questions of life,’ Abram pointed out. ‘We are in a period of extreme ambiguity. If we don’t participate in inventing this it will happen to us anyway. We should support the expeditionary librarians rather than criticise them when they experiment with new ways of working such as using [the virtual-reality environment] Second Life.’

For Abram, though, it is important to focus on the big picture, rather than getting bogged down in listing what technology is involved in any website. Indeed, he believes that it is important to make integration of new technology as seamless as possible for the user. And standards will play a vital role in this. ‘If content isn’t in XML then it can’t be viewed on a portable device. It has to be format-agnostic,’ he said. ‘And when are we going to stop building 400 repositories in 450 different standards?’ Using standards makes it easier for library content to be integrated with people’s own space, which is one of the keys to personalising users’ experiences.

One example where such personalisation is already possible is in the public libraries of six municipalities in Sweden. ‘The web is the information hub of the library,’ explained Lars Eriksson, project manager and technical adviser for Sweden’s Minabibliotek.se project. This new website, which was demonstrated to delegates at Online Information, integrates the catalogues of all its member libraries with a single library website. The site contains information about library events, tips about books and their rankings, and where books can be borrowed.

But the integration goes further than this: it also allows users to tag books, share their tags, discuss books and form a virtual community. Users can put pictures on their own pages and make lists. They can also make friends and connect to them and join groups such as reading groups.

‘We’ve created a Facebook for the library,’ said Eriksson. ‘I’m very excited about this community and what we can do with it.’ Despite the presence of other well-established social-networking sites, and the fact that users may not expect such a service from a library, he believes that library sites do have an important role to play in providing such interactions. After all, he pointed out, Facebook connections are usually with people you already know in your community, and the library is already located within its community so has a geographical advantage in building such links.

Such a big project does require significant resources, however. Minabibliotek.se has funding from the EU and was two years in development. It employs four people in the web administration group and all the library staff must be involved in the project for at least one hour per week. Eriksson pointed out that libraries wishing to emulate this project do not have to do everything at once though.

‘Take it one step at a time. Build a blog or wiki and see how it goes,’ he advised.

User behaviour

Conference speakers also had other insights to offer librarians developing their services. Carol Tenopir, who is director of research at the University of Tennessee in the USA, reported some trends that she and colleagues observed in recent research into user behaviour and how it differs depending on the age or career stage of the users. For example, the research revealed that faculty members over the age of 40 are more likely to read from the library, and that older researchers look at fewer older articles, perhaps because they already read them when they were first published. The team also found that younger people do more searching and less browsing, while academic library users under the age of 30 are much less likely to choose print publications over electronic access. However, ‘everyone prints out a lot of paper to read. We’re still not saving trees,’ she revealed.

David Nicholas, director of SLAIS at University College London, UK, also gave an insight into user behaviour based on his team’s deep log analysis, in which all the web traffic data for a period of a year or more is collected automatically and analysed. One of the latest websites to get this treatment was OhioLINK, which is the website for the electronic resources of four universities in Ohio, USA.

One of the most alarming revelations from this research was that 32 per cent of library web sessions were less than three minutes in length. ‘It is not much more than power browsing. I suspect that many of the downloads are never read,’ he commented.
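The article does not detail Nicholas’s deep log method, but its core computation — grouping timestamped page hits into user sessions and measuring how long each lasts — can be sketched roughly as below. The user IDs, timestamps and the 30-minute inactivity cut-off are all invented for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical, simplified log records: (user_id, timestamp) pairs, as a
# deep log analysis might extract them from a year of raw server logs.
raw_hits = [
    ("u1", "2007-11-01 09:00:00"),
    ("u1", "2007-11-01 09:01:30"),
    ("u1", "2007-11-01 09:02:10"),
    ("u2", "2007-11-01 10:00:00"),
    ("u2", "2007-11-01 10:06:00"),
]

def session_lengths(hits, gap_minutes=30):
    """Group each user's hits into sessions (a new session starts after
    gap_minutes of inactivity) and return the duration of each session."""
    by_user = {}
    for user, ts in hits:
        by_user.setdefault(user, []).append(
            datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"))
    gap = timedelta(minutes=gap_minutes)
    durations = []
    for times in by_user.values():
        times.sort()
        start = prev = times[0]
        for t in times[1:]:
            if t - prev > gap:          # inactivity gap: close the session
                durations.append(prev - start)
                start = t
            prev = t
        durations.append(prev - start)  # close the final session
    return durations

lengths = session_lengths(raw_hits)
short = sum(1 for d in lengths if d < timedelta(minutes=3))
print(f"{100 * short / len(lengths):.0f}% of sessions under three minutes")
```

In a real study the session-gap threshold materially changes the results, which is one reason log-based usage figures such as the 32 per cent quoted above need careful interpretation.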

Such observations mean that libraries aiming to build research and learning communities around their websites must get to know their users, and must show them clearly and quickly what is available and what it means for them, before those users move on to their next activity.

Publisher 2.0

The potential benefits of Web 2.0 ideas for research are not only being discussed by libraries. Scholarly publishers at Online Information were debating the same topic.

Jeff Lash, product director at Elsevier, gave an example of a professor with a PhD in mathematics whose blog on evolution theory is linked to a thousand times more than his papers on mathematics are cited. The reason for this, he suggested, was that this professor is in charge of his own blog on evolution theory, whereas publication of and access to his peer-reviewed articles is controlled by the publishers. ‘With linking on the web it is not about a small group of people deciding what gets published. Instead, everybody decides,’ Lash explained. In addition, anything published on a blog can be accessed straight away while the traditional publishing process typically takes about a year from submission – and longer from when the research was carried out.

Lash wasn’t, of course, arguing that blogs should replace traditional scholarly publishers such as his employer. However, he did suggest some ways that publishers could learn from new Web 2.0 models.

For example, the social news-gathering service Digg has inspired several STM clones, such as Macmillan and Nature’s Dissect Medicine. He also pointed out that information is becoming a commodity. ‘It is no longer valued for quantity,’ he explained. ‘The value is really around insight, aggregation and access. Aggregation can make information that was otherwise useless valuable, which saves time, energy and research.’

This is one of the reasons, believes Lash, why blogs have become so popular: it’s somebody else doing the work, an idea known as crowdsourcing. One of the most popular blogs for medicine is called Kevin, M.D., he pointed out. However, this blog does not contain any original content but simply aggregates other relevant information. And tools such as Yahoo! Pipes are now enabling users to do this for themselves.
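The aggregation idea behind tools like Yahoo! Pipes — merging items from several feeds into one stream — can be sketched in a few lines of Python using only the standard library. The two RSS snippets and their headlines below are invented stand-ins for feeds a Pipes-style tool would fetch over HTTP.

```python
import xml.etree.ElementTree as ET

# Two hypothetical RSS snippets standing in for separate source feeds.
feed_a = """<rss><channel>
  <item><title>New OA mandate announced</title><pubDate>2007-12-03</pubDate></item>
</channel></rss>"""
feed_b = """<rss><channel>
  <item><title>Library 2.0 pilot launches</title><pubDate>2007-12-05</pubDate></item>
  <item><title>Journal backfiles digitised</title><pubDate>2007-12-01</pubDate></item>
</channel></rss>"""

def aggregate(feeds):
    """Merge the items of several RSS feeds into one list, newest first."""
    items = []
    for xml in feeds:
        for item in ET.fromstring(xml).iter("item"):
            items.append((item.findtext("pubDate"), item.findtext("title")))
    # Sort by date string (ISO dates sort correctly as text), newest first.
    return [title for _, title in sorted(items, reverse=True)]

merged = aggregate([feed_a, feed_b])
print(merged[0])  # the newest item across both feeds
```

Real RSS dates use RFC 822 formatting rather than the ISO-style dates assumed here, so a production aggregator would parse them properly before sorting; the point is only that the aggregation itself is trivial once the feeds are machine-readable.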

‘We are moving from a traditional publisher-centred model to a user-created process. The challenge is how to marry these two models,’ he explained. Some of the ways Elsevier is experimenting with this include 2collab, which connects people with other people in the same area of research, and WiserWiki, which Lash described as somewhere in between a medical textbook and Wikipedia. ‘Everyone can edit the text freely, provided they are a registered physician,’ he explained.

Web 2.0 was one of the biggest topics at the Online Information event