Reinventing journal publishing

The internet has already shaken up the traditional way that information is found and viewed, but far more radical changes are possible in the way that research is published. John Smith, a librarian at the UK's University of Kent, argues the case for a completely new model.

A few years ago I proposed a new model for academic journal publishing, which I called the Deconstructed Journal (DJ) model. The DJ model proposes that the traditional centralised academic journal publishing model, based around the publisher, could be replaced with a decentralised model using cooperating, independent agents carrying out the necessary activities without the need for a central publisher. Since then, developments in academic publishing, and information dissemination in general, have moved this idea from possible to probable and, maybe, even to inevitable.

The DJ model is based on the realisation that the traditional academic journal is not an end in itself but only a means to an end, and that there can be other ways of achieving this end. In the case of the academic journal, this end is the provision of a range of activities that support academic communication and the research process.

Journals provide five main activities. The first is ensuring the quality of an article's content through the refereeing process. The second is quality control of presentation: improving the readability and appearance of the published article. The third is recognising the value of the work through the selection of the article for publication. The fourth, making the article available to others, is the core idea of publishing. The fifth is ensuring that those who might be interested are aware of a new item; publishers do this through various forms of marketing.

Currently these five tasks are undertaken and coordinated by publishers, but they do not have to be. Table 1 (below) summarises who carries out these activities in the traditional journal model and how they could be done in the proposed DJ model.

The DJ model is not a replacement for the individual journal, but a replacement for the publishing model within which traditional journals operate. In order for the DJ model to exist, there need only be agents carrying out these necessary activities and co-operating in the ways required by the model. There cannot be a 'Deconstructed Journal', although there can be overlay or virtual journals and individual articles within repositories or open-access journals.

Comparing the traditional and DJ models
Another way of comparing the traditional and the DJ models is to consider the processes an article passes through in each model, represented by Diagrams 1 and 2 (below).

  • Diagram 1 shows the traditional model, where the publisher is central to all activities. Diagram 2 illustrates how the DJ model provides a number of alternative paths by which the five publication stages are carried out. In both diagrams, the arrows indicate information or activity flows (or money where tagged with a £ sign). The circles containing QC indicate a quality control activity and QCA indicates a quality control of appearance activity. The dotted lines in the second diagram indicate alternative paths.


The traditional model is quite simple, as might be expected of something that has been honed over three centuries. The author submits the article to the journal, where it is judged for subject appropriateness and initial quality. If it passes these tests, it is sent to referees, and their feedback is passed back to the author until the required content quality is achieved. The article is then sub-edited for appearance and finally published in a particular issue. The reader gains access to the issue after paying a subscription (or after the reader's institution pays one). This payment covers the cost of the publication phase and of organising the refereeing (quality control) phase, though not the cost of the referees' time, which is given free. The reader is guided to the issue via indexes, references from other articles, and so on. The publisher also markets the journal to the community it relates to.

The DJ model is more complex but more flexible. The author places the article in a repository (institutional or subject-based) and informs the certification agency (CA) that it is available (or emails the article directly to the CA). The CA makes the same subject and initial quality assessments as the editor or editorial board in the traditional model, and passes a copy on to the referees - or simply sends them a pointer to the article in the repository (much as conference submissions are often refereed today). The referees operate as in the traditional model, with a feedback loop to the author to improve the content quality. The author (or the author's institution) may also carry out sub-editing during this stage to improve appearance.

Once the referees are satisfied, the CA arranges sub-editing for appearance if this has not already been done and then affixes a Seal of Approval (SoA), which is a digital signature or similar, to the final version. This is then sent back to the author or deposited in the repository. This SoA guarantees that the article is the one approved by the CA.
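The mechanics of the SoA can be illustrated with a toy example. The sketch below is not the scheme any CA actually uses: it employs textbook RSA with tiny, insecure parameters, and the article text is invented, purely to show the shape of the idea - the CA signs a hash of the approved article with its private key, and anyone holding the CA's public key can check that a copy of the article is the certified version.

```python
import hashlib

# Toy RSA key pair for the certification agency (CA). These tiny numbers
# are for illustration only; a real SoA would use a proper signature
# scheme (e.g. RSA-2048 or Ed25519) via a vetted cryptography library.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (kept secret by the CA)

def article_digest(article: bytes) -> int:
    """Hash the final approved text and reduce it into the toy key space."""
    return int.from_bytes(hashlib.sha256(article).digest(), "big") % n

def affix_soa(article: bytes) -> int:
    """CA side: sign the digest with the private key -> Seal of Approval."""
    return pow(article_digest(article), d, n)

def verify_soa(article: bytes, soa: int) -> bool:
    """Reader side: check the seal using only the CA's public key (n, e)."""
    return pow(soa, e, n) == article_digest(article)

approved = b"Final refereed text of the article."
seal = affix_soa(approved)
assert verify_soa(approved, seal)              # the certified copy checks out
assert not verify_soa(b"Tampered text", seal)  # any change breaks the seal
```

Because verification needs only the CA's public key, the seal can travel with the article into any repository. As the article notes, if a seal could simply be bought the signature would still verify, but the CA's reputation - its real asset - would be worthless.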

The CA does not publish the article and is not responsible for making it available to the reader. The author (or, more likely, the author's institution) pays for this quality control activity. A criticism raised against the 'author pays' model is that it is similar to vanity publishing but, in this case, the CA makes nothing from the publishing aspect. If its SoA can be bought, then it has no value.

The repository is open-access, and freely available to the reader. The reader may be led to the article (there is no true issue or journal in this case) by indexing services (probably free, and based on metadata automatically harvested via the Open Archives Initiative (OAI) protocol), or by references from other articles, and so on.

Since we no longer have a journal, the subject filtering activity is carried out by virtual, or overlay, journals. The overlay journal saves the reader effort in locating relevant articles and so would be worth a small subscription. However, unlike the traditional journal, the overlay journal does not block access to the article if the reader does not subscribe, as it is only one of many paths to the article. Many overlay journals could point to the same article if its contents were relevant to many research areas. One of the many advantages of the DJ model is that articles are not hidden in a single-subject journal. In addition to the OAI-based services, the reader could consult general or specialised search engines, or subject directories like those provided by the Resource Discovery Network, or general directories like the Open Directory Project.
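The overlay idea can be made concrete with a minimal sketch - the repository identifiers, URLs, titles and journal names below are all invented for illustration. An overlay journal here is nothing more than a curated list of pointers into open repositories, so the same article can appear in several overlays without being copied anywhere.

```python
# Hypothetical data: articles live in open-access repositories and are
# identified by (invented) OAI-style identifiers.
articles = {
    "oai:repo.example.org:1234": {
        "title": "Flux pinning in thin films",
        "url": "https://repo.example.org/1234",
    },
    "oai:repo.example.org:5678": {
        "title": "A survey of metadata harvesting",
        "url": "https://repo.example.org/5678",
    },
}

# An overlay journal is just a curated list of pointers; no content is held.
overlay_journals = {
    "Virtual Journal of Materials": ["oai:repo.example.org:1234"],
    "Virtual Journal of Digital Libraries": [
        "oai:repo.example.org:1234",  # same article, second subject view
        "oai:repo.example.org:5678",
    ],
}

def table_of_contents(journal: str) -> list[str]:
    """List the titles an overlay journal currently points to."""
    return [articles[aid]["title"] for aid in overlay_journals[journal]]

print(table_of_contents("Virtual Journal of Digital Libraries"))
```

Note that removing an article from one overlay's list has no effect on its availability: the repository copy, and every other overlay pointing at it, are untouched - which is exactly why an overlay cannot block access the way a subscription journal can.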

Complexity is not necessarily bad
A common response to a direct comparison of the traditional and DJ models is that the DJ model is much more complex and this must be a 'bad thing'. A simple proof that this is not necessarily the case is to compare the way in which the car support industry operates now, versus the way it was a century ago (see Table 2 below).

Keeping a car in 1904 was much simpler than today, because you could get whatever you needed by going to the local garage. This was simple and convenient. However, it was also much more expensive and choices were very limited. Complexity has considerably reduced cost and increased choice. No one would suggest we should return to the 1904 model.

The DJ model is superior to the traditional model in many ways, and certainly more flexible. However, the traditional model works within its limitations, and certainly suits the existing publishers, so why should the academic publishing industry change? It will change because it has no choice. The existence of the internet has already revolutionised academic peer-to-peer communication and information dissemination. More ideas for communication and dissemination are continuously emerging in this new world and many of these could form part of the DJ model and/or be supportive to its mode of operation.

Elements of the DJ already exist
Many elements of the new model are already in place. Firstly, there are freely-accessible article collections. The DJ model needs collections of articles for the overlay journals to point to. These already exist in the form of subject repositories, institutional repositories and open-access journals, and powerful forces are encouraging their growth. The recent report from the UK's House of Commons Science and Technology Committee was strongly in favour of open-access publishing and institutional repositories. Also, many research funding agencies are encouraging the placing of research articles and reports in open-access journals and/or repositories.

Finding or marketing services are also emerging. The DJ model requires services that enable readers to find relevant articles. All the new repositories are OAI-compliant (and the older ones are becoming so), exposing the metadata that describes their contents to metadata harvesters. This harvested metadata then feeds repository indexing services such as OAIster.
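The harvesting step that services like OAIster rely on can be sketched briefly. The fragment below parses a hand-made, abbreviated OAI-PMH ListRecords response for Dublin Core titles; the XML namespaces are those of the real OAI-PMH 2.0 specification, but the repository URL and record content are invented.

```python
import xml.etree.ElementTree as ET

# Abbreviated, hand-made example of what an OAI-PMH 2.0 ListRecords
# response looks like; a real harvester would fetch this from a URL like
# https://repo.example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Flux pinning in thin films</dc:title>
          <dc:creator>A. Author</dc:creator>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>
"""

# Prefix-to-URI map used to query the namespaced elements.
NS = {"dc": "http://purl.org/dc/elements/1.1/"}

def harvest_titles(xml_text: str) -> list[str]:
    """Pull every Dublin Core title out of a ListRecords response."""
    root = ET.fromstring(xml_text)
    return [t.text for t in root.findall(".//dc:title", NS)]

print(harvest_titles(SAMPLE))  # prints ['Flux pinning in thin films']
```

An indexing service would run requests like this against many repositories on a schedule, merging the harvested records into one searchable pool - which is all an OAI-based finding service fundamentally is.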

In addition, overlay or virtual journals already exist. For example, the Virtual Journal of Applications of Superconductivity is one of a series published by the American Institute of Physics and the American Physical Society. Hybrid forms of publishing are also emerging, such as LiteratureCompass from Blackwell Publishing, which contains only review articles and links to research articles in mainstream journals. The external links could easily point to open-access articles and repositories, and other 'Compass' services are planned. Subject-based weblogs (blogs) have a great deal in common with virtual journals, and the software written to produce them could easily be used to produce quite advanced virtual journals at little cost. Small learned societies are ideally placed to be providers of overlay journal services. Google and the other search engines index the full text of anything they can find on the net, and Google Scholar uses similar techniques, along with citation linking, to identify scholarly material.

Elements of the DJ still required
The only major element not yet available is the independent CA. BioMed Central comes very close, since it provides the quality control feature and then makes its research articles open access. If it did not maintain its own servers (it also places copies in PubMed Central) but gave the certified articles back to the authors for publishing, it would be a pure CA. Ironically, the main contenders to be CAs are the existing publishers, since they already have the mechanisms in place to provide certification services. They would need to stop requiring copyright in exchange for publication and move to the 'author pays' model, but they would not have the overhead of running their own servers. The downside would be that their overall turnover, and therefore final profits, would suffer. Learned societies are also well placed to become CAs, since they already have the subject knowledge and academic reputation required. The start-up and operating costs are low, since the CA does not need to maintain online servers and archives.

The traditional journal model is a child of the paper world, and requires a centralised hierarchical management structure. It cannot promote or market an article or issue alone; it only makes sense to market a journal. The DJ model, on the other hand, is a true child of the net and could not exist in a less well-connected world. It works at the article level and all its structures are based around the article. Its distributed structure matches the underlying model of the net and it can evolve with the net provided that certain basics are observed. These are: quality control of content (and a mechanism to pay for this), free availability of the article, and filtering/finding services.

Further information

House of Commons Science and Technology Committee, 2004. Scientific Publications: Free for all? Tenth Report of Session 2003-04 (HC 399-I)
www.publications.parliament.uk/pa/cm200304/cmselect/cmsctech/399/399.pdf

Smith, J W T, 1999. The Deconstructed Journal - a new model for Academic Publishing. Learned Publishing 12 (2), pp 79-91
http://library.kent.ac.uk/library/papers/jwts/DJpaper.pdf

Smith, J W T, 2003. The Deconstructed Journal Revisited - a review of developments. ICCC/IFIP Conference on Electronic Publishing - ElPub03 - From Information to Knowledge, Universidade do Minho, Guimarães, Portugal, 25-28 June 2003, pp 2-88
http://library.kent.ac.uk/library/papers/jwts/d-jrevisited.htm

Smith, J W T, 2004. The Deconstructed (or Distributed) Journal - an emerging model? Online Information 2004 Conference, Olympia Grand Hall, London, 30 November-2 December 2004
http://library.kent.ac.uk/library/papers/jwts/DorDJEM.pdf

BioMed Central www.biomedcentral.com
Google Scholar scholar.google.com
LiteratureCompass (Blackwell Publishing) www.literature-compass.com
OAIster www.oaister.org
Open Archives Initiative (OAI) www.openarchives.org
Open Directory Project dmoz.org
Resource Discovery Network (RDN) www.rdn.ac.uk
Virtual Journal of Applications of Superconductivity vjsuper.org