Why survey AI in academic publishing?

Michael Upshall is head of sales and business development at UNSILO

Michael Upshall reports on an ongoing survey focusing on artificial intelligence in academic publishing

With all the talk of AI in the media, you would think that this technology is by now ubiquitous across all aspects of our lives.

Academics, you might assume, would be among the first to make use of this new capability. The reality for academic publishing may be very different – but how different? How much do we actually know about AI usage in the academic workflow?

Is it now part of everyday operational reality, or reserved for small, blue-sky projects run by a research team somewhere in the bowels of the organisation? Is this technology accepted as a supplement to, or a replacement for, human decision-making? Can AI reduce the time to publication for articles without reducing quality?

Questions such as these are of interest to everyone in the academic community, but answers have not been easy to obtain, except on an anecdotal basis. That is why we are running a survey of publishers, authors and institutions: to get a clearer picture of AI use in academic publishing.

The survey (closing date 30 September 2019) has two goals: to measure take-up of AI in the academic publishing workflow, and to gauge attitudes towards it. The survey is anonymous, but we aim to capture some basic demographics and to suggest some implications of the findings. At this year’s Frankfurt Book Fair, in the academic publishing theatre in hall 4.2 (3 to 5pm), UNSILO will host two panel sessions, chaired by David Worlock; the panellists will be representatives from across the publishing spectrum.

Among the questions the survey raises are:
• In which areas of the publishing workflow is AI currently being used?
• What factors are preventing the adoption of AI?
• Which types of AI are in use?
• Where does the knowledge reside to manage the AI – in-house or externally?
• Are users of AI checking for bias and ethical issues?

Why the need for a survey?

Perhaps not surprisingly, many of those new to the use of AI in publishing – for example, to find peer reviewers automatically – are suspicious that a machine can carry out a task that takes humans considerable time to do well. There is a fear of loss of control: that human decision-making is being bypassed.

This is a perfectly understandable concern, although we often do not notice the extent to which AI is already being used in daily life around us. Every time we complete a form by hand, filling in one character or number in each box, an algorithm at the other end converts those hand-drawn characters into machine-readable text. The machines that carry out this task are not perfect (some ambiguity will always remain), but they are far more accurate at interpreting handwriting than humans, so humans need only be involved in the cases where the machine cannot determine what is meant.
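
The pattern is easy to express in code. The Python sketch below is illustrative only: the recognise() function is an invented stub standing in for a real handwriting-recognition model, and the confidence threshold is an assumed value. It shows the division of labour described above, with the machine handling anything it reads with high confidence and only the uncertain cases routed to a person.

```python
import random

CONFIDENCE_THRESHOLD = 0.95  # assumed cut-off; tuned per application in practice

def recognise(image):
    # Stand-in for a real handwriting-recognition model: returns its
    # best guess for the character plus a confidence score (0 to 1).
    return "7", random.uniform(0.8, 1.0)

def ask_human_to_read(image):
    # Placeholder for the manual review queue mentioned above.
    return "?"

def read_form_box(image):
    character, confidence = recognise(image)
    if confidence >= CONFIDENCE_THRESHOLD:
        return character             # machine reading accepted outright
    return ask_human_to_read(image)  # too ambiguous: ask a person

print(read_form_box(image=None))
```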

Many AI tools are based on a human-curated training set, and so the machine can never exceed the level of accuracy set by humans. Nonetheless, there are AI tools today that do not use a human training set, and so can achieve higher matching scores than humans. But it is vital to remember the human role in the process. When it comes to tasks like finding a peer reviewer, a good AI tool will suggest names but leave the final decision to the human editor: it is the editor’s task to decide which of those names to contact. In other words, an effective AI tool facilitates, rather than replaces, human decision-making.
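
A minimal Python sketch of that division of labour is below. The names, data and scoring function are all invented for illustration – a crude keyword-overlap score stands in for whatever relevance model a real reviewer-finding tool would use – but the shape is the point: the tool produces a ranked shortlist, and the decision remains the editor’s.

```python
def score_match(manuscript_keywords, candidate):
    # Crude stand-in for a real relevance model: keyword overlap.
    return len(manuscript_keywords & candidate["keywords"])

def suggest_reviewers(manuscript_keywords, candidates, top_n=3):
    """Rank candidates and return a shortlist; no decision is taken here."""
    ranked = sorted(candidates,
                    key=lambda c: score_match(manuscript_keywords, c),
                    reverse=True)
    return ranked[:top_n]

candidates = [
    {"name": "A. Researcher", "keywords": {"genomics", "crispr"}},
    {"name": "B. Scholar", "keywords": {"proteomics"}},
    {"name": "C. Scientist", "keywords": {"genomics", "sequencing"}},
]

shortlist = suggest_reviewers({"genomics", "crispr", "sequencing"}, candidates)
for candidate in shortlist:
    print(candidate["name"])  # the human editor chooses from this list
```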

Another challenge is measuring the success or otherwise of AI tools. If you can’t measure it, there is little point in implementing it, but it is not simple to measure the extent to which an AI-based process improves the quality of a service – for example, identifying related articles on a publisher’s website. For academic content collections, a reduction in the time users spend on the site after AI-based links are introduced might indicate greater satisfaction: users find the article they were looking for in less time, and then go offline to read it.
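
As an illustration of how simple such a measurement is to compute – and how careful the interpretation has to be – here is a Python sketch comparing median session times before and after related-article links are switched on. The figures are invented; real numbers would come from a publisher’s own analytics.

```python
from statistics import median

# Invented figures: session durations (seconds) on a collection of
# article pages before and after AI-generated related-article links
# were introduced. Real data would come from the site's analytics.
sessions_before = [310, 450, 280, 520, 390, 610, 340]
sessions_after = [220, 300, 250, 410, 270, 330, 290]

before = median(sessions_before)
after = median(sessions_after)
change = (after - before) / before * 100

print(f"Median session: {before:.0f}s before, {after:.0f}s after ({change:+.1f}%)")
# A fall in median session time *might* mean users found what they were
# looking for faster; it needs careful interpretation before being read
# as an improvement in the quality of service.
```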

Bias did not begin with machine learning, but the use of AI tools has uncomfortably revealed the existence of bias in much thinking around academic publishing. For example, it is well known that a comparison of publications and peer reviews indicates that Western researchers currently carry out more peer reviews than their counterparts in Asia. We can only speculate why this should be, but one suggested explanation is that the problem of disambiguating similar names is particularly acute with Asian names. Potentially, an AI tool that can disambiguate different researchers with the same name would be of great benefit to publishers, as it would reduce, if not eliminate, this particular challenge.
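
To make the idea concrete, here is a toy Python sketch of one common disambiguation signal: two records carrying the same author name are more likely to belong to the same researcher if their affiliations or co-author lists overlap. Real disambiguation systems combine many such signals; the names, records and threshold here are invented for illustration.

```python
def same_person(record_a, record_b, threshold=0.5):
    # Jaccard overlap of the two co-author sets: one illustrative signal.
    union = record_a["coauthors"] | record_b["coauthors"]
    overlap = len(record_a["coauthors"] & record_b["coauthors"]) / max(len(union), 1)
    same_affiliation = record_a["affiliation"] == record_b["affiliation"]
    return same_affiliation or overlap >= threshold

rec1 = {"name": "Wei Zhang", "affiliation": "University X",
        "coauthors": {"L. Chen", "M. Liu"}}
rec2 = {"name": "Wei Zhang", "affiliation": "University Y",
        "coauthors": {"L. Chen", "M. Liu", "K. Wang"}}

print(same_person(rec1, rec2))  # True: strong co-author overlap
```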

We welcome your participation in the survey and we look forward to reporting the conclusions in October.