AI, the new frontier – opportunities and challenges


Artificial intelligence is the future for all industries – especially scholarly publishing, writes Darrell Gunter

Artificial intelligence (AI) is currently all the rage in our global economy. The launch of ChatGPT broke records for user adoption – Reuters reported that ChatGPT reached 100 million users within two months of its launch.

The AI boom has created demand for talent, products and services that promise a better society. However, we are also seeing bad actors take advantage of the situation for personal gain. We have faced bad actors throughout history, and collectively we must fight them diligently.

Within scholarly publishing, we have ushered in the internet, digital journals and books, and now we are witnessing first-hand the benefits of AI, semantic search, the IoT and Web3. This article aims to provide context on the history of AI, along with its opportunities, challenges, new services and governance.

The history of AI

The term 'artificial intelligence' was coined by John McCarthy in 1956, when he convened the first academic conference on the subject. But the journey to understand whether machines can truly think began before that. In his seminal work, As We May Think, Vannevar Bush proposed a system that amplifies people's own knowledge and understanding.

"As We May Think" is a 1945 essay by Vannevar Bush that has been described as visionary and influential, anticipating many aspects of information society. It was first published in The Atlantic in July 1945 and republished in an abridged version in September 1945 – before and after the atomic bombings of Hiroshima and Nagasaki. Bush expresses his concern for the direction of scientific efforts toward destruction rather than understanding and explicates a desire for a sort of collective memory machine with his concept of the memex that would make knowledge more accessible, believing that it would help fix these problems. Through this machine, Bush hoped to transform an information explosion into a knowledge explosion.

Five years later, in 1950, Alan Turing wrote a paper on the notion of machines being able to simulate human beings and to do intelligent things, such as playing chess.

Artificial intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines in contrast to the natural intelligence displayed by humans and other animals. In computer science, AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximise its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as learning and problem-solving.

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring "intelligence" are often removed from the definition, a phenomenon known as the AI effect, leading to the popular quip: "AI is whatever hasn't been done yet." For instance, optical character recognition is frequently excluded from "artificial intelligence", having become a routine technology. Modern machine capabilities generally classified as AI include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go), autonomously operating cars, and intelligent routing in content delivery networks and military simulations.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success and renewed funding. For most of its history, AI research has been divided into subfields that often fail to communicate with each other. These subfields are based on technical considerations, such as particular goals (for example, "robotics" or "machine learning"), the use of particular tools ("logic" or artificial neural networks), or deep philosophical differences. Subfields have also been based on social factors (particular institutions or the work of particular researchers).

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. General intelligence is among the field's long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimisation, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many others.

The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it". This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues that have been explored by myth, fiction and philosophy since antiquity. Some people also consider AI to be a danger to humanity if it progresses unabated. Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.

In the 21st century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding, and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.

AI opportunities

We have witnessed, and are witnessing, new AI applications every day. Detailed below are 10 applications that are being used today.

  • Automated customer support - answers basic questions, prioritises incoming calls, aggregates data, provides insights, and so on;
  • Personalised shopping experience - provides you with ideas, products, and services that fit your profile and needs;
  • Healthcare - analysis of data and connecting content in context and adjacent information;
  • Finance - the implementation of machine learning, algorithmic trading, adaptive intelligence, chatbots, automation, and so on, into an array of processes;
  • Smart cars and drones - self-driving cars and drone package deliveries are being tested today;
  • Travel and navigation - AI helps you to avoid accidents and other barriers, and also provides you with an accurate arrival time;
  • Social media - your profiles on LinkedIn, Facebook, Instagram, X, TikTok, and so on, determine what you see in your feed;
  • Smart home devices - leveraging smart assistants, AI is able to manage the lighting, temperature, and security in your home;
  • Creative arts - IBM's Watson provides cooks and chefs with ideas for new recipes and wine pairings; and
  • Security and surveillance - technologies such as facial recognition and voice recognition are getting better with each passing day.

AI challenges, risks, and bad actors

The promise and opportunities of AI are many but, as we have already experienced, bad actors have pounced on the chance to take advantage of the global community.

What are the challenges that we face and must address to ensure that AI is used in the most productive manner in our society? I am sure the list is long; however, I will start with what I feel are the most urgent.

  • Governance - Our global policymakers must urgently establish ground rules and laws to manage the applications, ethics and rules for AI;
  • Bridging information gaps in current algorithms to ensure that AI systems work from complete information; and
  • Establishing the right questions for the algorithms to answer.

The risks of incomplete or biased AI are many. The New York Times recently reported, for example, that a woman who was eight months pregnant was charged with a crime she did not commit because of faulty facial recognition.

A 2019 Brookings Institution article on AI bias identified four key challenges in managing bias in AI:

  1. Bias built into data;
  2. AI-induced bias;
  3. Teaching AI human rules; and
  4. Evaluating cases of suspected AI bias.

Each item presents a unique challenge for policymakers, academics, NGOs, and the public. The Algorithmic Justice League is one of the leading advocates working to eliminate AI bias.

There are potential solutions, however: bias auditing tools typically rely on a combination of several methods to detect and analyse bias in AI systems. These methods can include fairness metrics, counterfactual analysis, sensitivity analysis, algorithmic transparency, and adversarial testing.
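To make the first of those methods concrete, the short Python sketch below computes two widely used fairness metrics, the demographic parity difference and the equal-opportunity gap, over a small set of hypothetical model decisions. The data are invented purely for illustration; real bias audits run checks like these across many metrics, groups and datasets, and this is not a description of any particular auditing product.

    # Illustrative sketch only: two common fairness metrics a bias audit might check.
    # The decisions and labels below are hypothetical, not from any real system.

    def selection_rate(decisions):
        """Share of positive decisions (e.g. approvals) within a group."""
        return sum(decisions) / len(decisions)

    def demographic_parity_difference(decisions_a, decisions_b):
        """Gap in positive-decision rates between two groups; values near 0 suggest parity."""
        return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

    def true_positive_rate(decisions, labels):
        """Of the cases that truly merited a positive outcome, how many received one?"""
        hits = [d for d, y in zip(decisions, labels) if y == 1]
        return sum(hits) / len(hits)

    def equal_opportunity_gap(dec_a, lab_a, dec_b, lab_b):
        """Difference in true-positive rates across groups (equal opportunity)."""
        return abs(true_positive_rate(dec_a, lab_a) - true_positive_rate(dec_b, lab_b))

    # Hypothetical model decisions (1 = positive outcome) and ground-truth labels
    group_a_dec, group_a_lab = [1, 1, 0, 1, 0, 1, 1, 0], [1, 1, 0, 1, 1, 1, 0, 0]
    group_b_dec, group_b_lab = [0, 1, 0, 0, 1, 0, 0, 0], [1, 1, 0, 1, 1, 0, 0, 0]

    print("Demographic parity difference:",
          demographic_parity_difference(group_a_dec, group_b_dec))
    print("Equal opportunity gap:",
          equal_opportunity_gap(group_a_dec, group_a_lab, group_b_dec, group_b_lab))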

Developments in scholarly publishing

The scholarly publishing industry has historically moved slowly to adopt new technology. My Against The Grain article, As Worlds Collide - New Trends and Disruptive Technologies, chronicled the 2001 PSP symposium titled The Ebook: Crouching Dragon or Hidden Tiger?, at which publishers and librarians debated the pros and cons of the e-book. In my opinion, that panel delayed the digital ebook transformation.

Even though companies like Collexis and Parity Computing (both acquired by Elsevier) introduced semantic search, an application of AI, back in 2007, the publishing industry as a whole has yet to adopt it. In my upcoming talk on semantic search at the Internet Librarian Conference, I will provide the audience with the current best practices and opportunities.
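As a rough illustration of what semantic search involves, the Python sketch below uses the open-source sentence-transformers library to embed a toy corpus and rank it against a natural-language query by meaning rather than by keyword overlap. The model name, corpus and query are hypothetical examples chosen for this sketch, not a description of any publisher's product.

    # Minimal sketch of embedding-based semantic search (illustrative only).
    # Requires: pip install sentence-transformers numpy
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

    corpus = [
        "CRISPR-based gene editing in agricultural crops",
        "Machine learning methods for peer-review triage",
        "Blockchain approaches to research data provenance",
    ]
    corpus_vecs = model.encode(corpus, normalize_embeddings=True)

    query = "AI tools that help editors screen manuscripts"
    query_vec = model.encode([query], normalize_embeddings=True)[0]

    # With normalised vectors, the dot product is cosine similarity:
    # documents are ranked by meaning, not by shared keywords.
    scores = corpus_vecs @ query_vec
    for idx in np.argsort(-scores):
        print(f"{scores[idx]:.3f}  {corpus[idx]}")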

Publishers with active AI products and services currently include:

  • Clarivate describes several AI applications in their products and services;
  • Digital Science announced a limited and exclusive beta launch of Dimensions AI Assistant, a new research tool designed to enhance how users engage with the wealth of knowledge available on Dimensions, among the world's largest linked research databases;
  • Edanz Journal Selector, launched in 2012, has been helping researchers to select the best journal for their manuscript utilising semantic tools;
  • Elsevier released an alpha version of Scopus AI for researcher testing – a next-generation tool that combines generative AI with Scopus' trusted content and data to help researchers get deeper insights faster, support collaboration and societal impact of research;
  • Underline Science, Inc. has launched its Digital Video Library (35,000+ videos), and its new AI Hub contains 10,700+ AI conference lecture videos; and 
  • Wolters Kluwer recently acquired Della AI, a provider of leading artificial intelligence (AI) technology based on advanced natural language processing (NLP). This technology allows legal professionals to review contracts in multiple languages by simply asking questions.

In my edited volume titled Transforming Scholarly Research with Blockchain Technologies and AI, chapters 6, 8, and 10 provide several potential uses of AI in scholarly publishing.

AI hardware and intellectual capital

Currently, ChatGPT is all the rage, but if you want to maintain control of and privacy over your intellectual capital, you should not use it. Recently, some Samsung engineers used ChatGPT to help them write new code, but they made the mistake of entering proprietary source code into the tool, thereby exposing that code outside the company.

This incident highlights the need for companies to run proprietary AI tools in-house, which requires high-performance workstations built around high-performance chips such as NVIDIA's. Demand for those chips is very high. MAINGEAR, a manufacturer of high-performance workstations and an NVIDIA partner, is well known as one of the leaders in the field; a recent TechRadar review rated its machines the best PCs for gamers who require that level of performance.

Companies from all industries will require these high-performance PCs to power their internal AI applications.

The crystal ball

My crystal ball shows the following:

  1. The AI wave is real and gaining momentum every day;
  2. Publishers who do not adopt AI in their planning will do so at their peril;
  3. The governance of AI is crucial, and all stakeholders must be active participants;
  4. Partnership opportunities will allow the publishing community to focus on what it does best – publish the world's best research.

Summary

AI began its journey in 1956 and has grown immensely over time. With advances in knowledge management and machine learning, and the growth of PC microprocessing power, we can expect progress to come fast and furious. However, we must be mindful of bad actors taking advantage of the situation and meet them with our counter-offensives.

The publishing industry must embrace AI and provide the necessary investment to build the most robust, innovative, thought-provoking AI tools. In the 1980s, Telerate provided financial information for the fixed-income and foreign exchange markets and had 100% of the market. Bloomberg launched in 1985 to compete with Telerate. By 1998 Bloomberg had overtaken Telerate, and Telerate was out of business a few years later.

The moral of this story is that any publisher that does not invest and innovate with AI could be the next Telerate. AI is the future for all industries – and especially the scholarly publishing industry.

Darrell Gunter is CEO of Gunter Media Group, and the author of Transforming Scholarly Research with Blockchain Technologies and AI.