The evolution of an industry
Roger Valade traces developments in scholarly communications over the last couple of decades, from CD-ROM to artificial intelligence
Technology was a fascination of mine long before it became a career.
On Christmas morning in 1984, I woke to find a life-changing present beneath the tree: a brand-new IBM PCjr computer with a glorious Epson dot-matrix printer (those crisply perforated pin-fed sheets!) and not one but two five-and-a-quarter-inch floppy drives. How my father revived me from my rhapsodic loss of consciousness remains a mystery. I think I wanted a Commodore 128, but he was certain an IBM would set me up with better skills for the business world and, as with so many other things, he was right. I don’t think I left that computer for a week and I’ve been technology-entranced ever since.
Almost a decade later, my first job was as a writer and editor at Gale Research in Detroit, a position I was very happy to land as a recent English graduate (physics, my other love, didn’t stick after we got to Schrödinger). We were using IBM PS/2s and I dived straight in, writing macros in WordPerfect to help automate the publishing process. That was that – it was probably the first time I realised technology could be a career and not just a hobby. I started working on the interactive CD-ROM products Gale was building and eventually our first online search products. Since then, I’ve worked in energy, automotive, consulting, publishing and education, but I’m thrilled to be back in the information industry.
Now, as CTO at ProQuest, it’s remarkable to reflect on how the industry has evolved with technology – and how vendors and publishers can use new services to continue to meet the evolving needs of users. A lot has changed since I left. And a lot hasn’t.
2002: The end of an era
Sixteen years ago, when Research Information published its first issue, if you wanted to do serious research you had to go to the library. If you wanted to watch a movie, you had to go to the theatre or Blockbuster. But things were right on the cusp of a significant change.
Students and scholars still relied heavily on databases to access reliable information, and they trusted librarians to help them find it. Academic journals and texts were still the key sources of information for research – and, in most cases, they were incredibly difficult to access outside of an academic library or other university resources. Vendors and publishers supplied academics with the content they needed to support faculty and students; faculty and students came to the library to use it.
But as the internet age first crashed around like a toddler and then started getting its legs, vendors and publishers became less passive and more entrepreneurial in meeting users’ needs. Advances in technology meant that libraries no longer had to wait for users to come to them.
Google Scholar was just two years from its launch in beta; streaming video, despite its relatively poor quality and lack of relevance in scholarly research, existed (before YouTube!); and e-book platforms were a novelty. ‘The Frankfurt Book Fair has pulled itself into the 21st century… with the announcement of the nominations for its inaugural e-book awards,’ wrote a reporter for The Guardian in October of 2000.
Fast-forward to 2019 and our role has shifted dramatically. Reliable information is no longer available only at the library. Or on the web. Information is everywhere, as omnipresent as Starbucks and free WiFi, from smart devices to Siri to Alexa to your car. It’s wonderful. It’s transformative. It’s terrifying. At ProQuest, we recently surveyed 1,300 faculty and students to better understand their research habits. We saw that while peer-reviewed journals are still essential for research and teaching, the mix of content used by faculty and students has expanded to include primary sources, e-books, video, and much more. Information-seekers not only have too much information to sort through, but now they must access it in an array of formats.
As vendors and publishers, we continue to be providers of information – but now that virtually anyone can access information, we need to step further into the roles of curators and evaluators. Researchers are overwhelmed and our job is to make them less so; to give them the tools to write a simple search query, find what they need, and access it quickly.
What’s next with AI
Artificial intelligence has been cresting the hype curve lately. We’ve been hearing about AI computing advances in waves for decades and dreaming about it in science fiction for two centuries. Now we are seeing it become reality much more tangibly, from auto-indexing and categorising text, to detecting the content of images, to optimising your commute home.
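To make one of those applications concrete, here is a minimal sketch of automated text categorisation – the kind of indexing work this technology now absorbs. It is a toy illustration assuming Python and scikit-learn, with made-up documents and subject labels, not a description of any production system.

# A toy sketch of automated text categorisation, not any vendor's
# production pipeline: TF-IDF features feeding a naive Bayes classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A tiny hand-labelled training set: document text -> subject category.
documents = [
    "Quantum entanglement in photonic lattices",
    "Monetary policy and inflation expectations",
    "CRISPR gene editing and crop resilience",
    "Exchange rates under floating currency regimes",
]
labels = ["physics", "economics", "biology", "economics"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(documents, labels)

# Auto-index a new, unseen abstract.
print(model.predict(["Inflation, interest rates and monetary policy shocks"]))
# -> ['economics']

At production scale the same idea applies across millions of documents and far richer feature sets; the point is that the routine assignment of subject headings no longer needs a human for every record.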
This increasingly sophisticated technology is already helping to evolve the way we develop our platforms and curate our content. It has allowed us to create interactive documents, translate video into multiple languages and digitise previously unavailable primary sources – in formats that are searchable, easy to use, enriched with metadata and compatible with various devices.
The practicality and availability of these tools have enabled us to be much more proactive. Instead of waiting for users to come to us, we’ve had to start going to them and adapting to the ways they work. And with AI, we can be even more proactive in how we do so.
Nomenclature matters, and there will always be some scepticism around highly touted technologies and their grand promises. It doesn’t help that we use the term ‘artificial’ intelligence, in the same way that the ‘extreme’ in extreme programming probably wasn’t terribly helpful. Is ‘artificial’ useful in this context? I’ve never really liked the term.
I just finished Microsoft CEO Satya Nadella’s book, Hit Refresh, and I love his perspective: ‘Today we don’t think of aviation as “artificial flight” – it is simply flight. In the same way, we shouldn’t think of technological intelligence as artificial, but rather as intelligence that serves to augment human capabilities and capacities.’
Storing and providing access to information at today’s scale necessarily goes beyond the capability of the human brain. At ProQuest, we’re building new tools into our platforms that help us discover more about our users, their needs and behaviours, so we can allow them to sort through billions of sources to find the few they need. But behind the curtain, we’re still humans – developers, librarians, scholars – who are creating better paths to research. AI enables us to automate certain processes with remarkable accuracy, while reserving the more complex components of our curatorial processes for our biological brains. At least for now.
The more things change, the more they stay the same
It’s a cliché, but when we think about the companies that have survived the technological boom of the past two decades – Apple, Google, Microsoft – they all have one thing in common: their technology has changed, but their mission hasn’t. Our mission remains the same as it was in 2002, and in 1984, and in the years before that. In 1938, Eugene Power reimagined an industry by bringing microfilm technology to the researcher, dramatically simplifying scholars’ lives. Eight decades later, we’re still here with the same mission: providing access to the information people need to make their own research breakthroughs – and we will continue on this mission with the support of ever-evolving technology. What an unartificially fantastic ride we are on.
Roger Valade is chief technology officer at ProQuest