Responsible AI in academic publishing

Harry Muncey

Harry Muncey explains why the responsible use of artificial intelligence is an innovation accelerant and a business asset

In an era defined by rapid technological change, organisations adopting responsible artificial intelligence (AI) practices aren’t just safeguarding ethical standards; they’re positioning themselves for sustainable growth and a lasting competitive edge. The responsible use of these technologies is crucial both for driving innovation and for fostering trust among customers and society at large.

Responsible AI is a strategic asset—a force for meaningful progress in science, research, and beyond. Behind every successful AI deployment is a team of data scientists, technologists, subject matter experts, software engineers, and user experience designers committed to building ethical, reliable, and transparent systems. This people-centred approach to AI fosters trust and accelerates continuous innovation across the AI lifecycle to ensure better outcomes for organisations and the communities they serve.

Responsibility as the competitive edge

A responsible AI framework isn’t just a moral imperative; it’s a strategic necessity. As companies rush to implement the latest AI advancements, they need mature, carefully constructed strategies that prioritise long-term stability over short-term gains. Responsible AI is a structured approach that mitigates risk, builds confidence, and accelerates innovation – granting a competitive edge.

Research supports this: a 2022 MIT Sloan Management Review study found that organisations with responsible AI practices are three times as likely to see benefits in talent recruitment and retention, customer retention, and accelerated innovation. And 2023 data from Bain & Company revealed that organisations with an effective responsible AI approach can double their profits.

Responsible AI development promotes trust and transparency, creating a self-sustaining cycle of innovation. Employees are more engaged in environments aligned with their values, which enhances creativity and fuels further innovation. The result is a feedback loop where responsible AI practices drive both business success and employee satisfaction, and position organisations to lead innovation.

Responsible AI as a socio-technical challenge

The responsible use of AI is a socio-technical issue requiring human oversight as well as technical solutions to address inherent biases, ethical considerations, and transparency challenges. Although technology can correct certain biases in data and AI-generated content, human judgment remains indispensable in assessing, refining, communicating, and applying AI safely.

This manifests through several key practices:

  • Addressing unconscious bias in decision-making with an inclusive approach to data collection, model training, and outcome evaluation.
  • Involving subject matter experts throughout the AI lifecycle to ensure that technical decisions align with real-world applications and ethical standards.
  • Providing transparency and explainability, giving users a line of sight into decision-making and information sources within AI tools.
  • Communicating limitations and risks openly, ensuring that users understand the scope and context in which AI-generated insights should be applied.

By embedding these practices, organisations create a foundation of accountability that respects the socio-technical nature of AI: it isn’t just a product of algorithms but a shared endeavour of technology and human values.

Collective responsibility: a team-based approach to ethical AI

Building and implementing responsible AI is a collective effort. From data scientists and product designers to customers and customer experience specialists, each stakeholder helps ensure that AI systems are ethical, transparent, and bias-free. At Elsevier, this manifests as a holistic approach focused on education, feedback, and shared values at every stage.

An example of this approach in action is Scopus AI, which uses generative AI to curate research summaries from one of the largest databases of peer-reviewed journals in the world. It is designed with robust safeguards, informed by librarians and researchers, and grounded in verified, reliable content. The teams responsible for the platform work to a clearly defined ethical framework of transparency, accountability, and privacy, and draw on continuous user feedback to fine-tune the tool, address real-world needs, and maintain trust.

Key elements of the framework include:

  • Training and educating users to recognise biases and limitations in AI outputs.
  • Incorporating user feedback in development cycles so AI systems can evolve in response.
  • Defining principles of responsibility so that AI tools meet standards of ethics and trust.

To evaluate the potential risks associated with each AI deployment, we also conduct algorithmic impact assessments. These bring together a diverse team: data scientists familiar with the datasets, product designers who understand user needs, and end-users who provide critical feedback. Such cross-functional teams build AI solutions that are not only technically robust but also socially responsible and user-centred.

As AI continues to evolve, a human-centred approach is essential. But responsible AI is more than just a “human touch”. It’s about embedding humanity into every aspect of the technology, grounding each decision in ethics, dignity, and societal well-being. We are responsible for ensuring that AI empowers positive outcomes, works to prevent bias, and supports knowledge discovery without sacrificing integrity.

Responsible AI is the cornerstone of true innovation and business success. Without it, AI’s potential to transform industries could be overshadowed by risks, hallucinations, inefficiencies, and public mistrust. By committing to responsible practices, we can advance trustworthy and effective tools that accelerate progress, drive discovery, and enrich our understanding of the world. It is this commitment to ethical, people-centred AI development that will set industry leaders apart.

Harry Muncey is Senior Director of Data Science and Responsible AI at Elsevier
