As AI adoption grows, how should research record it?

Researchers operating across multiple institutions and publication venues face a confusing landscape, writes Joanna Young

There is nothing inevitable about the rules that govern how research is reported.

The data availability statement, now so routine that researchers append it without thought, was not worked out in advance. It was contingent, the product of accumulated failure to reproduce studies, slow institutional pressure and the decisions of specific editors at specific journals at moments when the evidence had finally become impossible to ignore.

Had those contingencies not aligned, we might still be publishing results with no obligation to show how we got there. AI disclosure is entering the same process now.

Survey data collected by Electv Training across workshops with over 1,600 researchers between 2023 and 2025 indicates rising AI adoption: 44% of respondents in 2023-24 reported using AI tools in their work, rising to 60% in 2024-25.

These figures should be read with care – the sample comprises researchers attending AI training and is likely to over-represent early adopters – but the direction is consistent with findings from other researcher surveys. AI use in research workflows is expanding.

Yet there is limited guidance on how to account for it.

Researchers operating across multiple institutions and publication venues face a landscape of contradictory, incomplete and rapidly changing requirements. Three questions recur in workshop settings: what am I allowed to do; what tools should I use; and how do I document my use? The third question has become markedly more common in the past six months. It reflects a shift: researchers are no longer asking whether to use AI but how to account for having done so.

A diverse policy landscape

The policy environment is responding, albeit unevenly. Publishers have issued AI authorship guidelines: Wiley, for example, has developed resources outlining how AI should be acknowledged in manuscript submissions. Organisations such as COPE have published positions on AI’s role in publication and the ICMJE have recently updated their recommendations for the appropriate use of AI in publishing.

Funder requirements are less developed: UKRI and Wellcome have signalled expectations around transparency but have not yet specified disclosure formats. The variation between requirements – across publishers, funders and institutions – is itself a challenge. A researcher submitting to three journals in a year may encounter three different disclosure expectations, none of which maps cleanly onto their institution’s policy.

For disclosure to be more than a compliance exercise, it needs structure. Useful AI disclosure should address: the stage of the research process at which AI was used; the nature of the contribution (text generation, data analysis, literature synthesis, code production and so on); the specific tool or model; the degree to which AI output shaped interpretation or presentation; and the human oversight applied to verify AI outputs. Without structure across these dimensions, a disclosure statement can satisfy a formal requirement while telling a reader very little about what actually occurred.
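
To make those dimensions concrete, consider the following sketch, written in Python purely as an illustration. The record structure and field names here are hypothetical – they are not drawn from any publisher's policy, nor from the AIR framework discussed below – but they show the level of specificity a useful disclosure statement would need to capture.

from dataclasses import dataclass

@dataclass
class AIUseRecord:
    """One instance of AI use in a project; field names are illustrative only."""
    stage: str         # research stage, e.g. "literature synthesis", "drafting"
    contribution: str  # nature of the contribution, e.g. "text generation"
    tool: str          # specific tool or model, including version where known
    influence: str     # degree to which output shaped interpretation or presentation
    oversight: str     # human verification applied to the output

# A hypothetical entry, purely to show the detail a reader would need:
example = AIUseRecord(
    stage="literature synthesis",
    contribution="summarising candidate papers for screening",
    tool="a general-purpose LLM (name and version recorded at time of use)",
    influence="summaries guided scoping only; every included paper was read in full",
    oversight="each AI-generated summary was checked against the source text",
)

A disclosure built this way answers all five questions at once; a statement that omits any one of them leaves the reader guessing about what actually occurred.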

Developing that structure is not straightforward. Research workflows are increasingly iterative, with AI tools used informally across multiple stages of a project. Researchers may not be able to reconstruct precise usage retrospectively.

Disciplinary variation adds further complexity: AI use in computational biology looks quite different from AI use in qualitative social research, and frameworks that work for one may not suit the other. Any workable approach will need to be adaptable without losing what makes it useful.

An adaptable framework

To assist with AI disclosure, Electv Training has developed AIR (AI in Research), a framework designed to help researchers map AI use across stages of the research process and document how it was involved.

Rather than focusing on particular tools, it encourages clearer articulation of where AI was used, what role it played and how outputs were reviewed. The intention is to provide a consistent way of describing AI use without prescribing what is acceptable or unacceptable. The framework is designed to sit alongside existing integrity guidance and to be adaptable across disciplines and institutions. It is available on Figshare (CC BY 4.0).

A central challenge now is how to encourage disclosure of AI use without creating an additional administrative burden. Researchers already navigate complex reporting requirements. Any approach to AI transparency will need to be practical, offering enough structure to be useful while remaining realistic about the time it demands.

Expectations around AI disclosure are likely to continue evolving across publishers, funders and institutions.

Institutional infrastructure is needed to support and encourage AI disclosure. Research offices need clear internal policy on what AI disclosure requires, integrated with existing ethics and data management processes. Doctoral schools need to address AI use in researcher development and training. Librarians, who are already fielding many of these questions, need institutional backing to offer consistent answers. Until that infrastructure exists, disclosure guidance will reach researchers as noise rather than support.

Dr Joanna Young is Director, Electv Training – a division of The Scientific Editing Company 
