
Science sleuths flag hundreds of papers that use AI without disclosing it

[Image: A mature man holding a pen looks at a large computer screen displaying text from an AI chatbot. Credit: Laurence Dutton/Getty]

“As of my last knowledge update”, “regenerate response”, “as an AI language model” — these are just a few of the telltale signs of researchers’ use of artificial intelligence (AI) that science-integrity watchers have found sprinkled through papers in the scholarly literature.

Generative AI tools such as ChatGPT have quickly transformed academic publishing. Scientists are increasingly using them to prepare and review manuscripts, and publishers have scrambled to create guidelines for their ethical use. Although policies vary, many publishers require authors to disclose the use of AI in the preparation of scientific papers.

But science sleuths have identified hundreds of cases in which AI tools seem to have been used without disclosure. In some cases, the papers have been silently corrected — the hallmark AI phrases removed without acknowledgement. This type of quiet change is a potential threat to scientific integrity, say some researchers.

Such changes have appeared in a “small minority of journals”, says Alex Glynn, a research literacy and communications instructor at the University of Louisville in Kentucky. But given that there are probably also many cases in which authors have used AI without leaving obvious signs, “I am surprised by how much there is”, he adds.

‘I am an AI language model’

Since 2023, integrity specialists have flagged papers with obvious signs of undisclosed AI use, such as those that contain the phrase “regenerate response”, the label of a button that some chatbots based on large language models display when a user wants a new answer to a query. Such phrases can appear in articles when an author copies and pastes a chatbot’s responses.

One of the first cases that Glynn recalls seeing was in a now-retracted paper published in 2024 in Radiology Case Reports [1] that contained the chatbot phrase “I am an AI language model”. “It was as blatant as it could possibly be,” Glynn says. “Somehow this passed not only the authors’ eyes, but the editors, reviewers, typesetters and everyone else who was involved in the production process.”

Glynn has since found hundreds more papers with hallmarks of AI use — including some containing subtler signs, such as the words “Certainly, here are”, another phrase typical of AI chatbots. He created an online tracker, Academ-AI, to log these cases — and has more than 700 papers listed. In an analysis of the first 500 papers flagged, released as a preprint in November [2], Glynn found that 13% of these articles appeared in journals belonging to large publishers, such as Elsevier, Springer Nature and MDPI.
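
How might such phrase-based flagging work in practice? At its simplest, it amounts to scanning a paper’s text for telltale strings. The short Python sketch below shows one minimal, hypothetical way to screen a manuscript for the chatbot phrases quoted in this article; the phrase list comes from the examples above, but the matching logic is an illustrative assumption, not the actual methodology behind Academ-AI.

```python
# Minimal, hypothetical sketch of phrase-based screening for undisclosed AI use.
# The phrase list comes from examples quoted in this article; the matching
# approach is an illustrative assumption, not Academ-AI's actual method.

# Telltale chatbot phrases mentioned in the article.
TELLTALE_PHRASES = [
    "as of my last knowledge update",
    "regenerate response",
    "as an ai language model",
    "i am an ai language model",
    "certainly, here are",
]

def flag_ai_phrases(text: str) -> list[str]:
    """Return every telltale phrase found in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

if __name__ == "__main__":
    sample = (
        "Certainly, here are the key findings of this case report. "
        "As an AI language model, I cannot interpret the radiograph directly."
    )
    matches = flag_ai_phrases(sample)
    if matches:
        print("Possible undisclosed AI use; matched phrases:", matches)
```

A real screening pipeline would also have to weed out false positives, since a paper about chatbots might quote these phrases legitimately; that is presumably one reason flagged papers are inspected individually rather than listed automatically.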

Artur Strzelecki, a researcher at the University of Economics in Katowice, Poland, has also gathered examples of undisclosed AI use in papers, focusing on reputable journals. In a study published in December, he identified 64 papers that were published in journals categorized by the Scopus academic database as being in the top quartile for their field [3]. “These are places where we’d expect good work from editors and decent reviews,” Strzelecki says.

Nature’s news team contacted several publishers whose papers had been flagged by Glynn and Strzelecki, including Springer Nature, Taylor & Francis and IEEE. (Nature’s news team is editorially independent of its publisher, Springer Nature.) All said that the flagged papers are under investigation. They also pointed to their AI policies — which, in some cases, do not require disclosure of AI use or require it only for certain uses. Springer (owned by Springer Nature), for example, states that AI-assisted copy editing, which includes changes made for readability, style, and grammar or spelling errors, need not be flagged.

Kim Eggleton, head of peer review and research integrity at IOP Publishing in Bristol, UK, notes that although the publisher introduced a policy requiring authors to declare AI use in 2023, it changed the rules last year to reflect the ubiquity of the tools. “While we encourage authors to disclose the use of AI, it is no longer mandated,” Eggleton says. “We are focusing on ensuring the accuracy and robustness of the content through a combination of automated and human checks, rather than prohibiting AI completely.” IOP’s policy does, however, prohibit the use of AI to “create, alter or manipulate” research data or results.
