Fake AI images will cause headaches for journals

AI-generated fake X-ray images that fooled medical experts: the images in the top row were generated by AI; those in the bottom row are real knee X-rays.

Reliable tools to detect AI-generated images in research papers are not yet available. Credit: University of Jyväskylä

Microscopic images of tissue samples can be generated so convincingly by artificial intelligence (AI) that journal editors, peer reviewers and readers are being warned to take a much closer look when reading papers.

A study that presented people with real and fake histological images — used to study the microscopic structure of tissues — found that many participants could not tell the difference. To make it easier to spot the fakes, the authors are urging journal editors to require raw data from researchers wanting to publish papers containing such images.

In the study, 816 undergraduate students at German universities were given eight genuine and eight fake histological images, and were asked to identify which were real. Of the 290 participants who said they hadn’t seen such images before, 55% could classify them correctly. Among the 526 participants who were familiar with such images, 70% classified them correctly, according to the study, published in Scientific Reports last November¹.
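For context, chance performance on a binary real-versus-fake task is 50%. A rough back-of-the-envelope check, reading those percentages as overall classification accuracy and treating every classification as independent (a simplification; responses from the same participant are correlated, and the study's own statistics will differ), suggests even the novices' 55% is unlikely to be pure guessing:

```python
# Back-of-the-envelope check (not the study's analysis): is 55% accuracy
# distinguishable from the 50% expected by guessing? Assumes independence
# of all classifications, which is a simplification.
from scipy.stats import binomtest

IMAGES_PER_PARTICIPANT = 16  # eight genuine plus eight fake, as in the study

for group, participants, accuracy in [("novices", 290, 0.55),
                                      ("experienced", 526, 0.70)]:
    n = participants * IMAGES_PER_PARTICIPANT   # total classifications
    k = round(accuracy * n)                     # correct classifications
    result = binomtest(k, n, p=0.5, alternative="greater")
    print(f"{group}: {k}/{n} correct, p = {result.pvalue:.2e}")
```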

“I was really surprised how little it took to generate images that looked like true histology,” says study co-author Ralf Mrowka, a nephrologist at Jena University Hospital in Germany.

Researchers and publishers use many tools and techniques to spot fake images. Several AI-based systems on the market can detect common signatures of doctored images, such as the replication, deletion or splicing of parts of an image. But those approaches can’t reliably detect AI-generated images, Mrowka says.
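To make the first of those signatures concrete, here is a minimal copy-move (region duplication) check. It is not one of the commercial tools referred to above, just an illustration of the underlying idea, assuming OpenCV and NumPy are installed; the function name, input file and threshold values are arbitrary choices for the sketch, not tuned for real forensics work:

```python
# A crude copy-move (region duplication) detector: keypoints whose
# descriptors match closely but that sit far apart in the same image
# are a classic signature of copy-paste edits.
import cv2
import numpy as np

def find_duplicated_regions(path, min_distance=40, max_ratio=0.6):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    if descriptors is None:
        return []

    # Match every keypoint against all others; k=3 gives the trivial
    # self-match plus the two nearest genuine candidates.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(descriptors, descriptors, k=3)

    suspects = []
    for m in matches:
        if len(m) < 3:
            continue
        best, second = m[1], m[2]  # skip m[0], the self-match
        # Lowe's ratio test: keep only clearly unambiguous matches.
        if best.distance < max_ratio * second.distance:
            p1 = np.array(keypoints[best.queryIdx].pt)
            p2 = np.array(keypoints[best.trainIdx].pt)
            # Nearby near-duplicates are normal texture; distant ones are not.
            if np.linalg.norm(p1 - p2) > min_distance:
                suspects.append((tuple(p1.astype(int)), tuple(p2.astype(int))))
    return suspects

if __name__ == "__main__":
    pairs = find_duplicated_regions("figure.png")  # hypothetical input file
    print(f"{len(pairs)} suspicious keypoint pairs found")
```

The design choice is typical of classical image forensics: genuinely duplicated patches produce near-identical descriptors at distant locations, whereas an image synthesized from scratch by a generative model leaves no such pairs, which is one reason detectors of this kind fail on AI-generated fakes.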

Such tools are in development. For example, in March, the Vienna-based company ImageTwin announced that it is beta-testing a software function to detect AI-generated images. But it’s going to be difficult for these tools to keep up with rapidly advancing image-generation technology, Mrowka warns.

The fabrications have reached a level of quality “that will give editors, reviewers and readers a hard time”, he says. “Once I have trained an AI model to generate this type of data, I can generate endless new fake images” and also make modifications to them, he adds.

A challenge for peer reviewers

Chhavi Chauhan, founder of Samast AI, a technology company in Cary, North Carolina, says she was surprised that the study categorized undergraduates, who have limited experience with histological images, as experts. She thinks the results might vary if the participants were actual specialists.

Chauhan, who is also director of scientific outreach at the American Society for Investigative Pathology in Rockville, Maryland, adds that because the student participants were told they would see a mix of real and fake histological images, they were in a much better position to spot the fakes than peer reviewers of papers would be. When peer reviewers are assessing manuscripts, many wouldn’t scrutinize the histological images as closely as they would if they knew there were fakes, she says.

Some image-generation tools, such as Google DeepMind’s SynthID, insert watermarks, which peer reviewers can be trained to look for. But it’s easy to bypass that feature in freely available tools, Mrowka says.
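SynthID's detector is not publicly exposed, but the general idea of checking for an embedded watermark can be illustrated with the open-source invisible-watermark package, which default Stable Diffusion pipelines use to stamp their outputs. A minimal sketch, assuming an image file on disk and the pipelines' default 'StableDiffusionV1' payload (the file and function names are hypothetical):

```python
# A minimal watermark check using the invisible-watermark package
# (pip install invisible-watermark opencv-python). This is NOT SynthID,
# whose detector is not public; it decodes the DWT-DCT watermark that
# default Stable Diffusion pipelines embed in their outputs.
import cv2
from imwatermark import WatermarkDecoder

EXPECTED = "StableDiffusionV1"  # default payload: 17 bytes = 136 bits

def looks_stable_diffusion(path: str) -> bool:
    bgr = cv2.imread(path)
    if bgr is None:
        raise FileNotFoundError(path)
    decoder = WatermarkDecoder('bytes', len(EXPECTED) * 8)
    payload = decoder.decode(bgr, 'dwtDct')
    try:
        return payload.decode('utf-8') == EXPECTED
    except UnicodeDecodeError:
        return False  # decoded bits aren't valid text: no recognizable watermark

if __name__ == "__main__":
    print(looks_stable_diffusion("suspect_figure.png"))  # hypothetical file
```

As Mrowka's point suggests, this protection is shallow: in freely available tools the watermarking step can simply be disabled, and heavy re-encoding or cropping can destroy a fragile payload like this one, so a missing watermark proves nothing.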
