
Meet the academics refusing to use generative AI


Some researchers who refuse to use AI have been accused of being anti-progress — similar to the nineteenth-century Luddites who resisted the new machinery they feared would replace their jobs — but they say their views are more nuanced than that. Credit: Chronicle/Alamy

Danielle Crowley is getting tired of people telling her to use generative artificial intelligence (genAI). As a marine zoologist at Bangor University, UK, she says that she is pretty much the only PhD student in her cohort who does not use it. She has seen colleagues use genAI tools for coding and for getting the tone of e-mails right. On one occasion, she was even encouraged by a lecturer to use it to generate a conference poster.

She says her colleagues are often surprised to hear she hasn’t tried it and have suggested she use it for applications such as coding. “I’ve had a lot of people go like ‘oh but you have to use it’,” she recalls. But Crowley has her reasons. She has concerns about copyright ethics, what she calls a lack of transparency from companies about how they’re using the data, the environmental effects of AI tools and the accuracy of what genAI models spit out.

She also thinks that using the tools would be counterproductive to her studies. “Coding is a skill I want to learn and develop, because it’s not the thing I’m the most confident in,” she says. She would rather try to do it herself and learn from her mistakes.


Marine zoologist Danielle Crowley has concerns about the ethics and environmental impacts of generative AI tools. Credit: Laura Oatley

GenAI has become a hot topic over the past few years, as technology companies compete to release the most impressive model for public use. Researchers are using these tools for tasks such as writing papers, peer review and coding. The tools can save them time, mental energy and sometimes money. But Crowley and others who are purposefully abstaining often find themselves judged by their peers.

“A lot of people say ‘it’s the future, everyone is using it’,” she says. Not using it, she continues, “kind of feels like showing up to a function and saying you don’t drink”.

Efficient, but at what cost?

According to a Nature survey of about 5,000 researchers published in May last year, scientists are split on the ethics of AI use in academia. More than 90% of respondents felt it was acceptable to use AI for editing or translating their own text, but fewer were open to the idea of using it to generate text directly. And only a minority said they had actually used AI tools in their work. About one-quarter of respondents used them to edit their papers, whereas only 8% had used them to translate, summarize or write a first draft.

More recently, a survey of 3,234 researchers published last November by the academic publisher Elsevier found that 58% of researchers used AI in their work, up from 37% the previous year. In terms of how researchers use or would like to use AI tools, 61% said to locate new research, 51% said for collecting and summarizing literature and 41% said for preparing grant applications. Those surveyed were generally positive about the potential of the technology to boost efficiency.

Hugh Possingham, a mathematician and conservation scientist at the University of Queensland in Brisbane, Australia, is among the researchers who are not using AI. He has made a conscious effort to avoid any sort of genAI — instead pledging on LinkedIn to rely on “natural stupidity”.


As a mathematician, Hugh Possingham has seen examples of ‘hallucinations’ in AI-produced writing. Credit: Queensland Government (CC BY SA 4.0)

“I’ve never used any of them at all,” he says. Even though AI has become integrated into many everyday functions, he’s never clicked the button that generates or summarizes text when writing an e-mail, for example.

He complains especially about the errors he has spotted in AI-produced writing. AI sometimes ‘hallucinates’: it provides false or misleading information with conviction. “I read a master’s thesis where the person cited had died ten years before the paper was published, which is a masterful act,” he says.

Audrey Moores, a chemist at McGill University in Montreal, Canada, has seen AI create mistakes in her field, too. She’s witnessed various AI-generated representations of chemicals that are incorrect. She first noticed this in a chemistry journal featuring “nonsensical” molecules, but it has become enough of a trend in presentations and other journals that she and a colleague wrote a comment article1 calling on the chemistry community to ban the use of genAI for certain tasks.

“It’s like you’re asking a three-year-old to draw a chemical,” says Moores. And an AI model has “never gone through the chemistry courses a human would”, she adds. Even when presented with the task of drawing a simple molecule such as caffeine, it might fail. (Nature, along with other publishers, has guidelines that prohibit the use of AI-generated images.)

And verifying AI-generated information often defeats the purpose of using the tool for efficiency, say sceptics. Tanisha Jowsey, a social scientist at Bond University in Robina, Australia, says that as a designated “AI champion” of her faculty, she is supposed to appraise models, work out what they’re good at and suggest how the faculty could be using them. But ironically, she stresses, checking them creates even more work.


Tanisha Jowsey, a social scientist, says the extra checks needed for AI-produced work can slow workflows. Credit: photo supplied by Tanisha Jowsey

She says that 95% of the time “it would be quicker for me to just do the thing myself than get the tool to do it and then have to check whether or not it’s done it right”. She also finds that it’s an ineffective tool for qualitative research: a view she expressed in a co-authored commentary article2 that was posted on the preprint platform SSRN.

Other downsides

Another major concern that researchers cite is ecological impact. The data centres that fuel genAI systems use a lot of energy and water. A study published in Patterns estimates that in 2025, the carbon footprint of AI systems globally could have been between 32.6 million and 79.7 million tonnes of carbon dioxide, and the water footprint could have been 312.5 billion to 764.6 billion litres3. To put it into perspective, that is comparable with the carbon footprint of New York City as a whole, the study says.

The potentially detrimental environmental consequences are one of many reasons Crowley abstains from using genAI tools. “Especially when my project is working on climate change, it didn’t feel right to use this tool that basically did the same as other tools, but used more energy to do so,” she says.

Other scientists also cited ethical concerns. Sustainability scientist Juan Rocha thinks that using AI tools helps large, private companies train their algorithms to be even better — and, in the long run, potentially displace human workers. “You are being used by AI, you are not using AI,” says Rocha, who works at the Stockholm Resilience Centre in Sweden. “We’re giving liberties and we’re making labour in the future obsolete, diminishing the importance of work at the university.”

Michaela Socolof, a psycholinguist at the Massachusetts Institute of Technology in Cambridge, also has concerns about how AI scrapes information. “The primary reason that I’m against generative AI is because it is trained on the work of authors who have not given their consent,” she says, seeing it as plagiarism. “This is just stealing the work of writers and artists.”


Sustainability scientist Juan Rocha considers the future implications of well-trained AI models. Credit: Jesper Ahlin Marceta, Swedish Young Academy

Writing complaints

AI-generated writing is an area Elizabeth Wolkovich also takes issue with. As a conservation scientist at the University of British Columbia in Vancouver, Canada, she says she is sick of reading AI-generated papers from students. Accordingly, she has decided to no longer chair thesis defences or join graduate-student committees in cases in which the students are using AI for writing. And in her lab, people can use genAI only for specific cases, such as spellchecking.

To her, outsourcing writing to genAI is a way to outsource the opportunity to develop new thinking. “I’m trying to train students how to communicate their research, especially as a climate-change ecologist,” she says. “You’re here in grad school to build up skills and you’re here working with me to learn it from me. It’s not clear to me that generative AI knows how to do it well.”
