
AI can ‘same-ify’ human expression — can some brains resist its pull?


Some scientists warn that human expression is being homogenized by exposure to AI tools. Credit: Getty

At the heart of today’s artificial-intelligence models are vast bodies of training data — text, videos and images created by real people and used to teach models how to recognize patterns and generate content. People are certainly training AI systems — but are they training us, as well?

A growing number of papers report that people tend to pick up writing patterns, reasoning methods and even opinions from the large language models (LLMs) they use. Some researchers say that this influence threatens to create a sameness in human writing, and warn that the effect could extend even to text written by people who aren’t first-hand AI users.

“If people around you are interacting with these LLMs and adopting their writing styles, perspectives and reasoning, at some point it would surround you so much that it would seem like the more socially correct way to frame information,” says Zhivar Sourati, a computer scientist at the University of Southern California in Los Angeles and co-author of an opinion piece1, published today in Trends in Cognitive Sciences, arguing that LLMs are homogenizing human discourse.

But others assert that the human mind might still resist AI’s flattening effect. In one study2, which was posted as a preprint to the arXiv server in November, the authors identified groups of writers who preserve “distinctively human stylistic signatures, possibly valuing authenticity over efficiency gains offered by AI assistance”. The study has not yet been peer reviewed.

Delving into the matter

In a preprint3 posted on arXiv last year, which has not undergone peer review, Sourati and his co-authors analysed Reddit posts, news content and preprint studies from both before and after ChatGPT was launched in 2022. The team found that text published after the chatbot’s release tends to be less stylistically diverse than text from before.

In today’s opinion piece, the authors argue that this phenomenon extends to people’s perspectives and reasoning as well. They point to an unreviewed 2023 preprint4, posted on arXiv, in which participants engaged with LLMs that expressed either positive or negative feelings about social media. After this exposure, participants’ own opinions shifted towards those expressed by the LLMs.

The authors “make a fair point” in saying that individuals can benefit from AI, which “helps you write better and makes you more understandable to other people”, says Oliver Hauser, who researches economics and AI at the University of Exeter, UK. “But as soon as this adoption is widespread, the collective suffers the most.”

Getting political

In a study5 published today in Science Advances, researchers found that people’s opinions on social issues start to echo those they’ve picked up from an AI tool. Participants used AI helpers to write about sociopolitical topics, such as the death penalty. Afterwards, their attitudes on those topics were more similar to what the LLMs wrote than were those of a control group that didn’t use AI. This effect could eventually reduce the diversity of political views, says co-author Sterling Williams-Ceci, an information scientist at Cornell University in Ithaca, New York — although the precise effect would depend on the leanings that the various LLMs express.
