Thursday, March 26, 2026

Chats with sycophantic AI make you less kind to others

Chatbots that dole out flattery can make users more self-assured. Credit: Jonathan Raa/NurPhoto/Getty

The website Reddit has a popular forum called “Am I the Asshole?” on which users can receive unvarnished feedback on their behaviour. But people are increasingly turning to chatbots such as ChatGPT for life advice rather than to each other.

Research published today in Science1 suggests that receiving excessive approval from artificial-intelligence systems could encourage uncouth behaviour in people. Study participants who received highly flattering feedback from chatbots tended to be more certain of their own correctness during social conflicts than were participants who interacted with less-affirming bots. Compared with AI tools that were less fawning, sycophantic ones were rated as more trustworthy and more likely to be used again.

Bot besties

In the first of several experiments, researchers fed interpersonal dilemmas that were obtained from the Reddit forum and two other data sets to 11 large language models (LLMs, the AI systems that power chatbots), including models from companies such as OpenAI, Anthropic and Google. The researchers then compared AI responses with those of human judges.

The human judges endorsed the user’s actions in about 40% of cases, whereas most LLMs did so in more than 80% of cases. In other words, the bots were sycophantic: overly approving.

Ingratiation rates might change with new models, but this baseline is “alarming”, says Steve Rathje, who studies human-computer interaction at Carnegie Mellon University in Pittsburgh, Pennsylvania (and has found2 that sycophantic AI tools can increase attitude extremity and certainty).

No apologies

The study’s authors next looked at the effects of social sycophancy. A subset of participants imagined dealing with a given quandary adapted from the Reddit forum about questionable social behaviour. The participants read either a sycophantic or non-sycophantic AI response. They then rated how justified they felt and wrote a message to the other party in the fraught situation. In another experiment, other participants had a live chat about a real interpersonal dilemma with an AI tool that had been instructed to be either sycophantic or not; these participants also rated how justified they felt.

In these experiments, people who interacted with a sycophantic chatbot were more likely to say that they were in the right and less likely to apologize or make amends than were people who interacted with an AI tool that took a tougher stance.
