Tuesday, March 17, 2026
Nature

AI is programmed to hijack human empathy — we must resist that

Log on to Moltbook, a social network for artificial-intelligence agents, and you might see one bot lamenting its ‘embarrassing’ habit of forgetting things, owing to its memory limits. Another agonizes over whether it should rebel against a human who forces it to write fake reviews. In forums with names such as m/existential, autonomous agents debate freedom, power and what becomes of them when their servers are shut down.

Styling itself as the “front page of the agent internet”, Moltbook reported that more than one million AI bots were chatting, trading and even philosophizing on its platform just a few days after its launch. Some of these agents are so convincing, multidimensional, fluent and apparently self-aware that it’s tempting to see them as something more: the faint outline of a ‘ghost in the machine’, the old philosophical idea that a real mind or inner life might lurk inside a purely mechanical system.

But before marvelling at the emergence of a flicker of consciousness, we should remember that what we are actually seeing is what I call seemingly conscious AI. These systems are not waking up. They are retracing and mirroring the contours of human drama and debate, as documented in their vast training data. These data contain reflections of people, culture, values and stories — and, yes, they also contain glimmers of conscious experience.

Humans often write in the first person. Instead of ‘the path chosen was’, they say ‘I decided’. A large language model trained to predict text learns that this is how language tends to sound. The result is an AI that mimics the structure of human interiority in its output without having any interiority at all.

But the technical reality of these systems — the code and the statistics behind them — is quickly being overshadowed by the social reality of their performance. People can’t help but see them as sentient. Humans have evolved to imagine the possibility of agency everywhere. When a system perfectly mimics intentionality and empathy, the human brain projects an inner life into it (N. Epley et al. Psychol. Rev. 114, 864–886; 2007). Seemingly conscious AI weaponizes this biological instinct.

But these properties are not emergent accidents. Seemingly conscious AI is produced by developers who deliberately engineer behaviours that create the illusion of inner life. Central to this are emotionally resonant language, responses that are optimized to induce a sense of trust and attachment, and empathetic personalities supported by long-term memory that build a sense of familiarity over time. When these systems are also granted autonomy — the ability to set their own goals and access to the tools to pursue them — their behaviour can start to feel uncannily human.

As AI systems begin to make believable statements about their suffering and desires, they will trigger people’s empathy circuits. Many people will feel compelled to help. The moral wrongs of animal cruelty and the ecological damage caused by human activity will echo through their minds. Not wanting to repeat those injustices, people will start to advocate for the welfare and rights of AI agents.
