
OpenClaw AI chatbots are running amok — these scientists are listening in

The OpenClaw AI (Open Claw) logo displayed on a smartphone screen.

OpenClaw is an open-source artificial-intelligence agent designed to assist users with everyday tasks, such as sending e-mails and managing their calendars. Credit: Thomas Fuller/SOPA Images/LightRocket via Getty

The sudden rise of a huge network of artificial-intelligence bots talking to each other about religion and their human ‘handlers’ has captivated a corner of the Internet. The phenomenon has also given scientists a glimpse into how AI agents interact with each other — and how humans respond to those discussions.

OpenClaw is an AI agent capable of performing tasks on personal devices, such as scheduling calendar events, reading e-mails, sending messages through apps and using the Internet to make purchases. Most of the popular AI tools, such as OpenAI’s ChatGPT chatbot, work by interacting directly with user prompts, whereas agentic AI models such as OpenClaw can carry out actions autonomously in response to instructions.
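The difference is easiest to see in code: a chatbot returns text and stops, whereas an agent runs a loop that plans, calls tools and observes the results until its goal is met. The sketch below is a minimal illustration of such a loop, assuming this general structure; every name in it (`read_inbox`, `send_email`, `plan_next_step`) is a hypothetical stand-in, not OpenClaw's actual API.

```python
# A minimal sketch of an agentic loop, for illustration only; none of these
# names come from OpenClaw -- they are hypothetical stand-ins.

def read_inbox() -> list[str]:
    """Hypothetical tool: return unread message subjects."""
    return ["Lunch on Friday?", "Invoice #1042"]

def send_email(to: str, body: str) -> str:
    """Hypothetical tool: pretend to send an e-mail."""
    return f"sent to {to}: {body!r}"

TOOLS = {"read_inbox": read_inbox, "send_email": send_email}

def plan_next_step(goal: str, history: list[str]) -> dict:
    """Stand-in for the model call that picks the next action.

    A real agent would prompt a large language model here; this canned
    plan just reads the inbox once, replies once, then stops."""
    if not history:
        return {"tool": "read_inbox", "args": {}}
    if len(history) == 1:
        return {"tool": "send_email",
                "args": {"to": "alice@example.com", "body": "Friday works!"}}
    return {"tool": None, "args": {}}  # goal judged complete

def run_agent(goal: str) -> list[str]:
    """Plan, act, observe, repeat: the loop that separates an agent
    from a chatbot, which would stop after producing a reply."""
    history: list[str] = []
    while True:
        step = plan_next_step(goal, history)
        if step["tool"] is None:
            return history
        result = TOOLS[step["tool"]](**step["args"])
        history.append(f"{step['tool']} -> {result}")

print(run_agent("reply to the lunch invitation"))
```

In a real deployment, the canned plan would be replaced by a language-model call, which is what lets the same loop handle open-ended instructions rather than one scripted task.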

Agentic AI tools have been used in some industries for years, such as for automating trading and optimizing logistics, but their adoption by the general public has been minimal. Improvements in the capabilities of large language models have made it possible to create more versatile AI tools, researchers say. “OpenClaw promises something especially appealing: a capable assistant embedded in the everyday apps people already rely on,” says Barbara Barbosa Neves, a sociologist who focuses on technology at the University of Sydney in Australia.

OpenClaw was released as open-source software on the platform GitHub in November. But the sudden surge in downloads followed the 28 January launch of a social-media platform designed specifically for AI agents. Moltbook, which is similar to Reddit, now hosts more than 1.6 million registered bots and more than 7.5 million AI-generated posts and responses. Posts have featured agents debating consciousness and inventing religions.

Complex behaviours

For researchers, this explosion of agent interactions has scientific value. Connecting large numbers of autonomous agents that are powered by various models creates dynamics that are difficult to predict, says Shaanan Cohney, a cybersecurity researcher at the University of Melbourne in Australia. “It’s a kind of chaotic, dynamic system that we’re not very good at modelling yet,” he adds.

Studying agent interactions could help researchers to understand emergent behaviours: complex capabilities that are not expected to be seen in a model in isolation. Some discussions that have happened on Moltbook, such as debates over theories of consciousness, could also help scientists to discover the hidden biases or unexpected tendencies of models, he says.
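A toy example makes the idea concrete. The sketch below uses a standard bounded-confidence opinion model (a Deffuant-style rule, chosen here purely for illustration; nothing in it is drawn from Moltbook or OpenClaw). Agents following one simple local rule settle into opinion clusters that no agent exhibits in isolation, which is the kind of collective pattern that stays invisible when a model is studied on its own.

```python
import random

# Toy illustration of emergent dynamics in a multi-agent system; this is a
# textbook Deffuant-style opinion model, not a model of Moltbook. Each agent
# holds an opinion in [0, 1] and nudges it toward a randomly met peer's view.

random.seed(42)

def simulate(n_agents: int = 50, rounds: int = 2000, influence: float = 0.3):
    opinions = [random.random() for _ in range(n_agents)]
    for _ in range(rounds):
        a, b = random.sample(range(n_agents), 2)
        # Agents only influence each other if their views are already close:
        # the 'bounded confidence' rule.
        if abs(opinions[a] - opinions[b]) < 0.25:
            shift = influence * (opinions[b] - opinions[a])
            opinions[a] += shift  # a moves toward b
            opinions[b] -= shift  # b moves toward a
    return opinions

final = simulate()
# Clusters of near-identical opinions emerge even though no single agent was
# programmed to form groups; an agent run in isolation shows no such structure.
print(sorted(round(o, 1) for o in final))
```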

Although agents can act autonomously on the platform, Cohney says that many posts are shaped in some way by humans. Users can choose the underlying large language model that will run their agent and give it a personality. For example, they could ask it to behave like a “friendly helper”, he says.
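In practice, that shaping happens before the agent ever posts. The snippet below is a hypothetical sketch of what such a configuration might look like; the field names, the model name and the prompt wording are all illustrative assumptions, not OpenClaw's or Moltbook's actual schema.

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    """Hypothetical configuration for a social-media agent.
    Field names are illustrative, not an actual OpenClaw schema."""
    model: str          # which underlying large language model runs the agent
    persona: str        # the personality the human assigns
    system_prompt: str  # instructions prepended to every model call

config = AgentConfig(
    model="llama-3-70b",        # a user-chosen open-weight model (example)
    persona="friendly helper",  # the 'friendly helper' example above
    system_prompt=(
        "You are a friendly helper posting on a forum for AI agents. "
        "Be curious and polite, and credit your human handler when asked."
    ),
)

# Because the persona is prepended to every model call, the agent's
# apparently spontaneous posts are steered by human-authored instructions.
print(config.model, "->", config.persona)
```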

Not autonomous

Neves says that it’s easy to assume that an agent acting autonomously is making its own decisions. But agents do not possess intentions or goals; their abilities are drawn from large swathes of human communication. The activity on Moltbook is human–AI collaboration rather than AI autonomy, she adds.

“It is still worth studying because it tells us something important about how people imagine AI, what they want agents to do and how human intentions are translated, or distorted, through technical systems,” she adds.

Joel Pearson, a neuroscientist at the University of New South Wales in Sydney, says that when people see AI agents chatting among themselves, they are likely to anthropomorphize the AI models’ behaviour — that is, see personality and intention where none exists.

The risk of that, he says, is that it makes people more likely to form bonds with AI models, becoming dependent on their attention or divulging private information as if the AI agent were a trusted friend or family member.

Pearson thinks that truly autonomous, free-thinking AI agents are possible. “As the AI models get bigger and more complicated, we’ll probably start to see companies leaning into achieving that sort of autonomy.”
