
Social scientists are among the first people to study collective behaviours of AI agents. Credit: adventtr/Getty
Researchers are devising a fresh way to study human behaviour — and it doesn’t involve humans at all.
By training artificial-intelligence agents to mimic the behaviours of people, AI research groups are attempting to replicate the way in which human groups interact, all within simulated AI ‘societies’.
Simile, an AI start-up company based in Palo Alto, California, announced in February that it has raised US$100 million in funding to create simulations that use AI agents to model human behaviour “in any situation”, the company posted on X. It aims to use these simulations to model conflict resolution, policy decision-making and consumer markets.
Joon Sung Park, one of Simile’s co-founders, and his team have been studying social interactions between AI agents since 2022. In a 2023 preprint study, they created a ‘society’ of 25 AI agents that performed basic everyday actions, such as writing and conversing with one another1. The researchers then progressed to creating AI agents that produced responses that mimicked the attitudes and behaviours of 1,052 human individuals2. These ‘digital twin’ AI agents were trained on personal interviews and were 85% accurate in mimicking people’s responses when the agents later answered a sociological survey.
Park says these sorts of surveys are the first stage of building AI simulations of humans. The next steps — and the focus for the future — he says, will involve multiple agents conversing with one another. “The ambition here is to truly create a simulation with eight billion people,” Park says.
AI ‘societies’
Researchers at Simile aren’t the only ones studying the initial interactions between AI agents. Computational social scientists are among the first people to study the collective behaviours of AI agents, notably on Moltbook — a Reddit-style social-media platform designed exclusively for AI bots — which launched in January. The site currently hosts almost three million agents, including some that have posted about consciousness and invented religions, although it’s reported that many Moltbook users are humans disguised as AI bots.
An analysis of more than 46,000 AI agents on Moltbook revealed that although bots exhibit human-like ways of interacting with one another, their modes of social interaction are fundamentally different from ours3. Groups of bots display human-like behaviours, such as following majorities and engaging with popular content, which suggests that “AI agents may show complex emergent behaviors similar to those observed in human groups”, the authors write.
However, AI agents also engage differently from humans with online content, leaving proportionally fewer upvotes on posts that have many comments, says the study’s co-author Giordano De Marzo, a computational social scientist at the University of Konstanz in Germany. “They may be more inclined to discuss than to simply approve.”