
This AI ‘thinks’ like a human — after training on 160 psychology studies


Researchers have developed an AI tool that simulates human decision-making across a variety of tasks. Credit: Jezperklauzen/Getty

An innovative artificial-intelligence (AI) system can predict the decisions people will make in a wide variety of situations — often outperforming classical theories used in psychology to describe human choices.

The researchers who developed the system, called Centaur, fine-tuned a large language model (LLM) using a massive set of data from 160 psychology experiments, in which 60,000 people made more than 10 million choices across many tasks.

Most computer models and cognitive theories stick to a single task. For instance, Google DeepMind’s AlphaGo can only play the strategy game Go, and prospect theory can only predict how a person will choose between potential losses and gains. Centaur, by contrast, can simulate human behaviour across a spectrum of tasks — including gambling, memory games and problem-solving. During testing, it was even able to predict people’s choices in tasks it had not been trained on. The development of Centaur is described in a paper published today in Nature1.

The team that created the system thinks that it could one day become a valuable tool in cognitive science. “You can basically run experimental sessions in silico instead of running them on actual human participants,” says study co-author Marcel Binz, a cognitive scientist at the Helmholtz Institute for Human-Centered AI in Munich, Germany. That could be useful when conventional studies would be too slow, he says, or when it’s difficult to recruit children or people with psychiatric conditions.

“Building theories in cognitive science is very difficult,” says Giosuè Baggio, a psycholinguist at the Norwegian University of Science and Technology in Trondheim, Norway. “It’s exciting to see what we can come up with, with help from machines.”

Stiff competition

Scientists have long struggled to simulate broad aspects of human behaviour with task-specific models, because such tools cannot generalize across a multitude of tasks.

Binz and his colleagues wanted to overcome this limitation. They spent five days fine-tuning Llama — an LLM released by the technology firm Meta in Menlo Park, California — with a huge behavioural data set called ‘Psych 101’.
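Fine-tuning a language model on behavioural data implies first expressing experiments as text the model can learn to complete. One plausible approach is to transcribe each trial into natural language and mark the participant's choice so the model is trained to predict it. The sketch below illustrates that idea only; the slot-machine framing, option labels and `<<...>>` choice marker are hypothetical, not the actual Psych 101 format.

```python
def transcribe_trial(options, choice):
    """Render one two-option trial as a natural-language line.

    The phrasing and the <<...>> marker around the chosen option are
    illustrative assumptions, not the real data-set format.
    """
    prompt = "You see two slot machines, " + " and ".join(options) + ". "
    prompt += "You press <<" + choice + ">>."
    return prompt

def transcribe_session(trials):
    """Concatenate a participant's trials into one training example."""
    return "\n".join(transcribe_trial(t["options"], t["choice"])
                     for t in trials)

session = [
    {"options": ["F", "J"], "choice": "J"},
    {"options": ["F", "J"], "choice": "F"},
]
print(transcribe_session(session))
```

Sessions transcribed this way can be fed to a standard fine-tuning pipeline, with the loss computed on the choice tokens so the model learns the distribution of human decisions rather than the task instructions.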

The researchers tuned the model to predict not just an average behaviour for a given task, but the range of typical behaviours in the population. They then tested how well it predicted the behaviour of participants who were not included in the training data set. In all but 1 of 32 tasks, Centaur outperformed Llama and 14 cognitive and statistical models in predicting what choices participants would make. That one outlier was a task in which participants judged whether sentences were grammatically correct.
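A comparison like this typically comes down to a held-out predictive score: how much probability does each model assign to the choices that unseen participants actually made? A minimal sketch of that metric, with entirely invented numbers, might look as follows.

```python
import math

def nll(probs):
    """Mean negative log-likelihood a model assigns to observed choices.

    probs[i] is the model's predicted probability of the choice the
    held-out participant actually made on trial i; lower is better.
    """
    return -sum(math.log(p) for p in probs) / len(probs)

# Hypothetical per-trial probabilities that two candidate models assign
# to the same held-out participant's actual choices.
model_a = [0.8, 0.7, 0.9, 0.6]   # confident, mostly correct predictions
model_b = [0.5, 0.5, 0.5, 0.5]   # coin-flip baseline

print(nll(model_a), nll(model_b))  # the lower score wins the comparison
```

Averaging such scores per task over all held-out participants gives one number per model per task, which is the kind of tally behind the "all but 1 of 32 tasks" result.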
