
A return to pen-and-paper and in-person surveys might be the only way to protect against sophisticated AI bots, some researchers argue. Credit: Ronstik/Getty
A tool that has helped to transform modern social-science research is under threat from artificial intelligence. Researchers are warning that a wave of chatbots impersonating people could corrupt or invalidate the online surveys that power thousands of studies every year, and they are urging the companies that run these surveys to do more to address the problem.
Since the early 2000s, online surveys that allow people to participate in research studies from the comfort of their own desktops have been used in fields such as ecology, psychology, economics and politics. They have become “essential infrastructure” of the social sciences, says Felix Chopra, a behavioural economist at the Frankfurt School of Finance and Management in Germany, who uses such surveys in his research.
People get paid to participate in online surveys – anywhere from pennies to US$100 or more per hour – and an industry has grown up to administer the surveys and manage vast pools of potential respondents. Between 2015 and 2024, the use of online surveys in published studies increased four-fold. With that explosion came people trying to game the system, from simply giving fake answers to deploying bots that impersonate individuals, and the industry has had to build in checks and tools to root out fraud.
The problem with survey bots
In November, Sean Westwood, a political scientist at Dartmouth College in Hanover, New Hampshire, demonstrated an AI chatbot that can reliably impersonate a human participant and evade most known mechanisms built into surveys to detect fake responses¹.
Westwood used OpenAI’s o4-mini, a reasoning model, to build a bot and set it loose on a survey that he designed specifically to test it. In 6,700 tests, the bot passed standard ‘attention check’ questions (which are designed to catch inattentive humans and simple bots) 99.8% of the time.
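Attention checks of this kind are typically simple instructed-response items: the question itself tells the respondent which answer to pick, so anyone reading carefully passes. A minimal sketch of how such a check might be scored – the item wording, field names and pass rule here are hypothetical, not taken from the paper:

```python
# Illustrative only: a generic instructed-response attention check.
# The survey item reads: "To show you are paying attention,
# select 'Somewhat disagree' for this question."

def passes_attention_check(response: dict) -> bool:
    """Return True if the respondent followed the embedded instruction."""
    return response.get("attention_item") == "Somewhat disagree"

responses = [
    {"id": "r1", "attention_item": "Somewhat disagree"},  # attentive human, or a capable bot
    {"id": "r2", "attention_item": "Strongly agree"},     # inattentive respondent
]

flagged = [r["id"] for r in responses if not passes_attention_check(r)]
print(flagged)  # ['r2']
```

A check this simple filters out careless humans and crude scripted bots, but – as Westwood’s results show – it does nothing against a language model that reads and follows the instruction.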
To help it evade detection, the bot could be programmed with a persona and use reasoning in line with that persona. For example, when it was programmed to answer as an 88-year-old woman and was asked about time spent at children’s sporting events, the bot said that it spent little time at them because its children had grown up. And it remembered its answers to previous questions.
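The mechanism described here – a fixed persona plus a running memory of earlier answers – maps onto standard chat-API usage. A minimal sketch, assuming the OpenAI Python client; the prompts, model name and exact calls are illustrative assumptions, not Westwood’s actual code:

```python
# Sketch of the general persona-plus-memory mechanism described above.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are answering a survey as an 88-year-old retired woman. "
    "Stay in character and keep your answers consistent with one another."
)
history = [{"role": "system", "content": persona}]

def answer(question: str) -> str:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="o4-mini", messages=history)
    text = reply.choices[0].message.content
    # Storing the model's own answer means later questions are answered
    # with knowledge of earlier ones, mimicking a consistent respondent.
    history.append({"role": "assistant", "content": text})
    return text

print(answer("How much time do you spend at children's sporting events?"))
print(answer("Earlier you mentioned your family; do you have grandchildren?"))
```

Because every question and answer stays in the message history, the bot cannot be tripped up by the classic tactic of asking the same thing twice in different words.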
Westwood’s bot also breezed past questions that are commonly placed in surveys to trip up bots by eliciting capabilities that most humans lack. The bot declined to translate a sentence into Mandarin, for example, and pretended that it could not quote the US Constitution verbatim.
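Checks such as these work in reverse: a correct, fluent answer is evidence of a machine rather than a person. A minimal detection-side sketch for the Mandarin-translation item, with a hypothetical flagging rule:

```python
import re

def translation_check_flags_bot(answer: str) -> bool:
    """Flag a response to 'Please translate this sentence into Mandarin'.

    Most crowd-sourced respondents cannot comply, so a response containing
    CJK characters is treated as evidence of a machine. The rule is an
    illustrative assumption, not a check from the paper.
    """
    return bool(re.search(r"[\u4e00-\u9fff]", answer))

print(translation_check_flags_bot("Sorry, I don't speak Mandarin."))  # False
print(translation_check_flags_bot("这是一个句子。"))                    # True
```

The catch, as Westwood showed, is that a bot instructed to feign human limitations simply declines the task and sails through.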
The ease with which it evaded detection led Constantine Papas, a blogger and user-experience researcher at a big technology firm based in New York, to declare a “scientific validity crisis”. He wrote that “the foundational assumption of survey research (that a coherent response is a human response) is no longer tenable”.
Ryan Kennedy, a political scientist at the Ohio State University in Columbus, agrees that the paper exposes serious flaws in the use of online surveys, but he stops short of calling it a full-blown crisis. AI tools are just the latest front in an ongoing arms race, he says: in the past, individuals might have pretended to be someone else to collect payment for participating in a survey; now, automation is on the rise.



