
Many funding proposals submitted to the US National Institutes of Health and National Science Foundation are written with the assistance of artificial-intelligence programs. Credit: Getty
Scientists are increasingly turning to artificial-intelligence systems for help drafting the grant proposals that fund their careers, but preliminary data1 indicate that these tools might be pulling the focus of research towards safer, less-innovative ideas.
These data provide evidence that AI-assisted proposals submitted to the US National Institutes of Health (NIH) are consistently less distinct from previous research than ones written without the use of AI — and are also slightly more likely to be funded.
These results indicate that, if the trend continues, “we could be on a path towards homogeneity”, says Misha Teplitskiy, a science, technology and policy researcher at the University of Michigan in Ann Arbor. “This paper provides some suggestive evidence of that happening.”
The analysis, which has not yet been peer reviewed, was posted on the preprint server arXiv on 21 January.
Artificial editors
Since large language models (LLMs) became mainstream in 2023, most scientists have tried using AI tools to assist them with researching their ideas and writing and editing their manuscripts, according to a 2024 survey2. Dashun Wang and Yifan Qian, computational social scientists at Northwestern University in Evanston, Illinois, were curious whether this rapid and widespread adoption of AI tools has changed the type of science that is funded.
Typically, grant applications, particularly those that are unsuccessful, are not easily accessible, making it difficult to answer this question. But Wang and Qian gained access to and analysed thousands of grant proposals that were submitted to the US National Science Foundation (NSF) and the NIH by researchers at two large US universities between 2021 and 2025. They also collected data from publicly available grant abstracts in a database of federally funded grants.
To spot AI-tool use, the researchers took thousands of these public abstracts from 2021 — written before the rise of OpenAI’s ChatGPT chatbot — and asked an AI model to rewrite them. By comparing the human and AI versions of the same text, the team learnt the telltale signs that separate human writing from machine-generated text. They then used this knowledge to analyse grant applications at the two universities and give each one a score that was representative of how much of the writing contained AI-associated patterns. A higher score meant that a larger portion of the proposal showed signs of being drafted or edited by AI tools.
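The detection approach described above can be sketched in miniature. This is a toy illustration only, not the study's actual method: it assumes a simple word-frequency comparison between paired human and AI versions of the same abstracts, whereas the real analysis presumably learnt much richer stylistic features. All function names and the frequency-ratio threshold are invented for this sketch.

```python
from collections import Counter

def ai_marker_words(human_texts, ai_texts, ratio=2.0):
    """Toy stand-in for the 'telltale signs' step: collect words that are
    at least `ratio` times more frequent (with add-one smoothing) in the
    AI rewrites than in the human originals of the same abstracts."""
    h = Counter(w for t in human_texts for w in t.lower().split())
    a = Counter(w for t in ai_texts for w in t.lower().split())
    h_total = sum(h.values()) + 1
    a_total = sum(a.values()) + 1
    return {w for w, c in a.items()
            if (c + 1) / a_total >= ratio * (h[w] + 1) / h_total}

def ai_score(text, markers):
    """Score a new proposal: the fraction of its words that fall in the
    AI-associated vocabulary. A higher score suggests more AI-style text."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in markers for w in words) / len(words)
```

In practice a classifier of this kind would be fitted on many thousands of paired abstracts, and its scores calibrated against texts of known provenance, before being applied to unseen grant applications.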
AI avoidance
The use of AI for writing grant proposals spiked in early 2023, following the public release of ChatGPT, the researchers found. But it wasn't adopted by everyone: the study revealed a group of scientists who mostly avoided AI when preparing their proposals. Those who did use AI tools generated roughly 10–15% of their text with them.
The outcomes for AI-assisted proposals differed by funding agency. At the NIH, high AI involvement was linked to a 4% jump in the likelihood of a successful application, compared with low AI involvement, and to a 5% increase in the number of papers published about the funded work. This jump in papers was mostly made up of articles that were not highly cited. At the NSF, however, the authors detected no benefit to using AI tools.
“This link between LLM use and NIH award rates is striking and substantial,” Teplitskiy says.


