
Responses to the AI grant flood must prioritize fairness as part of excellence



The agencies that disburse research funds must have clear rationales for rejecting grant proposals amid a surge in applications.Credit: Matt Cardy/Getty

Last month, the European Research Council (ERC) announced a policy change for some of its grants: it extended the period in which some unsuccessful applicants would not be able to reapply. The ERC, Europe’s premier research funder with more than €16 billion (US$19 billion) to disburse in 2021–27, was responding to a surge in applications, which appear to be driven partly by the use of artificial-intelligence tools.

Last week, however, the funder adjusted that change following an outcry from researchers. Many said that it was unfair, too sudden and too blunt, and that it would discourage bold proposals and leave researchers less able to respond to new advances. The council was right to rethink, and in the process it showed others how to listen to the concerns of the community. But the problem of how to handle AI in grant funding remains. Solutions must have fairness at their core.

As neuroscientist Geraint Rees and social scientist James Wilsdon wrote in Nature last week, funding bodies from Australia to the United Kingdom have seen a sharp rise in applications since 2022 (G. Rees and J. Wilsdon Nature 652, 1119–1121; 2026). This coincides with the advent of OpenAI’s ChatGPT, the first AI chatbot to be publicly available worldwide, and there is good evidence to suggest that many of these increases are AI-driven. Researchers are using AI tools not just to scan the literature or summarize studies, but to propose project ideas, draft the text of grant applications and refine applications on the basis of predictions of how grant-review panels might react.

Current guidelines from some of the world’s key research funders allow limited use of generative AI in grant applications. In such cases, the guidelines state that this use must be acknowledged and declared, and be done responsibly and in line with ethical and legal requirements. By contrast, those who peer review grant proposals for funders are prohibited from uploading them to generative-AI tools to produce reviews. This is partly to maintain confidentiality, and partly because funders want peer reviewers to exercise their own judgement rather than rely on a machine.

In practice, these policies are not always followed. If anything, the research world has ended up with a situation in which the increased ease of writing and reviewing grant applications has not been matched by improvements in ways to verify the degree of AI use.

Researchers are starting to show how such verification could happen. Pangram Labs, a firm in New York City, has developed tools to detect AI-generated text, which are being tested. Separately, researchers at Northwestern University in Evanston, Illinois, used a different method to compare evidence of AI use in grant applications to US federal agencies from two universities. A team led by computational social scientists Dashun Wang and Yifan Qian accessed publicly available grant abstracts from a database of US federally funded grants spanning 2021–25 (Y. Qian et al. Preprint at arXiv https://doi.org/q435; 2026). To spot AI-tool use, they got an AI model to rewrite the human-written abstracts from 2021 (before ChatGPT’s release), and then compared the human and AI versions of the same text. This enabled them to learn the telltale signs that distinguish the two types.

Radical rethinking

Rees and Wilsdon are among those arguing that the arrival of AI means grant-making systems need to be radically rethought. They make the point that, as the quality of AI-assisted applications increases over time, funders will find it harder to decide which proposals to fund and which to reject. Because funders will always have finite resources, many proposals will still need to be rejected. But without a clear rationale for those decisions, funders’ credibility with researchers will be at risk.

Various countermeasures have been suggested, including using lotteries to distribute grants and getting applicants to review each other’s proposals. Such models are seen as at least as fair as existing methods of distributing grants. There is also a case for rebalancing funding so that relatively more money goes to institutions as block grants, which they could spend according to their needs.

At the same time, it is crucial to evaluate the strengths and weaknesses of the different responses before deciding where to land. Rees and Wilsdon, for example, call for “shifting the emphasis of evaluation away from written proposals, and towards the principal investigator, their research team and their previous and ongoing research programmes”. That approach is likely to benefit individuals and institutions with strong track records, as well as researchers at research-intensive universities. As the authors themselves acknowledge, less-established researchers and laboratory groups, as well as research in emerging fields, would be at a disadvantage. Although they propose ring-fencing funding for these groups, a substantial focus on principal investigators risks reversing the benefits of increased diversity in science, and the quality of research questions that such diversity brings. As we have said before, it is unwise to invest so much authority in principal investigators. In a world of team science, power and responsibility need to be more equally distributed.

AI is transforming science. Funding bodies, along with researchers, publishers and policymakers, are all having to adapt, and quickly. Everyone involved should consider what steps they need to take to ensure that AI is used responsibly and transparently. That, not necessarily radical or disruptive change, is what is needed so that the funding system can continue to support proposals of the highest quality with fairness and equity.
