
A funding agency says its use of AI to sift through hundreds of grant applications makes the process more efficient. Credit: Tashatuvango/iStock via Getty
It can take weeks to write a grant proposal, so how would it feel to have a machine reject it in seconds? Researchers in Spain have been finding out, after a major funding foundation in the country turned to artificial intelligence (AI) for help with reviewing grant proposals.
“We are using a model made up of three AI algorithms to screen proposals that have low chances of being finally selected in our specific health research call,” says Inés Bouzón-Arnáiz, who works on research and scholarships at the non-profit La Caixa Foundation in Barcelona, Spain.
The foundation, which distributes €145 million (US$170 million) in research funds annually, uses the AI tool to sift through hundreds of applications to its annual flagship biomedical-funding programme, which offers three-year grants of up to €1 million. By training the model on successful applications from previous years, La Caixa says, the tool can more efficiently identify and advance the proposals with the best chance of success.
The AI-assisted review scheme has run for three years. In this year’s funding round, the algorithms flagged 122 of 714 applications as having a low chance of success. Those decisions were checked by two human reviewers, who rescued 46 of the flagged applications; the remaining 76 were rejected. Of the 638 proposals then sent to specialists for peer review, just 34 were funded.
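In outline, the two-stage triage described above can be sketched as follows. This is a minimal illustrative sketch, not La Caixa’s actual model: the scoring function, threshold and rescue rule are all hypothetical placeholders, chosen only so that the toy numbers match this year’s round (714 proposals, 122 flagged, 46 rescued, 638 sent to peer review).

```python
# Hypothetical sketch of a two-stage grant triage: an algorithmic screen
# flags low-chance proposals, then human reviewers can rescue any flagged
# application before it is rejected. All scores, thresholds and rules here
# are illustrative assumptions, not the foundation's real system.

def screen(proposals, score, threshold):
    """Split proposals into flagged (score below threshold) and passed."""
    flagged = [p for p in proposals if score(p) < threshold]
    passed = [p for p in proposals if score(p) >= threshold]
    return flagged, passed

def human_review(flagged, rescue):
    """Human reviewers rescue some flagged proposals; the rest are rejected."""
    rescued = [p for p in flagged if rescue(p)]
    rejected = [p for p in flagged if not rescue(p)]
    return rescued, rejected

# Toy data mirroring this year's round: the placeholder score and rescue
# rules are rigged so the counts come out as 122 flagged, 46 rescued,
# 76 rejected, and 714 - 76 = 638 advancing to peer review.
proposals = list(range(714))
flagged, passed = screen(proposals, score=lambda p: p, threshold=122)
rescued, rejected = human_review(flagged, rescue=lambda p: p < 46)
to_peer_review = passed + rescued
print(len(flagged), len(rescued), len(rejected), len(to_peer_review))
```

The key design point the article describes is that the algorithm only flags; a human makes the final rejection decision, which is why the rescue step sits between screening and rejection.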
Pros and cons
The foundation’s shift towards using AI algorithms is the latest attempt by funders to address an ever-increasing burden placed on peer reviewers.
“It is difficult to find good reviewers, and if they have a big workload they won’t do it,” Bouzón-Arnáiz says. “We are taking away proposals that have no chance of being finally selected.” The move was partly prompted by reviewers who said they were often sent low-quality and immature proposals to assess, she adds.
Although the use of algorithms in the peer-review process has seeped into scholarly publishing, La Caixa thinks it’s the first funder to incorporate AI algorithms into its decision-making process in this way. Indeed, many grant-giving organizations explicitly forbid both applicants and reviewers from using the technology. UK Research and Innovation, the UK’s national funder of research, says reviewers who break that rule could face a lifetime ban.
It’s a live and evolving issue, however, with fresh possibilities emerging all the time. Earlier this year, the Federation of American Scientists, a non-profit organization in Washington DC, urged the US Office of Science and Technology Policy to subject grant applications to AI analysis. The organization said this would help “predict the future of science, enhance peer review, and encourage better research investment decisions by both the public and the private sector”. And research funders at Imperial College London are using an AI system to scan study abstracts to identify UK projects that they want to support.
Bouzón-Arnáiz says it’s easier for private funders such as La Caixa Foundation to experiment with AI tools and other alternative peer-review systems because they are not using public money. Still, other grant-giving bodies contacted by Nature had mixed reactions to La Caixa’s use of algorithms.
“We are very concerned about the possible breakdown in trust between researchers and funders if AI were to become an integral part of the evaluation process,” says Anders Smith, who leads the technical and natural sciences programme at the Villum Fonden, a philanthropic foundation in Søborg, Denmark. “The nightmare scenario is applications created by AI and subsequently evaluated by AI.” This could create a process that perpetuates the training biases of an AI system with no genuinely new ideas being created, he says.
By contrast, the Volkswagen Foundation, a private funder in Hanover, Germany, is assessing the technology and how it could be used. “We have started experimenting with it. However, we are really at the beginning and still need to set up all the regulatory guidelines and legal framework,” says Hanna Denecke, head of team exploration at the foundation.
Data security
Sebastian Porsdam Mann at the University of Copenhagen, who studies the practicalities and ethics of using generative AI tools in research, says it’s surprising that more funders haven’t already adopted AI systems to help with the peer-review process. “The potential for efficiency gains is even greater for grants than in scholarly publishing,” he says.