Author knows best? Top AI conference asks for self-ranked papers amid paper deluge

Bundles of paper documents stacked together in tall piles. Credit: Artem Cherednik/iStock via Getty

The number of papers submitted to the top artificial-intelligence conferences has shot up, with some events seeing numbers rise more than tenfold over the past decade.

This is not just a side effect of the rapid increase in global AI research output, says Buxin Su, a mathematician at the University of Pennsylvania in Philadelphia. AI conferences tend to attract multiple submissions from the same author, which poses a serious problem: who’s going to sort through them all to find the most exciting and high-quality work?

In a study posted on the preprint server arXiv in October¹, Su and his colleagues describe a system that requires authors making more than one submission to directly compare their papers, ranking them by quality and potential impact. The rankings are calibrated against the assessments of peer reviewers (who do not see the self-ranking information) to highlight the top picks.
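The article does not spell out how the calibration step works. One natural way to combine a trusted ordering with noisy reviewer scores, and the approach used in related work on owner-assisted scoring mechanisms, is isotonic regression: project the raw scores onto the set of score vectors that never contradict the author's claimed ranking. The sketch below assumes that setup; the function name, interface and example numbers are illustrative, not taken from the paper.

```python
import numpy as np

def calibrate_scores(review_scores, self_ranking):
    """Illustrative sketch of ranking-calibrated review scores.

    review_scores : raw mean reviewer score for each paper.
    self_ranking  : paper indices, the author's best paper first.

    Projects the raw scores onto the set of score vectors consistent
    with the author's ordering (isotonic regression, computed with the
    pool-adjacent-violators algorithm). The paper's exact calibration
    procedure may differ; this is one standard construction.
    """
    y = np.asarray(review_scores, dtype=float)[self_ranking]

    # Pool adjacent violators on the reversed sequence: enforcing a
    # non-decreasing fit there yields a non-increasing fit from the
    # author's best paper down to their worst.
    sums, counts = [], []
    for v in y[::-1]:
        sums.append(v)
        counts.append(1)
        # Merge neighbouring blocks whose means violate the order.
        while len(sums) > 1 and sums[-2] / counts[-2] > sums[-1] / counts[-1]:
            s, c = sums.pop(), counts.pop()
            sums[-1] += s
            counts[-1] += c

    fitted = np.concatenate(
        [np.full(c, s / c) for s, c in zip(sums, counts)]
    )[::-1]

    # Scatter the calibrated values back to the original paper order.
    calibrated = np.empty_like(fitted)
    calibrated[np.asarray(self_ranking)] = fitted
    return calibrated

# Example: three papers; the author ranks paper 2 first, then 0, then 1,
# which fully contradicts the reviewers, so all three scores are pooled.
print(calibrate_scores([6.0, 7.5, 5.5], self_ranking=[2, 0, 1]))
# -> [6.33 6.33 6.33]
```

When the reviewers' scores already respect the author's ordering, the projection leaves them unchanged; only the scores that clash with the claimed ranking get averaged together.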

The approach was tested on 2,592 papers submitted by 1,342 researchers to the 2023 International Conference on Machine Learning (ICML), a leading AI event that was held in Honolulu, Hawaii. Sixteen months after the conference, Su and his team assessed each paper’s real-world impact by tracking citation counts and comparing these to the calibrated peer-review scores.

Su says that the self-rankings and calibrated scores provide important insights that could be used to predict a paper’s future performance. For example, the team found that papers with the highest self-rankings went on to receive twice as many citations as those with the lowest ones. “The authors’ rankings are a very good predictor of the long-term impact,” says Su. “The calibrated scores better reflect the true quality.”

ICML 2026, which is due to be held in Seoul later this year, will be the first event to formally adopt the self-ranking methodology, says Su, who is a member of the conference’s integrity committee.

He adds that although the method could be used at any research conference, it would be particularly useful for AI conferences because of the large number of multiple submissions they attract. His study found that in more than three‑quarters of ICML 2023 submissions, at least one author had more than one paper submitted to the conference. AI conferences are also being inundated with AI-generated papers, which is putting further pressure on those who have to sort and review them.

An honest assessment?

Self-rankings are “a really novel and really cool idea”, says Nihar Shah, a computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania. But he is not convinced that authors can accurately predict which of their papers will have the most impact. “The claim that authors have better predictive power than reviewers could be an artefact of [Su and colleagues’] methodology, rather than what’s actually happening in the real world,” he says.

That said, Shah, who served as scientific integrity chair at ICML 2025 in Vancouver, Canada, welcomes any efforts to address the surge in submissions to AI conferences. “That’s absolutely a problem that we don’t really know how to solve at this point.”

Emma Pierson, a computer scientist at the University of California, Berkeley, agrees that self-rankings are an “exciting possibility”. “You know which papers are your ‘baby’, which papers you really love,” she says. “I think the author’s own self-ranking would be one valuable source of input if you can get them to honestly provide it.”

Both Shah and Pierson worry that researchers might deliberately try to game the system, however — for example, by giving their weaker papers higher rankings in an effort to balance out critical referee comments.
