What makes a great university? Academics say they value seeing their efforts recognized, witnessing the impact of their work on students and society, and working in a collegial environment. In my studies as a research fellow in educational psychology, I’ve found that students expect more than just a sound education — they want friendship with other students, emotional growth and skills that will help them to secure jobs. None of these factors can be adequately assessed using the dominant measure of success in academia: publications.
Publishing well-cited papers in high-impact journals is central to researchers’ career progression. This results in intense pressure to perform well in this one area, often at the expense of other scholarly activities. Although citation metrics can indicate expertise, they can also amplify power imbalances, because highly cited scholars attract ever-more citations and influence.
Efforts to change evaluation systems have gained momentum over the past decade. The San Francisco Declaration on Research Assessment — signed by almost 25,000 scientists and organizations — and the Leiden Manifesto for research metrics both advocate for research to be evaluated beyond impact factors. A range of metrics, from policy citations to social-media engagement, can be tracked using online tools. Yet the citation-based h-index remains one of the predominant ways to evaluate academic impact.
In my view, we could make academic life more rewarding by systematically scoring the underappreciated behaviours and conduct that researchers value. The product might be called the ‘G+ index’ — G representing generosity, giving and other ‘good things’ in academia. This index could either stand alone or complement existing citation measures.
For senior academics, mentorship could be judged in terms of the number of first- or last-author papers that are co-authored by early-career researchers, signalling investment in developing others’ careers. To account for varying team sizes, formal acknowledgements of trainees in publications could also be factored in. Early-career researchers could be exempt until a set career stage.
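A mentorship metric along these lines could be computed directly from authorship records. The sketch below is a minimal illustration, not a proposal from the article: the `Paper` structure, the early-career flags and the function name are all hypothetical, and a real system would need reliable career-stage data.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    authors: list[str]       # ordered author list
    early_career: set[str]   # authors flagged as early-career researchers

def mentorship_score(academic: str, papers: list[Paper]) -> int:
    """Count papers co-authored by `academic` whose first or last
    author is an early-career researcher."""
    score = 0
    for p in papers:
        if academic not in p.authors:
            continue
        if p.authors[0] in p.early_career or p.authors[-1] in p.early_career:
            score += 1
    return score
```

Formal acknowledgements of trainees could be folded in the same way, as an extra per-paper count, to avoid penalizing small teams.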
Collaboration scores could measure co-authorships across institutions, countries and disciplines. They could include co-authorship with policymakers, community organizations and industry partners, and with individuals with lived experiences relevant to the research. Care would be needed to avoid disadvantaging scholars with limited resources, who might be less able to make connections.
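One simple way to operationalize such a collaboration score is to count the distinct institutions, countries and disciplines represented across a researcher's co-authored papers. Again, this is only an assumed sketch for illustration — the field names and the flat summation are choices of this example, and any real scheme would need normalization so that well-resourced scholars are not automatically favoured.

```python
def collaboration_score(papers: list[dict]) -> int:
    """Sum of distinct institutions, countries and disciplines
    represented across a researcher's co-authored papers.

    Each paper is a dict mapping 'institutions', 'countries' and
    'disciplines' to sets describing that paper's author team.
    """
    keys = ("institutions", "countries", "disciplines")
    return sum(len(set().union(*(p[k] for p in papers))) for k in keys)
```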
The index could reflect good science practices — for instance, by recognizing replication studies. And public reach and openness could be measured through open-science practices such as data-sharing and pre-registering protocols, and through communication via podcasts, blogs and media contributions.
In some fields, publishing in practice-based journals could be rewarded to ensure that research reaches professional communities. Similarly, editorial and peer-review work for these journals — and work for scientific societies, regardless of prestige — could be valued.
Ultimately, the full list of metrics should be decided by the academic community, reflecting a collective vision for a healthier workplace. Funding bodies and institutions could then incorporate the index into their evaluation criteria, providing both the infrastructure and incentives for adoption. Many of the metrics are already tracked in existing data repositories, but improved systems to collect and amalgamate the data are needed.