Since the release of the chatbot ChatGPT in late 2022, there has been frantic debate at universities about artificial intelligence (AI). These conversations have centred on undergraduate teaching — how to prevent cheating and how to use AI to improve learning. But a quieter, deeper disruption is unfolding in research, the other core activity of universities.
A doctoral education has long been seen as the pinnacle of academic training, an apprenticeship in original thinking, critical analysis and independent enquiry. However, that model is now under pressure. AI is not just another research tool; it is redefining what research is, how it is done and what counts as an original contribution.
Universities are mostly unprepared for the scale of disruption, with few having comprehensive governance strategies. Many academics remain focused on the failings of early generative AI tools, such as hallucinations (confidently stated but false information), inconsistencies and superficial responses. But AI models that were clumsy in 2023 are becoming increasingly fluent and accurate.
AI tools can already draft literature reviews, write sophisticated code with human guidance and even generate hypotheses when provided with data sets. ‘Agentic’ AI systems that can set their own sub-goals, coordinate tasks and learn from feedback represent another leap forwards. If the current trajectory continues, we’re fast approaching a moment when much of the conventional PhD workflow can be completed, or at least be heavily supported, by machines.
Unanswered questions
This shift poses challenges for educators. What constitutes an original contribution becomes unclear when AI tools produce literature reviews, acquire and analyse data, and draft thesis chapters. Students might need to pivot from executing research tasks to framing questions and interrogating AI outputs.
To explore what the near future of research training might look like, I conducted a role play simulating a PhD student working with a hypothetical AI assistant. I used Claude, a leading AI system built by the firm Anthropic in San Francisco, California.
I fed the chatbot a detailed prompt (see Supplementary Information) describing a fictional AI research assistant called HALe — inspired by the AI character HAL 9000 from the science-fiction film 2001: A Space Odyssey. I gave HALe capabilities that are already under development and are likely to improve in coming years. These include accessing external databases, integrating environmental and biological data, and performing advanced analyses autonomously. I then played the part of the student, asking questions and responding to the chatbot’s replies. The dialogue was generated in a single, unedited session — offering a fictional, yet plausible, glimpse of how future doctoral research could unfold.
The simulated goal was to complete a PhD project investigating how extreme ocean temperatures affect marine species — an ambitious task involving data synthesis, statistical modelling and writing a paper for publication. In this fictional scenario, HALe didn’t merely assist; it took initiative. It searched and extracted data from scientific literature, identified knowledge gaps, harmonized environmental and biological data sets, ran complex statistical analyses, interpreted the results, drafted a manuscript, suggested peer reviewers and even created an open-access data repository. The entire process, which would realistically take a student several months, played out in a short sequence of guided exchanges that might occupy just a few hours.
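For readers curious how such a role play could be set up programmatically, the sketch below uses Anthropic's Python SDK to seed a conversation with a system prompt describing the fictional assistant. It is a minimal illustration only: the session described above was run interactively, and the model name, prompt wording and helper function shown here are assumptions rather than the configuration used in the exercise.

```python
# Minimal sketch: simulating a 'HALe'-style research assistant with the
# Anthropic Python SDK. Model name and prompt text are illustrative
# assumptions; the dialogue described in this article was run interactively.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical system prompt standing in for the detailed one provided in the
# Supplementary Information.
HALE_PROMPT = (
    "You are HALe, a fictional AI research assistant with access to external "
    "databases, the ability to integrate environmental and biological data, "
    "and the capacity to run advanced analyses autonomously. You are helping "
    "a PhD student study how extreme ocean temperatures affect marine species."
)

conversation = []  # running message history for the role play


def ask_hale(question: str) -> str:
    """Send the student's next question and return HALe's reply."""
    conversation.append({"role": "user", "content": question})
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # substitute any available Claude model
        max_tokens=1024,
        system=HALE_PROMPT,
        messages=conversation,
    )
    reply = response.content[0].text
    conversation.append({"role": "assistant", "content": reply})
    return reply


if __name__ == "__main__":
    print(ask_hale("HALe, which data sets should we start with, and why?"))
```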
Although today’s AI models cannot yet perform these tasks with anything approaching full autonomy, the simulation was grounded in what current systems can already do with human guidance. For example, ChatGPT, Claude and other state-of-the-art chatbots can draft credible literature reviews, propose hypotheses, suggest analytical approaches and generate code that — when reviewed and validated by a human — can process real data sets and produce meaningful outputs. They can even help to interpret statistical results and visualize findings. What struck me while conducting this exercise was how much of the conventional PhD process could now be driven and accelerated by AI. At times, it felt like working with a hyper-competent and astonishingly rapid research assistant. It was both exciting and unsettling.
Of course, this simulation reflects a particular kind of project — analytical, data-rich and computational in nature. Experimental or field-based PhD programmes, especially those that require sample collection, laboratory work or interaction with other people or with the natural world, will remain less susceptible to full automation. But even in these areas of science, AI is likely to play a growing part in experimental design, autonomous data collection, literature synthesis and post-experiment analysis.
New skills
This experience brought home how training in academic skills will need to be fundamentally reconsidered in an era of AI.