
Using generative artificial-intelligence models to kick ideas around is more effective than outsourcing thinking wholesale, says Yanjun Shen.
Last May, I received a PhD research proposal that, on the surface, seemed flawless: the prose flowed effortlessly, the logic was airtight and the citations were meticulously organized.
I am an interdisciplinary researcher in geology and ecology, investigating the relationships between rocks, soil, water and vegetation, and my laboratory is usually a place of physical observation and complex modelling. The proposal suggested a project looking at how often roots form in various kinds of rock.
However, my good impression of the proposal was shattered when I asked the prospective student a specific question about how the experiment would work, to assess their understanding of the underlying science. Their eyes darted away and silence filled the room. I came to the sobering realization that this polished document was not the result of deep critical thinking and a solid understanding of the subject matter, but rather the articulate, yet ultimately mediocre, output of an artificial-intelligence model.
After some of my own prompting, the student confessed that they had used AI tools to put together the research proposal, and that they hadn’t thought deeply about the topic or carried out a literature review. Instead, they had simply fed the observations they had made during earlier fieldwork, together with research objectives and working hypotheses, into an AI model to generate the proposal.
The student argued that AI tools could greatly improve their efficiency and help them to discover concepts with which they were unfamiliar. They saw absolutely no issue with what they had done. To them, the AI model was a powerful, readily available tool, and they thought that the AI-generated version of the proposal that they submitted to me was “flawless”.
My role as the associate dean for graduate education at Chang’an University in Xi’an, China, gives me a bird’s-eye view of the situation that many universities and educators face now that AI use is commonplace. My sense, based on my own experiences and those of colleagues, is that many academics now spend more time attempting to identify when and how their students have used AI than they do teaching. The illusion of competence among students has become an epidemic, a universal challenge that is reshaping the global higher-education and research landscape.
My field of ecological geology demands a meticulous understanding of physical mechanisms. Relying on ‘fast-food’ knowledge — the kind of superficial information that generative-AI chatbots generally provide — can be fatal to a career. My fear is that students are losing the patience required to track the evolution of an academic idea and to verify the physical consistency of a claim.
For example, the proposal I received drew the generic conclusion that a higher density of fractures in the rock would boost water uptake by roots, but it overlooked the complexity associated with distinct soil types and landscapes. These are crucial lithological differences that I stress in my teaching. Such complexities shouldn’t be ignored or glossed over: they are the reason that research in my field fills a lifelong career, rather than a single lecture. I teach students to ground all of their claims in field observations and to trace how academic theories have evolved over time. The information delivered by AI tools fails to incorporate the specific, context-dependent rules of ecological geology.
Guiding independent thinking
The exchange I had with this student changed how I think about my teaching and mentoring.
AI technology is now widely adopted across many sectors. In my view, those of us engaged in scientific research have moved past learning how to use the tools and are now falling into a period of over-reliance on them. As a mentor, I cannot simply ban AI systems. Instead, I can encourage my students to use them less as a ghostwriter and more as a critical-thinking partner.
I’ve started implementing a practice that I’ve named the reverse cognitive reconstruction protocol (RCRP) to guide students to think independently. The RCRP has two key steps: a collaborative debate between a student and an AI chatbot, and a process-based review conducted by their mentor.
In the first step, I ask my students to do a background search of the literature and to form some preliminary hypotheses about their research topic on their own, with no help from AI tools. Once they have developed a clear hypothesis, they have an in-depth discussion with a chatbot: they ask the AI model to use basic geological concepts to refute the theories that they have come up with, and then evaluate its answers against real empirical data. Finally, they test the AI tool’s reliability by putting forward counter-intuitive hypotheses and examining how effectively they can identify the tool’s errors. In other words, they see how far they can prove it wrong.