Cheating beyond ChatGPT: Agentic browsers present risks to universities
AI chatbots have proliferated in school settings since the launch of ChatGPT. But OpenAI, the company behind ChatGPT, just released a new AI tool that may make it harder to combat AI-generated assignments with AI detection.
OpenAI’s new browser, Atlas, follows the release of other browsers that incorporate AI technology. Built into these browsers are assistants that operate the browser without keyboard inputs or mouse clicks. That means they can navigate a learning management system (LMS) like Canvas and testing software independently. OpenAI’s announcement of its new product included an endorsement from a college student who found the tool helpful for their learning.
However, as Pangram examines in this story, students and researchers are sounding the alarms that these tools put academic integrity and personal data at risk in classrooms already upended by a rise in AI use.
In online posts, students have shown themselves using these so-called “agentic browsers” to take over academic platforms like Canvas and Coursera and complete quizzes assigned to them. The CEO of Perplexity, the creator of the agentic browser Comet, even responded to a student showing how they used the tool to complete a quiz, saying, “Absolutely don’t do this.”
These browsers interact with websites at the user’s request to complete tasks such as shopping, web navigation, and form submission. They can even complete schoolwork without a student’s hands touching the keyboard. See an example below.
Carter Schwalb, a senior business analytics major at Bradley University, heads the school’s AI club. He said he’s experimented with agentic browsers for planning trips and apartment searches, as well as for summarizing information from various websites. However, he’s spoken with many professors at his university who report that students are submitting AI-generated responses to their assignments.
“I’ve seen a lot of instances, even from talking to professors, of the students just blatantly submitting ChatGPT-generated responses,” Schwalb said.
For students, agentic browsers offer a new level of convenience, with built-in chatbots and the ability to complete and submit assignments automatically. For teachers hoping to catch this kind of misuse, reviewing a document’s version history in Google Docs can help reveal whether an AI assistant, rather than the student, produced and submitted an entire written work.
Students like Schwalb, though, are refraining from using these tools for hands-free assignment completion. Schwalb said he doesn’t want to lose his critical thinking abilities by offloading all of his work to AI tools.
“I need to keep my ability to critically think and I think that needs to be emphasized, probably both from teachers to their students as well as parents to their children,” Schwalb said.
Not everyone shares Schwalb’s outlook. And agentic browser use raises concerns beyond academic integrity and engagement in education: it also threatens data privacy. In a study led by Yash Vekaria, a Ph.D. student at the University of California, Davis, researchers concluded that generative AI assistant browser extensions store and share their users’ personal data.
“Sometimes this may involve collecting information and storing information which is sensitive to a user,” Vekaria said.
The study was carried out in late 2024, before agentic browsers were part of mainstream AI usage; Google searches for “AI in browser” and “Comet browser” (the tool created by Perplexity) only began ramping up in May 2025. The study’s conclusions nonetheless apply to agentic browsers, according to Vekaria, who added that these browsers may pose greater privacy risks than the study covered.
“The assistant is always present in the side panel, so it’s able to access and view everything that the user is doing,” Vekaria said. “Agentic browsers collect all this information and have, if not similar, at least more risks in my opinion.”
Many students who use agentic browsers for academic or personal tasks don’t understand these risks, Vekaria noted. When used on academic platforms like Canvas, AI assistant tools collected and shared students’ academic records with other sites. Students’ educational records are supposed to be protected under the Family Educational Rights and Privacy Act.
“We saw that it was able to exfiltrate student academic records, which is a risk under FERPA that protects students’ academic data in the U.S.,” Vekaria said. “In general, there should be more regulatory enforcement that should happen.”
However, universities across the nation haven’t mounted a cohesive response to their own students’ use of these tools. While AI detectors can assess students’ submitted written work, multiple-choice tests and discussion forums typically don’t incorporate these checks. Students are using these tools regardless, and Schwalb argues that restriction is not the answer.
“I haven’t seen a good enough argument against AI to be fully adopted at a university, other than we don’t want kids using it, which is just not reasonable,” Schwalb said. “It’s like the internet coming out and telling somebody not to use the internet or like the Industrial Revolution and telling somebody not to make something on an assembly line.”
As new tools emerge, the realities for students and professors keep changing. Companies looking to support educational institutions are releasing countermeasures, such as advanced AI detectors, to address the cheating and data risks that agentic browsers introduce.
“The option is here, and students are going to take it,” Schwalb said. “The job is not whether to, but how we restrict this. It’s how do we incorporate.”
This story was produced by Pangram and reviewed and distributed by Stacker.