Tech companies don’t care that students use AI agents to cheat


AI companies realize that children are the future of their business model. The industry does not hide its attempts to attract young people to its products through timely promotions, discounts, and referral programs. “We’re here to help you through finals,” OpenAI said during one of its giveaways of ChatGPT Plus for college students. Students also get a free year of access to Google’s and Perplexity’s otherwise pricey products. Perplexity even pays referrers $20 for every American student who downloads its Comet AI browser.

The popularity of AI tools among teens is astronomical. Once a product makes its way into the education system, it’s the teachers and students who are stuck facing the repercussions: teachers are struggling to keep up with the new ways their students are gaming the system, while those students risk not learning how to learn at all, educators warn.

This is all becoming more automated with the latest AI technology, AI agents, which can complete online tasks for you. (Even browsers like Edge have been seen testing some of the many agents on the market.) These tools make things worse by making it easier to cheat. Meanwhile, tech companies drag their heels on taking responsibility for how their tools are used, often blaming only the students they have equipped with a seemingly unstoppable cheating machine.

Perplexity actually seems to lean into its reputation as a cheating tool. It ran a Facebook ad in early October showing a “student” discussing how his “peers” were using Comet’s AI agent to do their multiple-choice homework. In another ad, posted the same day on the company’s Instagram page, a representative tells students that the browser can take tests for them. “But I’m not the one telling you that,” she says. When a video of a Perplexity agent completing someone’s homework online, the exact use case in the company’s ads, appeared on X, Perplexity CEO Aravind Srinivas reposted the video, joking, “Don’t ever do that.”

When The Verge asked for a response to concerns about Perplexity’s AI agents being used to cheat, spokesperson Bijoli Shah said, “Every educational tool since the abacus has been used to cheat. What generations of sages have known since then is that cheaters in school ultimately only cheat themselves.”

This fall, shortly after the AI industry’s summer of agents, teachers began posting videos of these AI agents seamlessly completing assignments in their online classrooms: OpenAI’s ChatGPT agent generating and submitting an essay on Canvas, a popular learning management platform; Perplexity’s AI assistant successfully completing a quiz and generating a short essay.

In another video, a ChatGPT agent pretends to be a student on an assignment meant to help classmates get to know each other. “It actually introduced itself as me… so that kind of blew my mind,” the video’s creator, Yoon Moh, a university instructional designer, told The Verge.

Canvas is the flagship product of its parent company, Instructure, which claims tens of millions of users, including those at “every Ivy League school” and in “40% of K-12 school districts in the United States.” Moh wanted the company to prevent AI agents from impersonating students. He asked Instructure on its Community Ideas forum and emailed the company’s sales representative, citing concerns about “potential misuse by students.” He included a video of an agent doing Moh’s fake homework for him.

It took nearly a month for Moh to hear back from Instructure’s executive team. On the topic of banning AI agents from the platform, they seemed to suggest that this is not a problem with a technical solution but a philosophical one, and that in any case, it should not stand in the way of progress:

“We believe that instead of just banning AI altogether, we want to create new pedagogically sound ways of using technology that actually prevents cheating and creates greater transparency in how students use it.

“So, while we will always support the work of preventing cheating and protecting academic integrity, such as the work of our partners on browser blocking, monitoring, and cheating detection, we will not shy away from building powerful, transformative tools that can open up new ways of teaching and learning. The future of education is too important to be disrupted by fear of misuse.”

Instructure was more direct with The Verge: although the company has some guardrails that check certain third-party access, it says it cannot fully prevent unauthorized use by third-party AI agents. Instructure spokesperson Brian Watkins said the company “will never be able to completely block AI agents” and cannot control “tools that run locally on a student’s device,” suggesting that the problem of student cheating is, at least in part, a technological one after all.

Moh’s team also struggled on the technical front. IT professionals tried to find ways to detect and block agent behaviors, such as submitting multiple assignments and tests too quickly, but AI agents can change their behavioral patterns, making detection “extremely elusive,” Moh told The Verge.

In September, two months after Instructure signed a deal with OpenAI and one month after Moh’s request, Instructure did push back against a different AI tool that teachers said helped students cheat, as The Washington Post reported. Google’s homework help button in Chrome made it easier to run an image search on any part of the browser window, like a test question on Canvas, through Google Lens, as one math teacher demonstrated. Educators raised the alarm in the Instructure community forum. Google listened, according to a forum response from the Instructure community team; Watkins told The Verge it was an example of a “long-term partnership” between the two companies that includes “regular discussions” about education technology.

When asked, Google confirmed that the homework help button was just a test of a shortcut to Lens, a preexisting feature. “Students told us they value tools that help them learn and understand things visually, so we ran tests that offer an easier way to access Lens while browsing,” Google spokesperson Craig Ewer told The Verge. The company has paused testing of the shortcut to incorporate early user feedback.

Google is leaving open the possibility of future Lens shortcuts in Chrome, and it is hard to imagine they won’t be marketed to students, given a recent company blog post, written by an intern, declaring: “Google Lens in Chrome is a lifesaver for school.”

Some teachers have found that agents may sometimes, but inconsistently, refuse to complete academic assignments. That barrier is easy to overcome, though, as Anna Mills, a college English professor, showed by instructing OpenAI’s Atlas browser to submit assignments without asking for permission. “It’s the Wild West,” Mills told The Verge of the use of AI in higher education.

That’s why educators like Moh and Mills want AI companies to take responsibility for their products instead of blaming students for using them. The Modern Language Association’s artificial intelligence task force, which includes Mills, issued a statement in October calling on companies to give teachers the ability to control how AI agents and other tools are used in their classrooms.

OpenAI, it seems, wants to distance itself from cheating while preserving the future of AI-powered education. In July, the company added a study mode to ChatGPT that doesn’t simply provide answers, and OpenAI’s vice president of education, Leah Belsky, told Business Insider that AI should not be used as an “answer machine.” Belsky told The Verge:

“The role of education has always been to prepare young people to succeed in the world they will inherit. This world now includes powerful artificial intelligence that will shape how work gets done, what skills matter, and what opportunities are available. Our shared responsibility as an education ecosystem is to help students use these tools well – to enhance learning, not sabotage it – and to reimagine how teaching, learning and assessment work in a world using AI.”

At the same time, Instructure is moving away from trying to police tools, Watkins emphasized. Instead, the company says it is on a mission to “redefine the learning experience itself.” Presumably this vision does not include rampant cheating, but the proposed solution is similar to OpenAI’s: a “collaborative effort” between the companies that create AI tools and the institutions that use them, as well as educators and students, to “define responsible use of AI.” It’s a work in progress.

Ultimately, applying whatever guidelines for the ethical use of AI eventually emerge from committees, think tanks, and corporate boards will fall to teachers in their classrooms. Products were released and deals were signed before those guidelines were established. Apparently, there is no going back.
