Google says it has halted a zero-day exploit that was developed using artificial intelligence


For the first time, Google says it has detected and stopped an exploit for a zero-day vulnerability that was developed using artificial intelligence. According to a report from the Google Threat Intelligence Group (GTIG), “prominent cybercrime threat actors” were planning to use the vulnerability in a “mass exploitation event” that would have allowed them to bypass two-factor authentication on an unnamed “open source, web-based system administration tool.”

Google researchers found hints of AI assistance in the Python script used for the exploit, such as a “hallucinated CVSS score” and a “structured textbook” format consistent with LLM training data. The exploit takes advantage of a “high-level semantic flaw where the developer has hard-coded an assumption of trust” into the platform’s two-factor authentication system. This comes after weeks of concern about the cybersecurity capabilities of AI models, such as Anthropic’s claims about AI-assisted attacks and a Linux security vulnerability that was recently discovered with the help of artificial intelligence.
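The report does not name the tool or publish the flawed code, but a “hard-coded assumption of trust” in a two-factor check typically looks something like the hypothetical Python sketch below, where the server skips the second factor whenever a client-controlled flag claims the session is already trusted. Every name here is invented for illustration; none of it comes from the GTIG report or the affected tool.

```python
# Hypothetical sketch of a hard-coded trust assumption in a 2FA check.
# All names are invented for illustration only.

def check_password(username: str, password: str) -> bool:
    """Stub: pretend the first-factor password check passes."""
    return True

def check_totp_code(username: str, code: str) -> bool:
    """Stub standing in for a real second-factor (TOTP) comparison."""
    return code == "123456"  # placeholder, not a real verification

def verify_login(username: str, password: str, params: dict) -> bool:
    if not check_password(username, password):
        return False
    # FLAW: the developer hard-codes trust in a client-supplied flag,
    # assuming only the tool's own frontend ever sets it. An attacker
    # who sends trusted_session=1 never reaches the second-factor check.
    if params.get("trusted_session") == "1":
        return True
    return check_totp_code(username, params.get("otp_code", ""))

# With the flag set, the OTP check is bypassed entirely:
print(verify_login("admin", "hunter2", {"trusted_session": "1"}))  # True
```

The semantic point is that nothing is wrong with the cryptography or the OTP code itself; the bypass lives in an application-level trust decision, which is why Google describes it as a high-level semantic flaw rather than a memory-safety bug.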

This is the first time Google has found evidence of AI involvement in an attack like this, although Google researchers noted that they “do not believe Gemini was used.” Google says it was able to “disable” this particular exploit, but it also says hackers are increasingly using artificial intelligence to find and take advantage of vulnerabilities. The report also points to AI as a target for attackers, saying: “GTIG notes that adversaries are increasingly targeting the integrated components that give AI systems their utility, such as autonomous skills and third-party data connectors.”

Google’s report also details how hackers are using “character-based jailbreaking” to make AI find vulnerabilities for them, such as an example directing the AI to pretend to be a security expert. Hackers are also feeding AI models entire repositories of vulnerability data, and are using OpenClaw in ways that indicate “interest in optimizing AI-driven payloads under controlled settings to increase exploitation reliability before deployment.”
