Hackers used AI to build a working cyberattack for the first time

Security researchers have been warning for years that AI would eventually make its way into the hands of hackers looking to build more sophisticated attacks. Google’s Threat Intelligence Group (GTIG) just confirmed that the warning has become reality. The group says it has discovered the first known AI-developed exploit used in a real-world attack campaign.

The exploit targeted a popular open-source web-based administration tool. Specifically, it was a Python script that let attackers bypass two-factor authentication once they had valid login credentials. A group of prominent cybercriminals built it and planned to use it in a mass exploitation campaign, but GTIG caught it first, alerted the vendor, and a patch went out before any damage could be done.

Google says it doesn’t believe its Gemini AI was involved. But the code itself gave the AI connection away. The script was full of detailed educational comments and used clean, textbook-style Python formatting. It even included a hallucinated severity score. Those are all telltale signs of large language model output. GTIG said it has “high confidence” that an AI model helped discover and weaponize the vulnerability.
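
To make those markers concrete, here is a minimal, hypothetical sketch of what that style of output tends to look like. Everything in it, the function, the comments, and the CVSS figure, is invented for illustration; it is not code from the actual exploit.

```python
# Hypothetical sketch of the stylistic markers GTIG describes: over-explained
# "educational" comments, textbook structure, and a hallucinated severity
# score. All names and values here are invented; this is not the real exploit.
import hmac


def check_session_token(token: str, expected: str) -> bool:
    """Validate a session token.

    Step 1: Ensure the token is a non-empty string.
    Step 2: Compare it to the expected value in constant time.

    Severity: CVSS 9.8 (Critical) -- a model will assert a precise score
    like this even when no such score was ever assigned.
    """
    # Educational note: always compare secrets in constant time to avoid
    # leaking information through timing side channels.
    if not token:
        return False
    return hmac.compare_digest(token, expected)
```

No single trait proves machine authorship, but the over-explained steps and a confidently invented score together read far more like model output than like hand-written attack code.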

Why This Is a Big Deal

The type of flaw the attackers found is exactly the kind AI models are good at spotting. It was a semantic logic error. A developer had hardcoded a trust assumption that contradicted the app’s actual authentication logic. Traditional security scanners tend to miss this kind of issue. AI, with its ability to read developer intent and identify contradictions in high-level logic, can catch what automated tools overlook.
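
Google hasn’t published the vulnerable code itself, so here is a hypothetical Python sketch of the bug class: a hardcoded trust assumption that quietly contradicts the authentication flow around it. The header name and helper types are invented for illustration and do not describe the patched product.

```python
# Hypothetical illustration of a semantic logic error; nothing here is taken
# from the actual vulnerability.
import hmac
from dataclasses import dataclass


@dataclass
class Request:
    headers: dict


def verify_two_factor(expected_otp: str, submitted_otp: str, request: Request) -> bool:
    """Second-factor check, run only after the password has already passed."""
    # The semantic flaw: the developer assumed "internal" requests can never
    # come from outside the network, so the second factor is skipped for them.
    # The code is syntactically valid, so a pattern-based scanner sees nothing
    # wrong; the contradiction only appears when this shortcut is read against
    # the app's stated rule that 2FA is always enforced.
    if request.headers.get("X-Internal-Request") == "true":
        return True  # hardcoded trust assumption that contradicts the policy

    return hmac.compare_digest(expected_otp, submitted_otp)
```

An attacker who already holds valid credentials only needs to add that one header to skip the second factor, which is the shape of the bypass GTIG describes.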

The report also flagged broader state-sponsored activity. North Korean group APT45 used AI to run thousands of repetitive prompts to validate exploits at scale. A China-linked group attempted to jailbreak Gemini using a fake “senior security auditor” persona to probe router firmware for vulnerabilities. This AI-developed exploit case is one data point in a much larger picture.

Google says it’s using AI on the defensive side too, including its Big Sleep vulnerability discovery agent and a patching tool called CodeMender.
