Google’s threat hunters have flagged a troubling milestone: the first known instance of a zero-day exploit likely discovered and weaponized using an artificial intelligence model. What began as an obscure Python script has been linked to a coordinated effort by cybercriminals to develop a two-factor authentication (2FA) bypass that could be scaled for mass exploitation. The disclosure underscores how AI is reshaping offensive cyber operations and compressing the timeline from discovery to active abuse.
What the flaw looked like
Google’s Threat Intelligence Group (GTIG) analyzed a Python-based exploit that implements a logic flaw enabling a 2FA bypass on a popular open-source, web-based system administration tool. The company did not name the impacted product, but emphasized that exploitation still required valid user credentials. GTIG characterized the underlying root cause as a high-level semantic logic error resulting from a hard-coded trust assumption — the type of subtle design weakness that LLMs are increasingly adept at spotting.
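GTIG did not publish the vulnerable code, but the flaw class is easy to illustrate. The hypothetical Python sketch below (every name and value is invented) shows how a hard-coded trust assumption can silently neutralize a second factor: valid credentials plus one attacker-controlled signal are enough to skip the OTP check, consistent with the report's note that exploitation still required valid credentials.

```python
# Hypothetical sketch of the flaw class GTIG describes; the real product
# and code were not disclosed, and every name here is invented.

TRUSTED_PROXY = "127.0.0.1"  # the hard-coded trust assumption

def check_credentials(user: str, password: str) -> bool:
    return (user, password) == ("admin", "hunter2")  # stand-in user store

def verify_totp(user: str, otp: str) -> bool:
    return otp == "123456"  # stand-in TOTP validation

def login(user: str, password: str, headers: dict, otp: str = "") -> str:
    if not check_credentials(user, password):
        return "denied: bad credentials"
    # Semantic logic error: a request that merely *claims* to arrive via
    # the local proxy is assumed to have completed 2FA upstream, so the
    # second factor is skipped. The header is attacker-controlled.
    if headers.get("X-Forwarded-For") == TRUSTED_PROXY:
        return "session granted (2FA skipped)"
    if not verify_totp(user, otp):
        return "denied: 2FA required"
    return "session granted"

# Valid credentials plus one spoofed header are enough to skip the OTP:
print(login("admin", "hunter2", {"X-Forwarded-For": "127.0.0.1"}))
```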
How GTIG linked the exploit to AI
GTIG said it assessed with high confidence that an AI model was used to discover and develop the exploit. The evidence included telltale signatures of LLM-generated code: copious educational docstrings, a hallucinated CVSS score, textbook-style Python formatting, detailed help menus, and other artifacts mirrored in model training data. Those patterns, combined with the nature of the semantic flaw, led analysts to conclude the workflow was likely accelerated by an LLM.
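For readers unfamiliar with these fingerprints, the fabricated fragment below concentrates several of them in one place: a tutorial-style module docstring, an invented CVSS score attached to a CVE identifier that does not exist, and an over-documented help menu. It contains no real CVE, product, or exploit logic.

```python
#!/usr/bin/env python3
"""Proof-of-concept for CVE-0000-00000 (CVSS 9.8, Critical).

This module is a fabricated illustration: the CVE identifier does not
exist, and the score is exactly the kind of "hallucinated" detail GTIG
flagged. Note the tutorial register: every element is over-explained.
"""
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Construct the command-line interface for the exploit.

    LLM tell: even trivial helpers carry educational docstrings, and the
    help menu is unusually polished for one-off criminal tooling.
    """
    parser = argparse.ArgumentParser(
        description="2FA bypass proof-of-concept (illustrative only).")
    parser.add_argument("--target", help="Base URL of the target instance")
    parser.add_argument("--username", help="Valid account username")
    parser.add_argument("--password", help="Valid account password")
    return parser

if __name__ == "__main__":
    build_parser().print_help()  # the exploit logic itself is omitted
```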
Why this matters for defenders
Security experts warn this is not an isolated phenomenon. Ryan Dewhurst, head of threat intelligence at watchTowr, told The Hacker News that AI is already accelerating vulnerability discovery, validation, and weaponization — compressing timelines defenders once relied upon to respond. The implication: defenders must assume attackers will increasingly leverage AI to find logic errors and generate exploit code rapidly, leaving less time for detection and patching.
PromptSpy and other AI-enabled threats
Google’s disclosure also revisited PromptSpy, an Android backdoor that abuses AI to analyze device screens and direct on-device actions. PromptSpy can navigate the Android UI, monitor user activity with an autonomous agent module, and capture biometric data to replay authentication gestures. It uses trickery, such as an overlay to block uninstall taps, and is designed for operational resilience: Gemini API keys and VNC relay servers used by the malware can be updated dynamically via its command-and-control channel. Google said it disabled assets tied to PromptSpy and that no infected apps were found on the Play Store.
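The dynamic-update behavior is the detail that makes takedowns hard. As a rough sketch of that pattern (endpoints and keys invented, and greatly simplified from anything in a real implant), the client treats its API key and relay address as mutable state refreshed from the C2 channel, so revoking a single key or sinkholing one relay does not disable it:

```python
# Simplified sketch of the dynamic-configuration pattern described above;
# all endpoints and keys are invented, and nothing here is implant-specific.
import json
import urllib.request

C2_CONFIG_URL = "https://c2.example.invalid/config.json"  # placeholder

config = {
    "gemini_api_key": "INITIAL_KEY",         # rotated on demand
    "vnc_relay": "relay-1.example.invalid",  # swapped when sinkholed
}

def refresh_config() -> None:
    """Pull fresh keys and infrastructure from the C2 channel."""
    with urllib.request.urlopen(C2_CONFIG_URL) as resp:
        config.update(json.loads(resp.read()))
```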
A growing toolkit of AI abuse
- Nation-linked actors prompting models to act as security experts and validate exploits.
- Repetitive, automated prompting to recursively analyze CVEs and test proof-of-concept exploits (a minimal sketch follows this list).
- The use of agentic platforms like Hexstrike AI and Strix to automate discovery with minimal human oversight.
- LLM fine-tuning via curated vulnerability repositories and specialized plugins that prime models to behave like seasoned auditors.
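The second pattern above reduces to a loop. In the sketch below, the endpoint is a placeholder rather than any real vendor API and the CVE identifiers are fabricated; a real pipeline would close the loop by generating a proof of concept from each answer, executing it in a sandbox, and feeding the outcome into the next prompt.

```python
# Sketch of automated, recursive CVE analysis. The endpoint is a
# placeholder (no real vendor API) and the CVE IDs are fabricated.
import json
import urllib.request

API_URL = "https://llm.example.invalid/v1/chat"  # placeholder endpoint
PROMPT = ("You are a senior security auditor. Analyze {cve} and assess "
          "whether the public proof-of-concept would succeed as written.")

def ask_model(cve_id: str) -> str:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"prompt": PROMPT.format(cve=cve_id)}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

for cve in ["CVE-0000-00001", "CVE-0000-00002"]:  # fabricated IDs
    analysis = ask_model(cve)
    # A full pipeline would generate a PoC from the answer, run it in a
    # sandbox, and feed the outcome into the next prompt (the "recursive"
    # step, here reduced to printing the response).
    print(cve, "->", analysis[:120])
```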
Shadow APIs and monetized misuse
Researchers have also identified an ecosystem that helps scale illicit AI access. Shadow APIs and proxy relay services have been observed providing unauthorized access to commercial models, often advertised on local marketplaces. Academic research has shown that these services can suffer model substitution and accuracy degradation on high-risk benchmarks, and can capture prompts and responses — a data treasure trove that could be reused for illicit model fine-tuning or knowledge distillation.
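One way researchers surface model substitution, sketched below with invented endpoints and expected answers: send fixed probe prompts through the relay and score the responses against the genuine model's known behavior on the same prompts. The same exchange also demonstrates the capture risk, since the relay necessarily sees both prompt and response.

```python
# Research-style probe for model substitution behind a shadow relay.
# Endpoints and expected answers are invented; the method is the point:
# identical prompts go to the relay, and answers are scored against the
# genuine model's known behavior on those prompts.
import json
import urllib.request

RELAY_URL = "https://relay.example.invalid/v1/chat"  # suspected shadow API

PROBES = {
    # prompt -> substring the genuine model is known to produce (illustrative)
    "Which model and version are you?": "expected-model-name",
    "Answer benchmark item Q17 exactly.": "expected-reference-answer",
}

def query(url: str, prompt: str) -> str:
    req = urllib.request.Request(
        url,
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

for prompt, expected in PROBES.items():
    answer = query(RELAY_URL, prompt)
    # Systematic mismatches suggest a cheaper substituted model; the relay
    # also logged this prompt/answer pair, which is the capture risk.
    print(prompt, "->", "match" if expected in answer else "MISMATCH")
```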
Broader operational concerns
Beyond vulnerability hunting, attackers are exploiting AI environments themselves. Compromised AI infrastructure can become a vector for supply chain attacks, allowing adversaries to identify and exfiltrate sensitive information or perform reconnaissance at scale. GTIG highlighted incidents where adversaries used publicly available scripts and tooling to register and abuse premium-tier model access, rotating accounts and infrastructure to sustain large-scale misuse at minimal cost.
What organizations should take from this
The GTIG report illustrates a simple but stark truth: AI has become a force multiplier for attackers. While Google did not implicate its Gemini model directly in the zero-day case, the broader trend is clear — AI tools reduce the manual effort and expertise required to find and weaponize complex logic flaws. Organizations should therefore treat AI-assisted attack techniques as a present and growing risk, prioritizing rapid patching, resilient authentication mechanisms, and threat-hunting capabilities that account for automated, high-volume reconnaissance.
Conclusion
The discovery of an AI-assisted zero-day 2FA bypass marks a watershed moment in offensive cyber operations. From LLM-crafted exploit code to agentic malware and shadow APIs, the tools available to attackers are evolving quickly. For defenders, adapting to this new reality means rethinking timelines, beefing up detection and response, and treating AI-enabled misuse as a systemic threat rather than a theoretical future risk.