Project Glasswing: Anthropic’s Claude Mythos Preview Arms Defenders to Secure Critical Infrastructure

When Anthropic announced Project Glasswing, it felt like a turning point in how we think about cybersecurity. Rather than another incremental tool, Glasswing pairs one of the most capable frontier language models, Claude Mythos Preview, with an unusual, urgent mission: give the organizations that run the internet and financial systems a head start against AI-enabled attackers. The initiative reads like a playbook…

Anthropic Withholds Mythos Preview: Too Potent a Cyber Threat to Release

Anthropic’s decision to withhold the Claude Mythos Preview has punctured the usual celebratory arc of model announcements. Rather than rushing to commercialize another frontier AI, the company says Mythos demonstrated capabilities that could be exploited to find and chain high-severity vulnerabilities in widely used systems, risks serious enough that Anthropic is choosing limited, defensive deployment over general release. A startling discovery in…

Anthropic’s Claude Leak: 8,000 Takedown Requests After an Accidental Source-Code Exposure

Anthropic has scrambled to contain the fallout after an accidental exposure of the complete source code for its Claude family of AI tools. The company issued roughly 8,000 copyright takedown requests to remove copies and adaptations circulating on code-hosting sites and mirrors, responding to a wave of reposts and forks that appeared within hours of the initial disclosure. Although Anthropic…

Inside the Claude Code Leak: What Anthropic’s Accidental Release Reveals

Anthropic, the AI company behind the Claude family of agents, suffered an unexpected exposure that rippled across the developer community and the wider AI market. Earlier today, a sizable JavaScript source map file, bundled with a public npm release, made internal implementation details of Claude Code visible to anyone who downloaded it. What began as a packaging mistake quickly became a public…

Anthropic’s Mythos Rocks Cybersecurity Stocks: What Investors and Defenders Need to Know

News moved fast one Friday: a new, exceptionally powerful AI model from Anthropic, codenamed Mythos, leaked into the market conversation, and the ripple effects were immediate. Stocks tied to cybersecurity fell sharply, reflecting a fresh wave of anxiety: if an AI can find complex code flaws faster than teams of humans, what becomes of the companies that sell digital defenses? This post…

Anthropic’s Claude Mythos Leak: When Pre-Release Secrets Meet Cybersecurity Risk

Anthropic recently found itself at the center of an avoidable but consequential security incident: leaked internal drafts revealing the existence of an unreleased, high-capability model called “Claude Mythos.” The exposure, rooted in an unsecured, publicly searchable data cache, pulled back the curtain on product plans, internal risk assessments, and even references to an exclusive executive event. For organizations building powerful AI, the…