OpenAI Introduces GPT-5.4-Cyber for Vetted Security Professionals
OpenAI has introduced GPT-5.4-Cyber, a purpose-built variant of GPT-5.4 tuned to assist vetted security professionals with tasks previously reserved for specialized analysts. Rather than a general consumer release, the model is designed to lower refusal rates for legitimate cybersecurity workflows: binary reverse engineering, vulnerability scanning, malware analysis, and exploit research. The announcement frames the model as a defensive accelerant.
OpenAI Revokes macOS App Certificate After Axios Supply-Chain Compromise
OpenAI has publicly disclosed a supply-chain incident that affected the signing workflow for its macOS applications and, out of caution, is revoking and rotating the certificate used to notarize those apps. The company's investigation found that a GitHub Actions workflow used in the macOS signing process pulled a compromised release of the widely used npm library Axios (version 1.14.1).
Anthropic Withholds Mythos Preview: Too Potent a Cyber Threat to Release
Anthropic's decision to withhold the Claude Mythos Preview has punctured the usual celebratory arc of model announcements. Rather than rushing to commercialize another frontier AI, the company says Mythos demonstrated capabilities that could be exploited to find and chain high-severity vulnerabilities in widely used systems, so serious that Anthropic is choosing limited, defensive deployment over general release.
Microsoft Links Medusa Ransomware Affiliate to Zero-Day Exploitation Campaign
Microsoft's recent analysis tying a Medusa ransomware affiliate to a campaign that leveraged zero-day vulnerabilities has put a renewed spotlight on the evolving tactics of extortion groups and the threat posed by previously unknown software flaws. For security teams and executives, the announcement is a reminder that threat actors are combining rapid vulnerability exploitation with tried-and-true ransomware playbooks.
Microsoft strips EXIF metadata from Teams images to protect employee privacy
In a March 2026 feature rollout, Microsoft updated Teams to automatically remove EXIF metadata from images shared in chats and channels. The change aims to prevent accidental leaks of GPS coordinates, device details, and timestamps: data that can be exploited for targeted attacks or unwanted location disclosure. The move is part of a broader push to bake privacy and security into its collaboration products.
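Microsoft has not published implementation details, but the general technique is straightforward: in a JPEG file, EXIF metadata lives in APP1 segments (marker `0xFFE1`), so stripping it amounts to walking the file's marker segments and copying everything except APP1. The function below is a minimal illustrative sketch of that idea, not Microsoft's code; the name and byte layout are assumptions for the example.

```python
# Illustrative sketch only: remove EXIF by dropping APP1 segments from
# a JPEG byte stream. Real strippers also handle APP13/XMP and other
# metadata blocks, typically via an imaging library.

def strip_exif_segments(jpeg: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with APP1 (EXIF) segments removed."""
    out = bytearray(jpeg[:2])              # keep the SOI marker (0xFFD8)
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:                 # start-of-scan: copy the rest verbatim
            out += jpeg[i:]
            return bytes(out)
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")  # includes its own 2 bytes
        if marker != 0xE1:                 # keep every segment except APP1 (EXIF)
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Because the pixel data after the start-of-scan marker is copied untouched, the image itself is unchanged; only the metadata segments disappear.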
AI as Tradecraft: How Threat Actors Operationalize Artificial Intelligence
Organizations are facing a subtle but powerful shift: adversaries are not inventing wholly new attacks so much as adopting artificial intelligence to make existing tradecraft faster, cheaper, and more resilient. Microsoft's threat intelligence team and other industry observers report that generative AI is being embedded across the attack lifecycle to accelerate reconnaissance and scale social engineering.