OpenAI Codex Command-Injection Flaw: How GitHub Tokens Were Exposed and What Teams Must Do Now

The rise of AI coding assistants has streamlined developer workflows, but a recent discovery shows that those conveniences can carry serious risk. Researchers at BeyondTrust found a critical command-injection vulnerability in OpenAI Codex that could be exploited to steal GitHub access tokens. The flaw demonstrates how an overlooked parsing detail, in this case a branch name passed unsanitized into a container setup script, can escalate into arbitrary command execution and credential theft.
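To make the class of bug concrete, here is a minimal, hypothetical sketch of the pattern at issue. It is not Codex's actual code; it only illustrates how interpolating an attacker-controlled branch name into a shell string lets the shell execute a smuggled second command, while passing the same value as a discrete argv element does not.

```python
import subprocess

# A "branch name" crafted to smuggle a second shell command after ';'.
branch = "main; echo INJECTED"

# VULNERABLE pattern: string interpolation plus shell=True means the shell
# parses the branch name, splits on ';', and runs the injected command.
unsafe = subprocess.run(
    f"echo checking out {branch}",
    shell=True, capture_output=True, text=True,
).stdout
print(unsafe)  # the injected command ran: output contains "INJECTED" on its own line

# SAFER pattern: an argv list is passed to the program directly with no
# shell parsing, so the entire string stays a single literal argument.
safe = subprocess.run(
    ["echo", "checking out", branch],
    capture_output=True, text=True,
).stdout
print(safe)  # the ';' is inert: the whole branch name is echoed verbatim
```

The design point is the same one the vulnerability turns on: any value that originates outside the trust boundary (a branch name, a repo URL, a commit message) must never be concatenated into a command string that a shell will parse.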