A new class of prompt-injection attacks—dubbed “Comment and Control”—turns GitHub pull requests, issues, and comments into attack surfaces that can hijack AI coding agents and siphon secrets directly from CI/CD environments. Unlike classic prompt injection, which waits for a user to feed a document to an agent, this pattern is proactive: opening a PR or posting an issue can automatically
