Google recently suspended a number of customer accounts after observing heavy autonomous usage of its Antigravity agent development backend and Gemini services through third‑party agent wrappers such as OpenClaw and OpenCode. The suspensions, reported to affect customers ranging from high‑spend AI Ultra subscribers to smaller accounts, have raised immediate concerns among developers who say they were using paid quota and received little or no warning before access was cut. This post summarizes the reported facts, Google's stated rationale, customer responses, and the broader technical and commercial context.
What was reported to have happened
- Customers using Google’s Antigravity agent platform and Gemini services in combination with third‑party agent harnesses found accounts suspended over recent weeks.
- Some affected users were on Google’s higher‑tier AI Ultra subscription (reported at $250 per month), while others were lower‑spend customers.
- The suspensions were frequently applied with minimal prior notice, according to affected developers who shared their experiences in public discussion threads.
- Google’s stated action targeted use of the Antigravity backend when it appeared to be functioning as a proxy or when usage patterns overwhelmed the system’s compute capacity.
Google’s explanation
- Google representatives acknowledged a “massive increase in malicious usage of the Antigravity backend” that, they say, severely degraded service quality for legitimate users.
- Varun Mohan, identified in reports as a co‑founder of Windsurf and now a DeepMind engineer, explained that Google needed a rapid way to cut off access for users not "using the product as intended," and that the company planned to restore access for users who ran afoul of the rule unintentionally.
- Google reportedly clarified that the blocks applied to Antigravity specifically and did not constitute a block of other Google services.
Customer reactions and reported grievances
- Some developers disputed Google’s characterization of the usage as “malicious.” They say they consumed paid quota within limits and assumed third‑party wrappers were allowed because the terms did not explicitly ban them.
- One developer quoted in reporting, Mohan Prakash, argued that banning paying customers without warning harms trust and that service providers should return explicit errors when integrations are not permitted.
- The Register, which reported the story, asked Google for examples illustrating the alleged malicious usage but had not received any by the time of publication.
Technical and economic causes reported
- The incidents highlight a mismatch between how cloud AI services are provisioned and priced and how third‑party autonomous agent frameworks consume tokens and compute. When a subscription or quota is used as a backend by an orchestration layer designed for high‑throughput autonomous agents, demand can spike far beyond anticipated provisioning.
- Reported comparisons were made to similar moves by other AI providers (for example, Anthropic) who have restricted certain subscription uses to prevent token arbitrage or other unintended cost exposures.
- Observers reported that some AI vendors have effectively been selling compute tokens below actual cost to gain adoption, which can leave providers vulnerable to unexpected compute expenses when usage patterns scale rapidly through third‑party tooling.
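The scale of that mismatch is easy to underestimate. The back‑of‑envelope sketch below is purely illustrative; the request rates and token counts are assumptions chosen for the example, not figures from the reporting or from any real provider. It only makes the structural point that an agent loop resending large contexts around the clock consumes orders of magnitude more quota than a human‑driven session on the same subscription:

```python
# Back-of-envelope comparison of token consumption: an interactive human
# session vs. an autonomous agent loop. All figures below are illustrative
# assumptions, not measured values for any real provider.

def daily_tokens(requests_per_hour: float, tokens_per_request: int,
                 active_hours: float) -> int:
    """Total tokens consumed in one day at a steady request rate."""
    return int(requests_per_hour * tokens_per_request * active_hours)

# A human developer: a few dozen chat turns spread over a workday.
human = daily_tokens(requests_per_hour=20, tokens_per_request=2_000,
                     active_hours=8)

# An autonomous agent harness: continuous tool-use loops, large context
# windows resent on every step, running around the clock.
agent = daily_tokens(requests_per_hour=300, tokens_per_request=15_000,
                     active_hours=24)

print(f"human-driven: {human:,} tokens/day")
print(f"agent-driven: {agent:,} tokens/day")
print(f"ratio: {agent / human:.0f}x")
```

Under these assumed numbers the agent workload is hundreds of times larger, which is exactly the kind of jump a consumer subscription tier is unlikely to have been provisioned for.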
Implications for developers and teams (based on reported facts)
- Developers running agent frameworks against subscription‑based AI backends should verify whether their usage patterns are supported by the provider’s terms and documented quotas.
- Where possible, teams should look for explicit guidance from a provider about supported integrations or error behaviors if an integration is not permitted, and request clarity from providers when terms are ambiguous.
- Organizations relying on managed AI services for production workloads should monitor for abrupt policy or enforcement changes and consider contingency plans (alternative providers, higher‑guarantee APIs, or enterprise contracts) if uninterrupted access is critical.
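One concrete shape such a contingency plan can take is a thin failover layer in front of the model client. The sketch below is a simplified illustration under assumed error semantics (HTTP‑style 403 for a policy block and 429 for quota exhaustion); the `Provider` class and `complete` method are hypothetical stand‑ins, not any real SDK's API:

```python
# Minimal sketch of provider failover for teams that cannot tolerate
# abrupt enforcement changes. Provider classes and error shapes here
# are hypothetical placeholders, not real SDK APIs.

class ProviderError(Exception):
    def __init__(self, status: int, message: str):
        super().__init__(message)
        self.status = status

class Provider:
    """Stand-in for an AI backend client (hypothetical)."""
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

    def complete(self, prompt: str) -> str:
        if not self.healthy:
            # A 403-style status is one way a backend might signal that
            # an integration is no longer permitted.
            raise ProviderError(403, f"{self.name}: integration not permitted")
        return f"{self.name}: response to {prompt!r}"

def complete_with_failover(providers, prompt):
    """Try providers in priority order; fail over on policy blocks (403)
    or quota exhaustion (429), and re-raise anything else immediately."""
    last_error = None
    for p in providers:
        try:
            return p.complete(prompt)
        except ProviderError as e:
            if e.status in (403, 429):
                last_error = e
                continue  # enforcement/quota issue: try the next provider
            raise  # unrelated failure: surface it
    raise RuntimeError(f"all providers rejected the request: {last_error}")

primary = Provider("primary-backend", healthy=False)   # abruptly blocked
fallback = Provider("fallback-backend")
print(complete_with_failover([primary, fallback], "summarize the incident"))
```

The design choice worth noting is that only enforcement‑ and quota‑type errors trigger failover; genuine bugs still surface immediately, so the fallback path does not mask defects in the integration itself.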
What Google and similar providers are reported to be doing
- Google reportedly acted quickly to cut off access to overwhelmed backend capacity in order to protect service quality for other users, while indicating an intention to reinstate access for users inadvertently affected.
- The broader pattern among providers has been to tighten enforcement where third‑party integrations create cost or capacity exposures that the provider did not intend to cover under consumer subscription tiers.
Conclusion
The reported account suspensions around Google’s Antigravity and Gemini services underscore a concrete operational challenge: third‑party agent frameworks can dramatically change token and compute consumption patterns compared with human‑driven usage. Providers are reacting by enforcing limits, sometimes abruptly, and developers are pushing back when those limits are applied without clear prior notice. The immediate facts are straightforward—suspensions occurred, Google attributed them to abusive or unintended usage that degraded service, and some affected users assert they stayed within paid quotas and received little warning. Going forward, clearer provider guidance on permitted integrations and more explicit quota semantics will be important to avoid similar disruptions.