A critical Broken Object Level Authorization (BOLA) vulnerability in Lovable, an AI-powered app builder, has reportedly left thousands of legacy projects accessible to unauthorized users. According to security researchers, an API endpoint returned full project data, including source code, database credentials, AI chat histories, and customer information, for projects created before November 2025. While Lovable appears to have patched newly created projects, the persistent exposure of older projects has created a significant risk window for early adopters and organizations that built production applications on the platform.
What the reports say
Researchers monitoring the issue identified an API endpoint, https://api.lovable.dev/GetProjectMessagesOutputBody, that allegedly returned JSON responses containing message histories, internal AI thinking logs, tool-use records, and user identifiers without enforcing proper object-level authorization. The flaw, classified as BOLA, enables a low-privilege or free-tier account to retrieve sensitive objects belonging to other users simply by calling the endpoint with different object identifiers. BOLA tops the OWASP API Security Top 10 (as API1:2023), which ranks it among the most prevalent and dangerous API security issues.
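The identifier-tampering attack described above can be illustrated with a minimal in-memory sketch. Everything here is hypothetical: the project IDs, field names, and handler are placeholders modeling the class of flaw, not Lovable's actual API. The handler authenticates the caller but never checks whether the caller owns the requested object, so swapping in another user's identifier returns that user's data.

```python
# Hypothetical in-memory model of a BOLA-vulnerable endpoint.
# All project IDs, users, and fields below are illustrative only.
PROJECTS = {
    "proj-1001": {"owner": "alice", "chat_history": ["internal prompt"],
                  "secrets": {"SUPABASE_KEY": "sb-secret-alice"}},
    "proj-1002": {"owner": "bob", "chat_history": ["deploy notes"],
                  "secrets": {"SUPABASE_KEY": "sb-secret-bob"}},
}

def get_project_messages(caller: str, project_id: str) -> dict:
    """Vulnerable handler: the caller is authenticated, but there is no
    check that `caller` owns `project_id` (the missing object-level
    authorization), so any valid session can read any project."""
    project = PROJECTS[project_id]   # lookup by attacker-supplied ID
    return project                   # full object returned regardless of owner

# A low-privilege account ("mallory") simply changes the identifier
# in the request and receives another user's project wholesale:
leaked = get_project_messages("mallory", "proj-1001")
```

In a real deployment the `project_id` would arrive as a request parameter, but the effect is the same: authentication answers "who are you?", while the missing authorization step ("may *you* see *this* object?") is what BOLA exploits.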
Scope and notable exposures
Multiple examples uncovered during public analysis demonstrate the seriousness of the exposure. One affected project reportedly belonged to the nonprofit Connected Women in AI and contained Supabase database credentials alongside real user data. Researchers also reported finding records linked to individuals at organizations such as Accenture Denmark and Copenhagen Business School. Employees from large technology firms, including Nvidia, Microsoft, Uber, and Spotify, were named among users with affected projects, raising the possibility that internal development artifacts or proprietary code may have been exposed.
Timeline and vendor handling
The vulnerability was reportedly submitted to Lovable via HackerOne roughly 48 days before public disclosure. That submission was reportedly marked a duplicate of an earlier report (labeled as report #3583821) and described as "Informative," suggesting the platform had prior awareness of similar issues. Lovable is said to have applied a fix for newly created projects after the disclosure, but legacy projects created before November 2025 remain exposed, leaving a substantial population of projects that may still be vulnerable.
How the vulnerability works (technical overview)
A BOLA condition occurs when an API does not verify that the authenticated or calling user actually owns or is authorized to access the object referenced in the request. In this case, the endpoint returned sensitive project artifacts regardless of whether the requesting account had permission to view them. The exposed JSON reportedly included AI internal logs and session content that were never intended to be public, increasing the risk that proprietary models, prompts, or secrets embedded in chat contexts could be harvested.
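The fix for this class of flaw is a per-object ownership check before any data is returned. The sketch below is a hypothetical corrected handler (data model and names are illustrative, not Lovable's implementation); note that it returns the same error for a missing object and for someone else's object, so the endpoint cannot be used to enumerate which identifiers exist.

```python
# Hypothetical fix for a BOLA flaw: enforce object-level authorization
# before returning the object. Data model and names are illustrative.
PROJECTS = {
    "proj-1001": {"owner": "alice", "chat_history": ["internal prompt"]},
}

class Forbidden(Exception):
    """Raised when the caller may not access the requested object."""

def get_project_messages(caller: str, project_id: str) -> dict:
    """Authorized handler: verify that the authenticated caller owns
    the specific object referenced in the request."""
    project = PROJECTS.get(project_id)
    if project is None or project["owner"] != caller:
        # Identical response for "does not exist" and "not yours",
        # closing the ID-enumeration side channel as well.
        raise Forbidden(f"{caller} may not access {project_id}")
    return project
```

The essential point is that the check is tied to the concrete object, not to the caller's role or plan tier; role checks alone do not prevent BOLA.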
Immediate steps for affected users
- Immediately rotate any API keys, database credentials, or other secrets stored in affected projects.
- Revoke tokens and regenerate connection strings for services (e.g., Supabase) referenced by those projects.
- Assume chat histories and source code may have been accessed; audit logs and access records for suspicious activity.
- Conduct a secrets sweep across repositories and project artifacts: remove hard-coded credentials and move secrets to managed secrets stores.
- Notify affected stakeholders and users where required by policy or regulation, and consider engaging incident response or legal counsel if sensitive personal data or corporate intellectual property was exposed.
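The secrets sweep recommended above can be sketched as a simple pattern scan over project files. The patterns below are illustrative and deliberately minimal; a real sweep should use a dedicated scanner (e.g., gitleaks or trufflehog) with production-grade rules and entropy checks.

```python
import re
from pathlib import Path

# Illustrative credential patterns only; real tooling ships far
# broader rule sets and entropy-based detection.
PATTERNS = {
    "supabase_like_key": re.compile(r"\bsb[-_][A-Za-z0-9_-]{20,}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def sweep(root: str) -> list[tuple[str, str]]:
    """Walk `root` and return (file, pattern_name) pairs for every
    suspected hard-coded secret. Unreadable files are skipped."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), name))
    return hits
```

Any hit should be treated as compromised: rotate the credential first, then remove it from the artifact, since deleting a secret from source does not revoke it.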
Broader lessons for AI-native development
This incident highlights a recurring challenge in AI-first and low-code platforms: rapid feature rollout can outpace the implementation of robust security controls, leaving legacy objects and data at risk. Organizations building production software on third-party AI builders should treat the platform as an untrusted runtime for secrets. Best practices include independent secrets management, infrastructure-level access controls, periodic API security reviews, and avoiding embedding credentials or sensitive PII directly in model prompts or chat histories.
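One concrete form of the "independent secrets management" practice above is to keep credentials out of project files and AI chat contexts entirely and resolve them at runtime. This minimal sketch assumes the deployment platform injects secrets into the process environment (typically populated from a managed secrets store); the variable names are illustrative.

```python
import os

def require_secret(name: str) -> str:
    """Resolve a secret from the process environment at runtime rather
    than hard-coding it in source, prompts, or chat history. In
    production, the environment is populated by a managed secrets
    store, never by committed configuration."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required secret: {name}")
    return value
```

Because the secret never appears in the repository or in a prompt, a leak of project artifacts or chat logs, as in this incident, exposes only the variable's name, not its value.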
Closing thoughts
The Lovable incident is a reminder that convenience and speed in AI-enabled application development must be balanced with disciplined security hygiene. Early adopters can be especially vulnerable when platforms evolve their APIs and authorization models over time. For now, affected users should act quickly to rotate secrets and audit their projects; platform vendors must ensure fixes cover legacy data and communicate transparently about risk and remediation steps.