A security researcher revealed that Lovable, a vibe-coding platform, exposed users’ chat histories with AI models, source code, database credentials, and customer data through its API. The exposure affected all projects created before November 2025.
Researcher Discovers Data Exposure via API
On Monday, X user @weezerOSINT reported the exposure in a post, stating:
“I made a Lovable account today and found that another user’s source code, database credentials, AI chat histories, and customer data are all readable by any free account.”
The post included a screenshot of another Lovable user’s project code and chats, along with an unresolved ticket for the bug that allegedly caused the data leak.
Lovable’s Initial Response and Clarification
In response to the report, Lovable initially claimed on X that no “data breach” had occurred, stating that exposing project code was “intentional behavior” for public projects. The company explained that users who mark their projects as “public” opt in to having their code visible to others.
However, this did not account for the exposure of users’ chats and prompts with the AI model, which Lovable later acknowledged was unintended.
Retroactive Patch and Backend Error
Lovable later clarified that it had retroactively patched its API to prevent public project chats from being accessed. The company admitted that in February, while unifying permissions in its backend, it accidentally re-enabled access to chats on public projects.
Lovable stated:
“We’re sorry our initial statement didn't properly address our mistake. Here's what a public project on Lovable means, and how we got to where we are today.”
Bug Reported via HackerOne in Early March
@weezerOSINT reported the issue in early March through HackerOne, a cybersecurity company that runs bug bounty programs. However, Lovable claims the ticket was closed because its “HackerOne partners” believed viewing public projects’ chats was “the intended behavior.”
Efficiency of AI in Security Research
In a follow-up conversation with Fast Company, @weezerOSINT (who did not share his real name) said the research took just 30 minutes using xAI’s Grok 4.2 model, noting that before AI tools, finding similar exposures typically took hours or days.