Since the launch of ChatGPT in 2022, artificial intelligence has disrupted numerous industries, but few have felt its impact as profoundly as software development. Developers at every skill level—from seasoned professionals to complete novices—have rapidly adopted AI tools, using chatbots and specialized platforms to generate code from natural language prompts.

This trend, known as vibe coding, enables nearly anyone to create entire applications in minimal time, regardless of their technical expertise. While the speed and accessibility are undeniably impressive, the approach comes with significant risks.

AI-Generated Apps Riddled with Security Flaws

A new report from cybersecurity firm RedAccess highlights a growing threat: many vibe-coded applications are deployed with glaring security vulnerabilities. The findings, detailed in a Wired investigation, reveal that thousands of apps built on platforms like Lovable, Replit, Base44, and Netlify are exposing sensitive user data.

According to RedAccess, 5,000 of these apps had virtually no security or authentication measures, while 40% exposed confidential information, including:

  • Medical records
  • Financial data
  • Corporate documents
  • Private chatbot conversation logs

“The end result is that organizations are actually leaking private data through vibe-coding applications. This is one of the biggest events ever where people are exposing corporate or other sensitive information to anyone in the world.”

Dor Zvi, Co-founder of RedAccess

Platforms Shift Blame to Users

In response to RedAccess’s findings, the affected platforms offered little in the way of remedies. Netlify reportedly did not respond to the report at all, while others, including Lovable, shifted responsibility onto users, arguing that creators must secure their own applications.

“We’re treating this as an ongoing matter. It’s also worth noting that Lovable gives builders the tools to build securely, but how an app is configured is ultimately the creator’s responsibility.”

Lovable spokesperson

However, this response overlooks a critical issue: AI-generated code frequently contains flaws. Without oversight from experienced developers or security experts, vulnerabilities often go unnoticed—yet these platforms are marketed as tools that eliminate the need for such expertise.
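To make the class of flaw concrete: the report describes apps whose data endpoints perform no authentication at all, so anyone who guesses or enumerates a record ID can read it. The sketch below is a hypothetical, minimal Python illustration of that pattern (it is not code from any platform named in the report), contrasting an unguarded handler with one that at least checks a bearer token:

```python
# Hypothetical illustration of the flaw class RedAccess describes:
# a generated API handler that returns sensitive records with no
# authentication check, next to a minimally guarded version.
# All names and data here are invented for the example.

RECORDS = {"patient-1": {"name": "A. Doe", "diagnosis": "confidential"}}

def handle_insecure(record_id):
    # Vibe-coded pattern: no auth, no ownership check -- any caller
    # who guesses or enumerates an ID can read the record.
    return 200, RECORDS.get(record_id)

def handle_checked(record_id, auth_header=""):
    # Minimal guard: require a bearer token before returning data.
    # (A real app would validate the token against a session store.)
    if auth_header != "Bearer valid-token":
        return 401, None
    return 200, RECORDS.get(record_id)
```

The point is not the specific fix but that nothing in the generation pipeline forces the second version over the first; without a human review step, the unguarded handler ships.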

“Anyone from your company at any moment can generate an app, and this is not going through any development cycle or any security check. People can just start using it in production without asking anyone. And they do.”

Dor Zvi, Co-founder of RedAccess

AI Development Tools Raise Serious Concerns

The risks of vibe coding extend beyond data leaks. Earlier this year, a vibe-coded operating system was exposed as a “bug-filled disaster,” further underscoring the dangers of unchecked AI-generated software.

Source: Futurism