When a startup fails, its digital footprint doesn’t have to go to waste. Some defunct companies are turning their internal data—Slack messages, emails, and project tickets—into lucrative assets by selling it to AI firms for training purposes.
According to a Forbes report, this practice is becoming increasingly common, with former startup leaders and shutdown facilitators confirming its financial viability.
How Failed Startups Are Monetizing Their Data
Shanna Johnson, CEO of the now-defunct software company Cielo24, revealed that her team sold every Slack message, internal email, and Jira ticket as training data for “hundreds of thousands of dollars.”
This trend isn’t isolated. SimpleClosure, a startup specializing in helping companies wind down operations, reported a surge in demand from AI companies seeking workplace data. In response, SimpleClosure launched a tool enabling businesses to sell their internal communications—including Slack archives and email chains—to AI labs. The company has processed 100 such deals in the past year, with payouts ranging from $10,000 to $100,000.
Privacy Concerns in the Data Economy
While data anonymization is often cited as a safeguard, experts warn that even sanitized workplace communications can expose personally identifiable information, particularly for long-term employees.
Marc Rotenberg, founder of the Center for AI and Digital Policy, emphasized that the data isn’t abstract but tied to real individuals: “I think the privacy issues here are quite substantial. Employee privacy remains a key concern, particularly because people have become so dependent on these new internal messaging tools like Slack. … It’s not generic data. It’s identifiable people.”
Workplace Privacy Tensions Amid AI Adoption
AI integration in the workplace has sparked ethical debates, with employees citing data privacy as a major deterrent. A Gallup poll found that ethical opposition and privacy concerns are among the top reasons workers resist AI tools on the job.
Privacy anxieties extend beyond AI. A 2024 survey by Checkr, a background check platform, revealed that nearly half of 3,000 respondents would accept a pay cut to avoid employer tracking of their online activity.
The Rise of New AI Training Data Models
Large language models have traditionally relied on publicly available data, such as news articles, books, and social media posts. However, advanced agentic models—AI systems capable of autonomous decision-making—require more nuanced datasets. These include internal documents, emails, and FAQs that provide context, feedback, and real-time workplace dynamics.
The growing demand for workplace data is fueling entirely new business models. AfterQuery, a San Francisco-based research lab, develops digital office “worlds” that AI labs purchase to train agents in navigating real-world workplaces. These datasets encompass everything from Slack channels organizing team happy hours to emails troubleshooting technical issues—all valuable assets in the AI training economy.
As this trend accelerates, the line between workplace efficiency and privacy erosion continues to blur. One day, AI agents may handle mundane tasks like planning happy hours or drafting emails—but their training will owe much to the digital remnants of startups that never quite made it.