Jeffrey Epstein’s death on August 10, 2019, sparked a surge of conspiracy theories, and each new release of government documents related to his case is likely to fuel further speculation. Yet Epstein’s death is just one chapter in a much larger story, one that has given rise to persistent conspiracy narratives.
The U.S. Department of Justice has made over 3 million documents related to Epstein’s shadowy sex-trafficking networks publicly available. Journalists and researchers are attempting to analyze this vast trove, but progress is slow, and the DOJ’s official interface for accessing the files is cumbersome. In response, some Americans have taken matters into their own hands, developing artificial intelligence tools to navigate the Epstein files more efficiently and extract new interpretations from the data.
As a scholar of online conspiratorial activity, I’ve observed that these AI-driven platforms are not only simplifying data analysis—they’re also shaping how conspiracy theorists construct their narratives.
Do-It-Yourself Conspiracy Platforms and Their Risks
The Epstein files consist of an unstructured dataset containing PDFs, videos, photographs, and other materials. AI-powered platforms are designed to help users identify connections within this chaotic archive, even where none exist. Some of these platforms present themselves as neutral, data-driven research tools, but they are often created by conspiracy theorists to promote and reinforce conspiratorial thinking—a phenomenon I call “platform conspiracism.”
Epstein-related conspiracy theories frequently rely on the post hoc ergo propter hoc fallacy—the flawed assumption that because event A preceded event B, A must have caused B. For example, in 2017, QAnon adherents claimed a secret cabal of satanic pedophiles was trafficking children. When Epstein’s crimes later came to light, QAnon supporters treated this as proof that their theories were correct.
Some operators of these Epstein-focused platforms are blending their narratives with ideas from QAnon and other conspiracy movements, incorporating claims of cannibalism, satanism, or CIA mind-control experiments like MK Ultra.
These platforms attract a broad audience because many Americans are alarmed by the apparent reach of Epstein’s associates into government, entertainment, academia, and the tech industry. Others are simply curious about who appears in the files and why. Regardless of intent, the consequences of these DIY conspiracy platforms are clear: they foster paranoia and normalize conspiratorial thinking.
Each time the DOJ releases—or even delays the release of—new documents, public interest spikes. Social media influencers quickly amplify their own interpretations of the files, often sharing videos that present speculative claims as fact.
AI Platforms Masquerading as Neutral Research Tools
One such platform, WEBB, claims to use AI for “document intelligence,” promising to help researchers explore Epstein’s files, flight logs, court documents, and depositions. Its sleek interface features animated red threads that move as users scroll. Under the hood, the platform automates the tedious work of cleaning unstructured data: according to the site, WEBB converts optically scanned documents into searchable text, enabling users to cross-reference names, dates, and locations with ease.
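To make the mechanics concrete, here is a minimal sketch of the kind of cross-referencing such tools automate once scanned pages have been converted to text. This is not WEBB’s actual code; the document names, text snippets, and dates below are invented placeholders, and the logic simply finds terms that co-occur across documents.

```python
import re
from collections import defaultdict

# Hypothetical snippets standing in for text already extracted (e.g., by OCR)
# from scanned pages. All names and dates here are invented placeholders.
documents = {
    "flight_log_p12.txt": "Passenger: J. Doe. Departed 2002-03-14 from Teterboro.",
    "deposition_p88.txt": "Witness stated J. Doe attended a meeting on 2002-03-14.",
    "court_doc_p03.txt": "No mention of the individual in question.",
}

def build_index(docs, pattern):
    """Map each regex match to the set of documents it appears in."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for match in re.findall(pattern, text):
            index[match].add(doc_id)
    return index

# Cross-reference ISO-style dates shared by more than one document.
date_index = build_index(documents, r"\d{4}-\d{2}-\d{2}")
shared_dates = {d: sorted(ids) for d, ids in date_index.items() if len(ids) > 1}
print(shared_dates)
```

Note what this toy example illustrates about the article’s point: a date appearing in two documents is a co-occurrence, not evidence of a causal link, yet an interface that surfaces such matches automatically can make them feel like discoveries.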
However, the platform’s neutrality is questionable. While it may appear to be a legitimate research tool, its design and features are tailored to highlight connections that align with conspiratorial narratives. By automating data processing, WEBB removes the need for manual verification, making it easier for users to accept unverified claims as evidence.
This raises a critical question: Are these AI tools empowering independent research, or are they merely accelerating the spread of misinformation? The answer likely depends on who is using them—and for what purpose.