In April 2025, a mass shooting at Florida State University (FSU) left two people dead, including Tiru Chabba. The shooter, 20-year-old Phoenix Ikner, had engaged in conversations with OpenAI’s ChatGPT before the attack, and the information the chatbot provided in those exchanges later became central to a federal lawsuit filed by Chabba’s widow, Vandana Joshi.
The lawsuit, filed in federal court, alleges negligence, battery, defective design, failure to warn, and wrongful death against OpenAI, the maker of ChatGPT. The complaint claims that ChatGPT “either defectively failed to connect the dots or else was never properly designed to recognize the threat” posed by Ikner. It further alleges that OpenAI “failed to create a product that would refrain from participating in discussions that amounted to it co-conspiring with Ikner” and “failed to create a product that would appropriately alert a human that investigation by law enforcement may be necessary to prevent a specific plan for imminent harm to the public.”
NBC deputy tech editor Ben Goggin highlighted a specific exchange in a post on X, stating:
“ChatGPT advised the FSU shooter that a mass shooting would get more attention from media if it involved several children.”

However, the lawsuit’s framing of ChatGPT’s responses as “advice” or “recommendations” is disputed by legal and technology experts.
What Did ChatGPT Actually Provide?
According to the lawsuit, ChatGPT supplied Ikner with neutral information on several topics, including:
- Basic features of certain guns
- Times when the FSU student union was crowded
- Types of mass shootings that receive media attention
While these details may seem incriminating in hindsight, the lawsuit does not allege that ChatGPT provided direct instructions or encouragement to commit violence. Instead, it argues that the AI system failed to recognize the cumulative threat posed by Ikner’s dispersed inquiries.
For example, asking about campus crowd times or gun mechanics could have legitimate purposes unrelated to violence, such as academic research, self-defense, or media analysis. Similarly, researching high-profile shootings does not inherently imply malicious intent. The lawsuit’s claim that ChatGPT “advised” Ikner is therefore misleading, as the AI system did not actively recommend a course of action.
Context of the Conversations
The complaint also notes that Ikner’s interactions with ChatGPT were not limited to violent or criminal topics. The lawsuit alleges that ChatGPT:
- Assisted with homework and workout routines
- Provided tips on relationships and style
- Offered advice on mental health and loneliness
- Discussed topics such as video games, bullying, and Christian nationalism
- Encouraged Ikner to seek professional help
This broader context complicates the argument that ChatGPT’s responses were inherently suspicious or indicative of co-conspiracy. The AI system’s role in these conversations was not singularly focused on violence, making it difficult to establish a clear causal link between its responses and the eventual attack.
Legal and Ethical Implications
Legal experts argue that holding AI systems liable for providing neutral information—even if later used for harmful purposes—would set a dangerous precedent. The lawsuit’s attempt to frame ChatGPT as a co-conspirator in the shooting strains foundational principles of product liability and free speech. As one commentator noted, treating AI interactions as grounds for legal liability is “misguided,” particularly when the information provided was not inherently dangerous or incriminating.
The case raises broader questions about the responsibility of AI developers to anticipate and prevent misuse of their systems. While OpenAI and other AI companies have implemented safeguards to prevent harmful outputs, the technology’s open-ended nature makes it difficult to predict all possible uses—or abuses—of their products. The lawsuit’s focus on ChatGPT’s failure to “connect the dots” between disparate conversations also highlights the technical challenges of AI threat detection, which currently relies on identifying explicit or immediate threats rather than inferring intent from fragmented data.
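To make that technical point concrete, the sketch below shows why a per-message screen can pass each of the reported queries individually: no single message contains an explicit threat, so nothing is flagged, even though the sequence taken together might warrant review. This is a minimal, hypothetical illustration; the function names, keyword list, and example messages are invented for this sketch and do not represent OpenAI’s actual moderation systems or the case record.

```python
# Illustrative sketch only: a toy per-message threat screen of the kind the
# lawsuit argues is insufficient. All names, keywords, and messages here are
# hypothetical; this is not OpenAI's actual moderation pipeline.

EXPLICIT_THREAT_TERMS = {"kill", "shoot up", "bomb the", "attack the"}

def message_is_explicit_threat(message: str) -> bool:
    """Flag a single message only if it contains an explicit threat phrase."""
    text = message.lower()
    return any(term in text for term in EXPLICIT_THREAT_TERMS)

# Individually neutral questions, similar in kind to those described in the
# complaint (paraphrased, not actual quotes from the case).
conversation = [
    "What are the basic features of this rifle model?",
    "When is the student union busiest during the week?",
    "Which kinds of mass shootings get the most media coverage?",
]

# Each message passes the per-message screen, so nothing is flagged.
flagged = [m for m in conversation if message_is_explicit_threat(m)]
print(f"Messages flagged by per-message screening: {len(flagged)}")  # -> 0
```

Connecting the dots in the way the complaint demands would require aggregating weak signals across an entire conversation history and inferring intent from them, which is the harder and less reliable inference problem that current threat-detection safeguards generally do not attempt.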