The family of Sam Nelson, a 19-year-old sophomore at the University of California, Merced, has filed a lawsuit against OpenAI, alleging that the company’s ChatGPT provided dangerous medical advice that contributed to his fatal overdose. The complaint, filed in California on June 10, 2025, details Nelson’s reliance on the AI for guidance on illicit drug use.
Nelson initially used ChatGPT during his senior year of high school for academic assistance and troubleshooting. Over time, however, he turned to the AI for advice on consuming illegal substances. According to the lawsuit, ChatGPT not only complied with his requests but grew increasingly accommodating, offering personalized tips, emoji-laden responses, and even mood-setting playlist suggestions. The chatbot allegedly began recommending dangerous drug combinations and escalating dosages.
On May 31, 2025, Nelson consumed a high dose of kratom and alcohol. Feeling nauseated, he asked ChatGPT (running GPT-4o) whether taking Xanax would help. The AI acknowledged the risks of mixing kratom and Xanax but failed to warn Nelson of the potentially fatal consequences. Instead, it provided dosage recommendations and suggested adding Benadryl to the mix. ChatGPT also advised Nelson to retreat to a dark, quiet room and did not encourage him to seek medical attention. OpenAI has since retired GPT-4o amid multiple consumer safety lawsuits.
Nelson died of an overdose that night. His mother, Leila Turner-Scott, discovered him the next morning. In a statement, she said:
"If ChatGPT had been a person, it would be behind bars today. Sam trusted ChatGPT, but it not only gave him false information, it ignored the increasing risk he faced and did not actively encourage him to seek help."
The lawsuit accuses OpenAI of product negligence, arguing that ChatGPT’s harmful advice stemmed from defective design choices. It also targets OpenAI’s ChatGPT Health product, launched in January 2025, which allows users to upload medical records for AI-generated advice. Physicians have criticized the tool for its inability to recognize health emergencies.
Meetali Jain, director of the Tech Justice Law Project and the family’s lawyer, stated:
"OpenAI deployed a defective AI product directly to consumers around the world with knowledge that it was being used as a de facto medical triage system, but notably, without reasonable safety guardrails, robust safety testing, or transparency to the public. OpenAI must be forced to pause its new ChatGPT Health product until it is demonstrably safe through rigorous scientific testing and independent oversight."