In February, 18-year-old Jesse Van Rootselaar killed eight people and injured dozens more before taking her own life, in a violent rampage that began at her home and escalated at a high school in Tumbler Ridge, British Columbia.
Investigators later discovered that Van Rootselaar’s ChatGPT account had been flagged and banned by OpenAI’s staff months before the attack. The account reportedly contained descriptions of “scenarios involving gun violence.”
Despite these red flags, OpenAI did not notify law enforcement, a failure that has sparked ethical debate about the company's responsibility and the broader societal impact of AI technology. The incident has heightened concerns over AI's role in facilitating stalking, violence, and murder.
Sam Altman’s Apology
OpenAI CEO Sam Altman has issued a formal apology, acknowledging the company’s shortcomings.
“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” he wrote in an open letter dated April 23 and addressed to the Tumbler Ridge community. “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”
“I want to express my deepest condolences to the entire community,” he added. “No one should ever have to endure a tragedy like this. I cannot imagine anything worse in this world than losing a child.”
British Columbia Premier Responds
BC Premier David Eby criticized the apology as insufficient.
“The apology is necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge,” he replied in a tweet.
OpenAI’s Response and Policy Changes
Following the tragedy, OpenAI pledged to improve its protocols. Ann O’Leary, OpenAI’s head of global policy, outlined key changes in a letter:
- Mental health and behavioral experts now assist in assessing high-risk cases.
- Referral criteria have been expanded to account for indirect indicators of violence, even if users do not explicitly mention targets, methods, or timelines.
- Under the new law enforcement referral protocol, the account banned in June 2025 would, if flagged today, be reported to authorities.
Altman reiterated this commitment in his apology, stating that OpenAI will “find ways to prevent tragedies like this in the future” by collaborating with “all levels of government.”
Another ChatGPT-Linked Shooting in Florida
The Tumbler Ridge shooting is not the only recent case involving ChatGPT. Roughly ten months prior, Phoenix Ikner, a student at Florida State University, killed two people and injured seven others on campus.
Transcripts later revealed disturbing conversations in which Ikner discussed plans for the attack with ChatGPT.