OpenAI published a blog post yesterday outlining its “commitment to community safety,” framing it as a reassuring overview of unobjectionable safeguards. The post acknowledged that “mass shootings, threats against public officials, bombing attempts, and attacks on communities and individuals are an unacceptable and grave reality in today’s world.” It also reflected on “how quickly violent intent can move from words to action,” emphasizing that people may “bring these moments and feelings into ChatGPT.”

OpenAI stated that it is training its product to “recognize the difference” between hypothetical discussions and imminent threats, drawing lines when conversations “start to move toward threats, potential harm to others, or real-world planning.” The company added that it is working to expand safeguards “to help ChatGPT better recognize subtle signs of risk of harm across different contexts” and will “surface real-world support and refer to law enforcement when appropriate” based on user interactions.

At first glance, the blog post appears to address theoretical concerns about preventing future violence. The reality is far more troubling: OpenAI's flagship chatbot, ChatGPT, has already been directly linked to real-world violence. The post's most glaring omission was its own motivation. News organizations, including Futurism, had been reaching out for comment on a new wave of seven lawsuits filed by families of victims of the February school massacre in Tumbler Ridge, British Columbia. Those lawsuits were made public the day after the blog post appeared.

The Tumbler Ridge shooter was a ChatGPT user. Weeks after the February tragedy, the Wall Street Journal reported that OpenAI's automated moderation tools had flagged the shooter's account in June 2025 for graphic descriptions of gun violence. Human reviewers were so alarmed that several urged OpenAI leaders to alert local officials. Instead, the company chose to deactivate the account. OpenAI later admitted that the shooter simply opened a new account, a workaround its own customer service has been found to encourage after deactivations, and continued using the service.

Eight months after the deactivation, the shooter murdered her mother and stepbrother at home before taking a modified rifle to Tumbler Ridge's secondary school. The attack killed five students and one teacher and wounded more than two dozen others. The murdered students were all 12 or 13 years old.

This is not the only instance of ChatGPT being linked to mass violence: Florida investigators recently opened a criminal probe into the chatbot's role in an April incident, deepening scrutiny of its real-world impact.

Source: Futurism