Seven lawsuits filed on Wednesday in a California court allege that OpenAI could have prevented one of Canada’s deadliest school shootings by acting on internal safety warnings about the shooter’s ChatGPT account.
The lawsuits claim that more than eight months before the shooting, OpenAI's internal safety team flagged the account as posing a credible threat of real-world gun violence. The plaintiffs contend the company should have notified law enforcement, particularly since local police already had a file on the shooter and had previously removed firearms from their home, but it failed to do so.
According to whistleblowers cited by The Wall Street Journal, OpenAI leadership decided that the user’s privacy and the potential stress of a police encounter outweighed the risks of violence. Instead of reporting the threat, the company deactivated the account and later instructed the shooter on how to regain access by signing up with a new email address, the lawsuits allege.