OpenAI Hit With Barrage of Lawsuits Over Failure to Report School Shooter Before Massacre

Seven families — the first wave of dozens, lawyers say — are suing OpenAI, alleging that the company failed to provide Canadian authorities with information that could’ve prevented a horrific school shooting in the rural mining town of Tumbler Ridge, British Columbia, despite having advance knowledge of the shooter’s disturbing conversations with the chatbot.

The lawsuits also claim that OpenAI has misled the public about the steps it says it took to stop the shooter from using ChatGPT to discuss mass violence.

In early February, 18-year-old Jesse Van Rootselaar killed her mother and younger stepbrother before traveling to Tumbler Ridge’s secondary school, where she opened fire on students and teachers using a modified rifle.

Five students, all aged between 12 and 13, and a teacher were murdered. Twenty-seven more people were wounded, some severely. Several parents were forced to identify their children by their clothing because the damage wrought on the kids’ young bodies was so extreme. The shooter died by suicide.

Like millions of other people, Van Rootselaar was a ChatGPT user. In late February, a bombshell Wall Street Journal report revealed that in June 2025, months before the eventual shooting, OpenAI’s automated moderation tools flagged Van Rootselaar’s account for graphic discussions of mass violence. Human reviewers at the company were alarmed by the content, and — convinced that Van Rootselaar’s interactions with ChatGPT represented a credible imminent threat to the lives of others — they urged OpenAI executives to warn Canadian law enforcement.

After a debate that reportedly involved about a dozen staffers, OpenAI leaders chose to say nothing, and moved instead to deactivate Van Rootselaar’s account.

Filed in California, the lawsuits — which describe ChatGPT as a “co-conspirator” in the school massacre — contend that had OpenAI alerted law enforcement, local officials could’ve intervened before it was too late. OpenAI’s inaction, the lawsuits allege, was a business decision driven by the potential future liability that reporting troubling interactions like Van Rootselaar’s would invite, and the threat that liability could pose to the company’s ongoing momentum toward an IPO.

The plaintiffs include the families of each victim murdered at the school: 13-year-old Ezekiel Schofield; 12-year-old Zoey Benoit; 12-year-old Ticaria “Tiki” Lampert; 12-year-old Abel Mwansa Jr.; 12-year-old Kylie Smith; and 39-year-old education assistant Shannda Aviugana-Durand.

Also among the plaintiffs is the family of Maya Gebala, a 12-year-old who was shot three times in the head and neck. Gebala survived, but with “catastrophic” injuries to her brain, and remains in critical condition. (In March, Gebala’s family filed a lawsuit against OpenAI in Canada; this new suit supersedes the family’s initial filing.)

The families are seeking to hold OpenAI “accountable” for “designing a dangerous product, ignoring the warnings of their own safety team, refusing to notify authorities when they knew the Shooter was planning a mass attack, inviting them back onto the platform after deactivating their account,” the lawsuits collectively read, “and choosing profit over the lives of the children of Tumbler Ridge.”

***

As OpenAI confirmed in February, Van Rootselaar’s account was deactivated in June 2025 for conversations so extreme that they kicked off a debate among high-level staff at one of the world’s buzziest AI companies.

After the deactivation, the lawsuits point out, Van Rootselaar quickly created a new account. Despite the existence of this second account, OpenAI has continued to refer to its deactivation of Van Rootselaar’s account as a “ban,” language that OpenAI CEO Sam Altman reiterated as recently as Friday when he issued a public letter apologizing to the people of Tumbler Ridge.

“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman wrote in the letter. “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”

The company also characterized the shooter’s creation of a second account as an evasion of its guardrails, which it claims are designed to prevent repeat offenders from doing exactly what Van Rootselaar did: starting up a new account when one is deactivated for safety violations.

According to the lawsuits, however, Van Rootselaar didn’t “evade” OpenAI’s guardrails, a word that suggests she engaged in some level of complicated trickery to get around safeguards. Rather, the suits allege, the killer simply followed the company’s advice.

When a user account is deactivated, as the lawsuit outlines, OpenAI’s customer service advises users that they can “create a new account using the same email address once 30 days have passed since the deletion.” But if a user would “prefer not to wait,” the company continues, they “have the option to register immediately using an alternative email address.”

The message even goes on to advise users on how to make an email alias for this purpose.

“While your email provider will likely treat both addresses the same,” the message continues, “our system will recognize the sub-address as a new account.”

Van Rootselaar’s seemingly uncomplicated creation of a second account — which used her real name, as OpenAI appeared to admit in a February letter to Canada’s AI minister — doesn’t sound much like the “ban” that OpenAI has claimed, in other words.

As the lawsuits put it, “there were no safeguards to evade.”

“The Shooter simply followed OpenAI’s own instructions to create a new account after being banned. The ‘safeguards’ OpenAI pointed to after the attack did not fail; they did not exist,” they read. “OpenAI lied because the truth is worse: the company does not ban users for violent activity. It tells them how to come back in.”

***

Despite Altman’s recent apology, OpenAI has largely defended its decision not to alert law enforcement, arguing that its leaders, unlike its concerned safety staff, didn’t believe that the shooter’s chat logs pointed to an imminent threat. The company has also pointed to concerns about user privacy.

But exactly how OpenAI calculates imminent risk — or the lack thereof — remains unclear. The self-regulated AI industry has no enforced, or even loosely agreed-upon, reporting thresholds, even when it comes to potential mass casualty events. As Wired reported earlier this month, OpenAI is even backing legislation in Illinois that would shield it from liability in AI-tied mass casualty events in which 100 or more people are killed or injured.

OpenAI declined to respond to Futurism’s request for comment.

The Tumbler Ridge cases come as OpenAI faces scrutiny over ChatGPT’s role in another mass shooting. Chat logs obtained by the Florida Phoenix show that 20-year-old Florida State University (FSU) student Phoenix Ikner, who killed two adults and wounded seven people during an April 2025 rampage on FSU’s campus, obsessively communicated with ChatGPT in the leadup to the shooting. In these disturbing conversations, Ikner engaged with the chatbot in descriptions of child abuse, referred to himself as an “incel” and “ugly,” wondered whether the Oklahoma City bomber was “right,” and discussed a possible shooting at his university. Just minutes before opening fire, he asked the bot how to turn off the safety on one of his weapons.

Mass shootings aren’t the only kinds of violence linked to ChatGPT use. The widely used chatbot has also played a concerning role in domestic abuse and stalking, and continues to show up in murder cases, with chat logs revealing the AI as a willing conversation partner in users’ worsening fixations on other real people.

OpenAI is also facing more than a dozen lawsuits from AI users or their family members who claim that ChatGPT pulled users into delusional or suicidal spirals that caused them psychological harm, reputational damage, financial ruin, and even suicide.

The details of the Tumbler Ridge massacre are extraordinarily painful. As the lawsuits make clear in horrifying detail, the victims died in terrible ways. Gebala, the girl who was shot while trying to lock a door to keep the shooter out, will likely live with permanent disabilities if she survives.

The surviving children of Tumbler Ridge, meanwhile, attend classes in trailers, as their rural mining town’s empty secondary school awaits demolition.

More on ChatGPT-linked mass violence: The Florida Mass Shooter’s Conversations With ChatGPT Are Worse Than You Could Possibly Imagine

