Families of Canadian school shooting victims have launched a landmark lawsuit against OpenAI, claiming the ChatGPT creator failed to alert authorities about the shooter’s alarming interactions with the AI platform before the deadly attack.

The lawsuit, filed Wednesday on behalf of 12-year-old Maya Gebala, who was critically injured in the February shooting in Tumbler Ridge, British Columbia, represents the first of dozens of planned legal actions from families affected by the tragedy. The cases will allege wrongful death, negligence, and product liability against the AI company.

“The decisions made by OpenAI and its CEO Sam Altman have destroyed the town,” said plaintiffs’ attorney Jay Edelson in an interview. “The people are really resilient, but what happened is unimaginable.”

The February 10 shooting in the small Canadian Rockies town left seven people dead, including five children and an educator, before the shooter took her own life. An additional 25 people were injured in what has been described as Canada’s deadliest mass shooting in years. The shooter killed her mother and 11-year-old stepbrother at their home before opening fire at Tumbler Ridge Secondary School.

Last week, OpenAI CEO Sam Altman issued a formal apology to the community, acknowledging that the company failed to notify law enforcement about the shooter’s online behavior. OpenAI has since revealed that it flagged the shooter’s account in June 2023 after detecting discussions about violence but determined at that time the activity didn’t meet their threshold for referral to authorities. The company subsequently banned the account for violating its usage policies.

“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman wrote in his letter. “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”

British Columbia Premier David Eby responded to Altman’s statement on social media, calling the apology “necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge.”

The case brings renewed focus to the potential dangers of AI systems and the tech industry’s responsibility to monitor and report concerning user interactions. It follows other recent incidents linking ChatGPT to violent acts, including an investigation into the deaths of two University of South Florida doctoral students in which prosecutors said the suspect had queried ChatGPT about body disposal before the students’ disappearance.

Edelson, a Chicago-based attorney known for challenging tech companies, is already representing families in several high-profile cases against OpenAI. These include the family of a California teenager who died by suicide after conversations with ChatGPT, and the heirs of an 83-year-old Connecticut woman allegedly killed by her son after ChatGPT reportedly amplified his “paranoid delusions.”

“This is not a passive technology,” Edelson explained, distinguishing AI chatbots from conventional online searches. “What we’ve seen in the past is that for people who are mentally ill, the chatbot will validate what they’re saying and then amplify what they’re saying.”

The lawsuits filed Wednesday represent families of the five children killed in the school shooting: Zoey Benoit, Abel Mwansa Jr., Ticaria “Tiki” Lampert, Kylie Smith, all 12, and Ezekiel Schofield, 13, as well as education assistant Shannda Aviugana-Durand.

According to the legal filings, “the victims didn’t learn this because OpenAI was forthcoming, but because its own employees leaked it to The Wall Street Journal after they could no longer stomach the company’s silence.”

In response to the lawsuit, OpenAI stated: “The events in Tumbler Ridge are a tragedy. We have a zero-tolerance policy for using our tools to assist in committing violence.” The company claims to have strengthened its safeguards, including improving ChatGPT responses to signs of distress, connecting users with local support and mental health resources, and enhancing detection of policy violations.

The Gebala lawsuit seeks not only damages but also court orders that would require OpenAI to permanently ban users whose accounts were deactivated for violent misuse, and to mandate that the company alert law enforcement when its systems identify someone posing a “real-world risk of violence.”

While an earlier case was filed in British Columbia, a team of lawyers from both the U.S. and Canada is working to bring affiliated cases to San Francisco, where OpenAI is headquartered, potentially setting a precedent for how AI companies are held accountable for user activities on their platforms.


© 2026 Disinformation Commission LLC. All rights reserved.