
OpenAI Considered Alerting Canadian Police Before School Shooter’s Rampage

OpenAI revealed Friday that it had identified and banned the account of Jesse Van Rootselaar in June 2023, nearly a year before the 18-year-old carried out one of Canada’s deadliest school shootings, in Tumbler Ridge, British Columbia.

The San Francisco-based artificial intelligence company, known for creating ChatGPT, said it had flagged Van Rootselaar’s account through its abuse detection system for “furtherance of violent activities.” At the time, OpenAI considered notifying the Royal Canadian Mounted Police (RCMP) but ultimately determined the account activity didn’t meet its threshold for law enforcement referral.

That threshold, according to the company, requires an “imminent and credible risk of serious physical harm to others.” OpenAI said it did not identify credible or imminent planning in Van Rootselaar’s activities that would have warranted contacting authorities.

The revelation comes in the aftermath of last week’s tragedy, in which Van Rootselaar killed eight people, beginning with her mother and stepbrother at the family home before attacking a nearby school. The shooter died of a self-inflicted gunshot wound following the rampage.

“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” an OpenAI spokesperson said. “We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”

RCMP Staff Sgt. Kris Clark confirmed that OpenAI contacted police after the shootings occurred. Authorities are now conducting a “thorough review of the content on electronic devices, as well as social media and online activities” belonging to Van Rootselaar. Digital and physical evidence is being “collected, prioritized, and methodically processed,” according to Clark.

The incident raises difficult questions about the responsibility of AI companies to monitor user behavior and when to notify law enforcement about potential threats. The balance between user privacy and public safety has become increasingly complex as AI systems become more capable of generating potentially harmful content.

The remote town of Tumbler Ridge, with approximately 2,700 residents, is located in the Canadian Rockies, more than 1,000 kilometers northeast of Vancouver near the Alberta border. The shooting victims included a 39-year-old teaching assistant and five students between the ages of 12 and 13.

Police have reported that Van Rootselaar had previous mental health contacts with law enforcement, though the motive for the shooting remains unclear. Authorities continue to investigate the circumstances that led to the tragedy.

This attack represents Canada’s deadliest rampage since 2020, when a gunman in Nova Scotia killed 13 people and set fires that resulted in nine additional deaths. The shooting has reignited discussions about gun control, mental health resources, and school safety measures across Canada.

As AI tools become more integrated into daily life, incidents like this highlight how warning signs can surface in users’ interactions with these platforms. Technology companies face mounting pressure to develop more sophisticated monitoring systems while respecting privacy boundaries.

The RCMP investigation continues as the community of Tumbler Ridge grieves its profound loss. Memorial services for the victims are being planned as the town begins its difficult healing process.

OpenAI’s disclosure, first reported by The Wall Street Journal, adds a new dimension to ongoing debates about the ethical obligations of AI companies and the potential for early intervention in cases where users demonstrate concerning behavior.


8 Comments

  1. Jennifer M. White

    This is a very difficult and sensitive situation. Transparent dialogue between tech companies, policymakers, and public safety officials will be crucial to finding the right balance between user privacy and public welfare. Lessons learned could impact many industries.

  2. Tragic that the warning signs were apparently missed. Improved collaboration between tech companies and law enforcement may be needed to better identify and respond to such threats in the future.

    • Linda U. Thomas

      I agree, finding the right way to share information while respecting privacy is crucial. Hopefully lessons from this will lead to policies that can save lives down the line.

  3. The mining and commodities sectors will be watching this closely, as any perceived failures in threat detection could impact public confidence and regulatory oversight. Proactive transparency from companies like OpenAI is important.

  4. This is a disturbing situation. OpenAI’s challenge in determining when to alert authorities about potential threats is a difficult balance. More transparency around their abuse detection and reporting policies could help build public trust.

  5. William Thomas

    While the details are still emerging, this is a heartbreaking situation. Strengthening collaboration between tech companies, mental health professionals, and law enforcement may be needed to better identify and intervene before such tragedies occur.

  6. Curious to see if this incident prompts changes in how tech firms handle abuse detection and law enforcement referrals, especially for industries like mining that deal with sensitive topics. Balancing user privacy with public safety is an ongoing challenge.

  7. John Rodriguez

    Tragic that the warning signs were apparently missed. Improved information sharing between tech firms, mental health experts, and law enforcement could potentially help identify and mitigate such threats in the future, though privacy concerns must also be addressed.


© 2026 Disinformation Commission LLC. All rights reserved.