OpenAI Reveals Canada School Shooter Used Secondary Account to Bypass Ban

ChatGPT creator OpenAI disclosed Thursday that the perpetrator of one of Canada’s deadliest school shootings had circumvented a previous account ban by creating a second profile on the AI platform. The company made this revelation in a letter to the Canadian government outlining immediate safety measures it plans to implement.

According to Ann O’Leary, OpenAI’s vice president for global policy, the company discovered shooter Jesse Van Rootselaar’s second account only after her name was released by the Royal Canadian Mounted Police. Van Rootselaar killed eight people before taking her own life in Tumbler Ridge, British Columbia, on February 10.

“The shooter somehow evaded systems to prevent banned users from creating new accounts,” O’Leary explained in the letter. OpenAI shared the second account with law enforcement immediately upon its discovery.

Van Rootselaar’s initial ChatGPT account was terminated in June 2025 following a violation of the company’s usage policies. The account was first flagged by automated systems and then reviewed by human moderators to determine if its content warranted referral to law enforcement.

“Based on what we could see at that time the account was banned in June 2025, we did not identify credible and imminent planning that met our threshold to refer the matter to law enforcement,” O’Leary stated.

In response to the tragedy, OpenAI is implementing several changes to its safety protocols. The company pledged to strengthen its detection systems to better prevent banned users from creating new accounts and to “prioritize identifying the highest risk offenders.” It will also lower thresholds for contacting authorities “when conversations cross the line into an imminent and credible risk.”

“With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today,” O’Leary acknowledged.

British Columbia Premier David Eby told reporters Thursday that OpenAI CEO Sam Altman has agreed to meet with him to discuss the incident. Eby noted that while OpenAI had informed his government that recent changes to their protocols would have resulted in police notification before the killings, this was “cold comfort” for the families affected in Tumbler Ridge.

The case highlights growing concerns about AI platforms’ responsibility in monitoring potentially dangerous content. Canada’s Artificial Intelligence Minister Evan Solomon summoned OpenAI representatives to Ottawa this week to explain their safety procedures and decision-making processes in the wake of the shooting.

“All options are on the table,” Solomon stated, as the government develops a “suite of measures” to address online harms and other digital policy issues. The incident may accelerate regulatory discussions about AI safety and accountability across North America.

According to police, Van Rootselaar first killed her mother and stepbrother at their family home before attacking the nearby school. Authorities noted that she had a history of mental health contacts with police, though the motive for the shooting remains unclear.

OpenAI also committed to developing a direct point of contact with Canadian law enforcement to facilitate faster communication in future situations that may pose public safety risks.

“The events in Tumbler Ridge are an unspeakable tragedy, and our hearts remain with the victims, their families, and the entire community,” O’Leary wrote in the company’s letter.

This attack was Canada’s deadliest mass shooting since 2020, when a gunman in Nova Scotia killed 22 people in a rampage that included multiple shootings and arson.
