OpenAI Employees Flagged Mass Shooter’s Concerning Chatbot Interactions Months Before Tragedy
OpenAI employees were aware of disturbing interactions between Jesse Van Rootselaar and the company’s ChatGPT system months before the 18-year-old carried out one of Canada’s deadliest mass shootings in Tumbler Ridge, British Columbia, according to a new Wall Street Journal report.
Approximately a dozen employees at the artificial intelligence company reportedly knew about Van Rootselaar’s concerning conversations, which unfolded over multiple days and included violent scenarios involving guns. These interactions were initially flagged by an automated review system designed to identify potentially dangerous content.
Despite some employees advocating to contact law enforcement, OpenAI ultimately decided against alerting authorities. The company’s policy stipulates that it will only notify police if there is an “imminent threat of real-world harm or violence,” a threshold the company determined had not been met in this case.
On February 10, Van Rootselaar killed his mother and stepbrother at their home, then proceeded to Tumbler Ridge Secondary School, where the former student fatally shot five students and a teacher before taking his own life. Twenty-five others were reportedly injured in the attack.
“We banned the account in June 2025 for violating our usage policies,” an OpenAI spokesperson told Fox News Digital, adding that the company did not believe the activity warranted law enforcement notification. The spokesperson emphasized the delicate balance between safety concerns and privacy rights, noting that excessive reporting to police could create unintended harm.
Authorities later revealed that Van Rootselaar was a biological male who had been identifying as female since age six and had a history of mental health struggles. Police had previously responded to incidents at the teen’s home on multiple occasions.
According to reports by the New York Post, Van Rootselaar had developed an obsession with death, frequently visiting websites hosting videos of murders. The teen’s social media presence included images with firearms and content related to hallucinogenic drugs. As early as 2015, Van Rootselaar’s mother had expressed concerns about her child’s behavior in a Facebook parents’ group.
The case raises critical questions about the responsibility of AI companies to monitor and report potentially dangerous user behavior. OpenAI has designed its chatbot to discourage real-world harm when it detects dangerous scenarios, but the effectiveness of such safeguards is now under scrutiny.
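OpenAI has not published details of the internal review system described in the report. As a rough illustration only, the sketch below uses the company’s publicly documented Moderation API to show the kind of automated flagging such a safeguard might rely on; the model name is the public one, but the escalation threshold and helper function are assumptions, not details from the report.

```python
# Illustrative sketch only: OpenAI's internal review pipeline is not public.
# This uses the publicly documented Moderation API to show how automated
# flagging of violent content can work in principle.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def flag_for_review(message: str) -> bool:
    """Return True if a message scores as violent enough to warrant human review."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=message,
    ).results[0]
    # `flagged` is the model's overall call; category scores allow finer policies.
    # The 0.5 cutoff is an assumed value for illustration, not a known OpenAI setting.
    return result.flagged and result.category_scores.violence > 0.5


if flag_for_review("example user message"):
    print("Escalate to a human reviewer")
```

Even where such automated signals exist, the decision to involve law enforcement remains a human policy judgment, which is the step now under scrutiny in this case.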
Following the shooting, OpenAI proactively contacted the Royal Canadian Mounted Police (RCMP) and is cooperating with their investigation by providing information about Van Rootselaar’s interactions with ChatGPT.
“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” the company stated. “We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”
The incident comes amid growing debate about AI ethics and safety protocols. Tech companies like OpenAI face increasing pressure to establish clear guidelines for intervening when users exhibit concerning behavior, particularly as AI systems become more sophisticated and widely used.
The tragedy has sparked renewed discussions about the intersection of mental health, technology use, and violence prevention, highlighting the complex challenges that emerge as artificial intelligence becomes more deeply integrated into society.


9 Comments
It’s alarming that OpenAI failed to notify law enforcement about this case. Even if the interactions didn’t meet their internal criteria, they had a moral and ethical obligation to share the information. Their policies need to be updated to prioritize public safety over other concerns.
Absolutely. OpenAI should not be making these judgment calls on their own. They need to establish clear protocols for escalating potential threats to the proper authorities, no matter how they interpret the risk level internally.
This is a heartbreaking situation. OpenAI’s decision not to alert the police about the shooter’s concerning interactions with ChatGPT is extremely troubling. They need to take a hard look at their policies and put public safety first, even in cases that don’t meet a strict ‘imminent threat’ criterion.
This is a sobering reminder that AI companies need to have robust protocols in place for identifying and escalating potential threats. OpenAI’s failure to notify authorities in this case is extremely concerning and they need to be held accountable.
Absolutely. AI systems are powerful tools, but companies like OpenAI have a duty of care to the public. They clearly need to re-evaluate their threat assessment processes and be more proactive in sharing information with law enforcement.
This is a tragic failure on OpenAI’s part. They had a responsibility to be more proactive in sharing information that could have prevented such a devastating loss of life. Their decision-making process around ‘imminent threat’ seems dangerously flawed and needs to be thoroughly re-examined.
Agreed. Even if the interactions didn’t meet their internal criteria, they should have erred on the side of caution and alerted law enforcement. The stakes are too high for them to take such a narrow view of their responsibilities.
This is a concerning situation. If OpenAI employees were aware of the shooter’s disturbing interactions with ChatGPT, they should have taken stronger action to alert the authorities, even if it didn’t meet their threshold for ‘imminent threat’. Public safety has to be the top priority.
I agree. OpenAI’s policy seems too narrow – they need to be more proactive in reporting potential warning signs, even if the threat isn’t immediate. Lives could have been saved if they’d alerted police.