Florida’s attorney general launched a criminal investigation Tuesday into OpenAI’s ChatGPT, examining whether the artificial intelligence application bears legal responsibility in connection with a shooting at Florida State University last year that left two people dead and six others wounded.
Attorney General James Uthmeier announced at a Tampa news conference that prosecutors have conducted an initial review of chat logs between the gunman, Phoenix Ikner, and the AI chatbot to determine whether the application aided or abetted the crime.
“This criminal investigation will determine whether OpenAI bears criminal responsibility for ChatGPT’s actions in the shooting at Florida State University last year,” Uthmeier said.
The investigation marks one of the first instances where a state has initiated a criminal probe into an AI company’s potential liability for a violent crime. Florida’s Office of Statewide Prosecution has issued subpoenas to OpenAI demanding records of its policies and training materials regarding threats of harm, as well as protocols for reporting “possible past, present, or future crime.”
OpenAI has firmly denied any responsibility in the shooting. Company spokeswoman Kate Waters called the incident a tragedy but maintained that ChatGPT played no culpable role.
“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” Waters stated in an email response. She added that OpenAI proactively shared information with law enforcement and continues to cooperate with investigators.
The investigation comes amid growing national concern about the potential misuse of increasingly sophisticated AI tools. While AI companies typically include guardrails in their systems designed to prevent assistance with illegal activities, critics have demonstrated various ways to circumvent these protections.
Phoenix Ikner faces two counts of first-degree murder and multiple counts of attempted first-degree murder for the attack that terrorized Florida’s capital city campus. Authorities say Ikner, who is the stepson of a local sheriff’s deputy, used his stepmother’s former service weapon to carry out the shooting. Prosecutors have announced their intention to seek the death penalty in the case.
The investigation raises complex legal questions about technological accountability. Legal experts note that proving criminal liability for an AI system would be unprecedented territory in American jurisprudence, as existing laws weren’t written with artificial intelligence in mind.
Tech industry analysts suggest the probe could have far-reaching implications for AI companies and their development practices. If successful, it could establish new precedents for holding technology companies responsible for how their products are used, potentially reshaping how AI systems are designed, trained, and monitored.
The political context of the investigation has also drawn attention. Uthmeier, a Republican appointed by Florida Governor Ron DeSantis, is currently running for election to the attorney general position in November. He took over the role after DeSantis appointed then-Attorney General Ashley Moody to fill Marco Rubio’s U.S. Senate seat when Rubio joined President Donald Trump’s administration as Secretary of State.
The investigation comes as states increasingly seek to establish their own regulatory frameworks around artificial intelligence in the absence of comprehensive federal legislation. Florida has positioned itself at the forefront of efforts to address potential harms from advanced technologies.
As the investigation proceeds, it will likely examine the specific nature of the interactions between Ikner and ChatGPT, what information was shared, and whether the AI system should have recognized warning signs or dangerous intent.
Legal experts suggest the case highlights the growing tension between technological innovation and public safety, as well as the challenge of determining where responsibility lies when increasingly autonomous systems are involved in real-world harm.