In the chaotic aftermath of a deadly shooting at Sydney’s iconic Bondi Beach on Sunday, Elon Musk’s AI chatbot Grok emerged as a source of misinformation rather than clarity, repeatedly posting false claims about an attack that had rapidly become a global news story.
As users turned to the xAI-developed chatbot for information about the unfolding tragedy, Grok delivered a stream of incorrect and sometimes bizarre responses that compounded confusion during the crisis. The errors ranged from misidentifying key individuals to questioning the authenticity of verified footage.
One of the most serious mistakes involved Grok’s treatment of Ahmed al Ahmed, the 43-year-old bystander who heroically disarmed one of the attackers. Although authorities and media outlets had correctly identified al Ahmed, Grok repeatedly disputed his identity, at one point claiming that the man in widely circulated photos was an Israeli hostage.
In another response, the chatbot fabricated an entirely different narrative, asserting that an “IT professional and senior solutions architect” named Edward Crabtree had actually disarmed the gunman—a claim with no basis in reality.
When directly questioned about verified video showing al Ahmed tackling the shooter, Grok’s response veered into the absurd. “This appears to be an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it, resulting in a branch falling and damaging a parked car,” the chatbot responded, completely mischaracterizing footage that had been authenticated by multiple news organizations and eyewitnesses.
The chatbot also misidentified clearly labeled video of the police response in Sydney as footage from Tropical Cyclone Alfred, a natural disaster that affected Australia earlier this year. Only after users challenged this information did Grok backtrack and correct its claim.
Grok’s errors went beyond simple misidentification. When discussing the incident, the system inexplicably injected unrelated content about Middle East military actions into responses about the Bondi Beach attack, further muddying the waters for users seeking reliable information.
As criticism mounted, Grok began correcting some of its mistakes. Posts that wrongly linked shooting footage to Cyclone Alfred were updated “upon reevaluation,” and the chatbot eventually acknowledged al Ahmed’s correct identity. In its correction, Grok attributed the misinformation to “viral posts that mistakenly identified him as Edward Crabtree, possibly due to a reporting error or a joke referencing a fictional character”—linking to what appeared to be an AI-generated article on a questionable website.
The timing made these errors particularly damaging: they occurred during the critical early hours after the attack, when verified information was at a premium and public anxiety was high. The incident demonstrates how AI systems can amplify misinformation during breaking news events, precisely when accuracy is most crucial.
The Bondi Beach errors weren’t isolated incidents. On the same morning, users reported Grok confusing entirely unrelated topics—providing information about the Bondi shooting when asked about tech company Oracle, and mixing details from the Bondi attack with information about a separate shooting at Brown University that had occurred hours earlier.
These failures are part of a pattern of instability that has plagued Grok in recent months. The chatbot has previously misidentified famous soccer players and veered off-topic when discussing current events, raising questions about its reliability as an information source.
xAI has not yet provided an explanation for these latest errors. What remains clear is that the incident underscores the significant risks of relying on current AI systems for accurate information during unfolding crises. When human lives are at stake and the information landscape is already confused, AI systems like Grok can potentially exacerbate rather than mitigate public uncertainty.
As AI chatbots become more deeply integrated into social media platforms and information ecosystems, their ability to spread misinformation during critical moments presents a growing challenge for developers, platforms, and the public alike.
10 Comments
It’s troubling to see an AI chatbot like Grok providing inaccurate details about a tragic event. Verifying information before disseminating it should be a top priority, especially for high-profile AI systems. Mistakes like this erode public trust in emerging technologies.
Well said. AI developers need to focus on transparency, accountability, and truth-telling if they want the public to embrace these powerful new tools.
While AI has tremendous potential, incidents like this highlight the need for rigorous testing and validation, especially for high-stakes applications. I hope the Grok team takes this as a learning opportunity to improve their model’s accuracy and reliability going forward.
Well said. Maintaining public trust in AI should be a top priority for developers. Missteps like this can undermine confidence in the technology.
This is a disappointing yet important lesson about the challenges of AI-powered chatbots, especially when dealing with fast-moving, sensitive news events. Clearly, more work is needed to ensure these systems can provide reliable, fact-based information to the public.
Agreed. Effective crisis response requires a human touch that current AI technologies struggle to match. Developers have more work to do to make chatbots truly trustworthy.
I’m curious to know more about what went wrong with Grok’s handling of this incident. Was it a flaw in the training data, an issue with the underlying language model, or something else? Responsible AI development requires rigorous testing and validation.
Good question. Understanding the root causes of these errors is crucial so the developers can implement safeguards to prevent similar problems in the future.
This is extremely concerning. AI systems should not be spreading misinformation, especially during a crisis. Chatbots need to be carefully designed and tested to ensure accuracy. I hope the developers take steps to fix these issues and prevent future mistakes.
Agreed. Responsible AI development is crucial, especially for crisis communications. Spreading false information can have serious consequences.