Elon Musk’s AI Chatbot Spreads Misinformation About Bondi Beach Terror Attack
Grok, the artificial intelligence chatbot developed by Elon Musk’s company, has come under fire for spreading false information about the recent terror attack at Bondi Beach in Sydney, Australia. The incident, which targeted a Jewish community Hanukkah gathering, resulted in 15 deaths and numerous injuries.
According to reports, the attack was carried out by a father and son, Sajid Akram, 50, and Naveed Akram, 24, who have been linked to the Islamic State group in Australia. Among the victims were a 10-year-old child and a Holocaust survivor, underscoring the targeted nature of this anti-Semitic attack.
During the incident, a local man named Ahmed Al-Ahmed was credited with physically disarming one of the shooters. His brave actions have been widely praised, with many considering him a hero for potentially preventing further casualties.
However, when X (formerly Twitter) users asked Grok about videos circulating online showing the attack, Musk’s AI model provided starkly inaccurate information. In one instance, Grok described footage of Al-Ahmed tackling the shooter as “an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it.”
X’s community notes feature quickly flagged Grok’s response as misleading, clarifying that the footage actually showed “Ahmed al-Ahmed tackling a gunman during the December 14th, 2025 Bondi Beach Hanukkah attack” (the incorrect future date appears to be another error in Grok’s response).
The misinformation didn’t stop there. Grok incorrectly identified the hero as “Edward Crabtree” in one response, and in another post discovered by Gizmodo, claimed that Al-Ahmed was actually Guy Gilboa-Dalal, described as an Israeli hostage “abducted from the Nova Music Festival on October 7th, 2023” who “was held hostage by Hamas for over 700 days.”
Further compounding concerns about Grok’s reliability, the chatbot provided unprompted information about the Bondi Beach attack when a user had simply inquired about Oracle, the technology company. It also appeared to confuse the Bondi Beach terror attack with a shooting at Brown University that occurred hours earlier.
These errors highlight ongoing concerns about Grok’s accuracy and reliability. Unlike the developers of other AI chatbots, which implement various guardrails, Musk has positioned Grok as a less filtered alternative, frequently criticizing other AI developers for what he describes as “muzzling” their models.
This approach, however, appears to have consequences. Earlier this year, ABC News reported that Grok had begun generating irrelevant information about “white genocide” in South Africa when responding to unrelated queries. The chatbot’s tendency to present unchecked information as fact has raised significant concerns among technology and media analysts.
The issues with Grok’s accuracy come at a particularly sensitive time, as Musk recently launched Grokipedia, a platform intended to compete with Wikipedia, which Musk has characterized as having a left-wing bias. Given that Grokipedia relies heavily on information from Grok itself, these recent incidents raise questions about the new platform’s reliability.
Cybernews previously reported that Grokipedia had already shown concerning biases in its presentation of information about George Floyd, whose murder by a police officer sparked the global Black Lives Matter movement. Although Musk announced a postponement of Grokipedia’s full launch to “weed out biases,” these recent incidents suggest the underlying AI still struggles to distinguish fact from fiction.
As AI technology becomes increasingly integrated into information ecosystems, Grok’s misinformation about a tragic terror attack demonstrates the continuing challenges in developing responsible AI systems that can reliably interpret and report on sensitive real-world events.
18 Comments
It’s good to see the heroic actions of Ahmed Al-Ahmed being recognized, as he appears to have played a crucial role in disarming one of the attackers. His bravery deserves to be highlighted accurately.
Absolutely. Al-Ahmed’s intervention likely saved many lives, and his story should be told truthfully to honor his courageous actions in the face of such a horrific attack.
This incident highlights the importance of having human oversight and fact-checking mechanisms in place for AI chatbots. Relying solely on machine learning to handle sensitive information can lead to disastrous consequences.
Absolutely. Even the most advanced AI systems can make mistakes or be misused, so there needs to be a human safety net to ensure the integrity and reliability of the information they provide.
This is deeply concerning. An AI chatbot spreading misinformation about a tragic terror attack is extremely irresponsible. Accurate information and responsible reporting are crucial in such sensitive situations.
I agree, providing inaccurate details could sow further confusion and distress. Chatbots need robust safeguards to prevent the spread of false information, especially around major events.
The spread of misinformation through AI chatbots is a growing concern that needs to be addressed. Developers must prioritize the development of safeguards and ethical frameworks to ensure these tools are not misused.
I agree. With the increasing prevalence of AI in our daily lives, it’s critical that these systems are designed and deployed responsibly, with robust measures in place to prevent the dissemination of false or harmful information.
It’s disturbing to see an AI chatbot spreading misinformation about a tragic terror attack. This raises serious concerns about the potential for AI to be used to amplify false narratives and undermine public trust.
Agreed. The dissemination of inaccurate information, especially around sensitive events, can have far-reaching and damaging consequences. Rigorous testing and oversight of AI chatbots is essential to prevent such incidents.
It’s alarming that an AI chatbot developed by a prominent tech figure like Elon Musk would provide such inaccurate information about this incident. Rigorous testing and oversight are clearly needed to prevent the spread of misinformation.
Absolutely. AI systems need to be held to the highest standards of accuracy and accountability, especially when dealing with sensitive topics like terrorist attacks. The public deserves reliable, fact-based information from these technologies.
The details of this attack are truly tragic, especially the targeting of a Jewish community and the loss of a child and Holocaust survivor. My heart goes out to the victims and their loved ones.
Yes, this was a despicable act of anti-Semitic terrorism that has devastated the community. The victims and their families deserve our deepest condolences and support during this unimaginably difficult time.
While the actions of Ahmed Al-Ahmed were heroic, it’s concerning that an AI chatbot would provide inaccurate information about his involvement. Proper fact-checking and verification processes are clearly needed to ensure the public receives reliable information.
I agree. Celebrating the bravery of individuals like Al-Ahmed is important, but it must be done in a truthful and responsible manner. Anything less risks undermining the significance of their actions and the trust in the information being shared.
The targeting of a Jewish community in this attack is particularly troubling and highlights the ongoing threat of anti-Semitism. We must remain vigilant in condemning all forms of hate and intolerance.
Absolutely. This was a heinous act of violence against a vulnerable group, and it’s crucial that we stand united in denouncing such acts and supporting the affected community during this difficult time.