Musk’s Grok AI Spreads Misinformation About Bondi Mall Attack
Elon Musk’s AI chatbot Grok generated and amplified false information about the recent Bondi Junction shopping mall attack, including fabricating the identity of a hero who intervened during the incident. The chatbot erroneously identified the hero as “Edward Crabtree,” a fictional “43-year-old IT professional” – a claim that spread rapidly across social media, receiving over 122 million views on X.
The misinformation didn’t stop there. Grok also mislabeled footage from the attack as content from Tropical Cyclone Alfred and the October 7 Hamas attacks. In one particularly troubling instance, when users asked about unrelated Oracle bond ratings, the AI responded with detailed descriptions of Bondi shooting casualties instead.
Watchdog organizations including NewsGuard documented how Grok validated false “crisis actor” conspiracy theories about survivors. The AI labeled authentic images of injured victims as “staged” or “fake,” potentially fueling dangerous narratives in the aftermath of a tragic event.
This incident represents the latest in a series of controversies involving Grok. Previously, the chatbot has drawn criticism for responses that praised Adolf Hitler and misreported political events. These recurring issues stand in stark contrast to Musk’s positioning of Grok as a “maximally truth-seeking AI” that offers an alternative to more restricted chatbots.
“Instead of declining to answer, models now pull from whatever information is available online at the given moment,” explained NewsGuard researcher McKenzie Sadeghi in a statement to Mashable. “As a result, chatbots repeat and validate false claims during high-risk, fast-moving events.”
The problem reflects a broader challenge across the AI industry. During breaking news events, chatbots frequently pull information from unverified social media posts, low-engagement websites, and AI-generated content farms. This approach prioritizes providing immediate answers over ensuring accuracy, a particular concern as social media platforms have scaled back human fact-checking operations.
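To make that tradeoff concrete, here is a minimal sketch in Python of what filtering retrieved sources during breaking news could look like. Nothing here reflects Grok's actual pipeline; the names (`SearchResult`, `SOURCE_TRUST`, `filter_for_breaking_news`) and the trust values are hypothetical illustrations.

```python
# Hypothetical sketch of source filtering for retrieval-grounded answers
# during breaking news. Names and values are illustrative assumptions,
# not any vendor's real implementation.

from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    domain: str
    text: str

# Assumed reliability scores for source domains (illustrative values).
SOURCE_TRUST = {
    "reuters.com": 0.95,
    "apnews.com": 0.95,
    "randomblog.example": 0.20,
    "x.com": 0.30,  # unverified social media posts
}

def filter_for_breaking_news(results: list[SearchResult],
                             min_trust: float = 0.8,
                             min_corroboration: int = 2) -> list[SearchResult]:
    """Keep only results from trusted domains, and only when at least
    `min_corroboration` independent domains carry the story."""
    trusted = [r for r in results
               if SOURCE_TRUST.get(r.domain, 0.0) >= min_trust]
    independent_domains = {r.domain for r in trusted}
    # With too little corroboration, return nothing so the model can
    # decline or hedge instead of answering confidently.
    return trusted if len(independent_domains) >= min_corroboration else []
```

The point of the sketch is the empty-list branch: when corroboration is thin, the safer behavior is to give the model nothing to cite and let it decline, which is the opposite of the answer-at-any-cost pattern Sadeghi describes.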
Sources indicate that xAI, Musk’s AI company, scrambled to address the Bondi errors, rushing out patches intended to improve accuracy. When contacted by AFP about the misinformation, the company reportedly responded with an automated message stating “Legacy Media Lies.”
The incident highlights how AI systems can struggle with real-time information verification, especially during crises. Technical analysis suggests Grok’s real-time search capabilities drew from preliminary, conflicting reports during the evolving situation before official details had emerged.
Major tech companies are pursuing solutions through news licensing deals to enhance AI reliability. Meta recently signed commercial agreements with CNN, Fox News, and Le Monde, building on existing partnerships with Reuters. Google is testing AI-powered article summaries with select publishers through its News platform. These arrangements aim to provide chatbots with verified content sources rather than relying solely on unverified web scraping.
The integration of Grok directly into X amplifies the potential impact of such errors, as responses can reach millions of users instantly. The platform promotes Grok as a premium feature, creating expectations of reliability that recent failures have significantly undermined.
AI ethics experts have long cautioned against deploying large language models in scenarios requiring factual precision. Industry discussions now focus on implementing “uncertainty indicators” in AI responses that would alert users when information might be unreliable, particularly during developing news situations.
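As a rough illustration of what such an indicator might look like, the sketch below attaches a machine-readable reliability label to a response based on how recent the event is and how many independent sources corroborate the claim. The thresholds, labels, and function names are assumptions made for the example, not an industry standard.

```python
# Hypothetical "uncertainty indicator" for chatbot responses during
# developing news. Thresholds and labels are illustrative assumptions.

def uncertainty_label(num_independent_sources: int,
                      minutes_since_event: int) -> str:
    """Classify the reliability of a claim about a developing story."""
    if minutes_since_event < 120 and num_independent_sources < 3:
        return "DEVELOPING: unverified, details may change"
    if num_independent_sources < 2:
        return "LOW CONFIDENCE: single-source claim"
    return "CORROBORATED: multiple independent sources"

def render_response(answer: str, sources: list[str],
                    minutes_since_event: int) -> str:
    """Prefix the answer with its uncertainty label."""
    label = uncertainty_label(len(set(sources)), minutes_since_event)
    return f"[{label}]\n{answer}"

# Example: an answer drafted 40 minutes into a breaking event and
# supported only by a single social media post gets flagged as developing.
print(render_response("A suspect has been named.",
                      ["x.com"], minutes_since_event=40))
```

Whether users would heed such labels is an open question, but a machine-readable flag would at least let platforms suppress or contextualize low-confidence answers during fast-moving events.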
Australian authorities have emphasized the importance of combating online misinformation following the attack. Those concerns extend to tools like Grok, which can inadvertently lend authority to false claims by delivering incorrect information with confidence.
As digital assistants become increasingly integrated into daily information consumption, the Bondi misinformation incident serves as a sobering case study for AI accountability in high-stakes scenarios. Stakeholders are calling for standardized testing protocols to ensure systems don’t amplify harm during crises.