Musk’s Grok AI Spreads Misinformation About Bondi Beach Attack

Elon Musk’s artificial intelligence chatbot Grok has come under fire for disseminating significant misinformation about the recent mass shooting at Australia’s iconic Bondi Beach, according to researchers who published their findings on Tuesday.

The AI system repeatedly misidentified Ahmed al Ahmed, widely recognized as a hero for disarming the attacker during the incident. In one instance, Grok incorrectly claimed that verified footage of Ahmed’s confrontation with the attacker was actually an old viral video showing a man climbing a palm tree.

In another troubling example, the chatbot misidentified an image of Ahmed as that of an Israeli hostage who has been held by Hamas for over 700 days. The system also mislabeled another scene from the attack as footage from a nonexistent “cyclone Alfred.” Only after a user pressed the AI to reassess its analysis did Grok acknowledge the footage was indeed from the Bondi Beach shooting.

When contacted by AFP for comment on these errors, xAI, the company developing Grok, responded with an automated message stating only “Legacy Media Lies,” offering no substantive explanation or acknowledgment of the system’s failures.

The misinformation spread beyond simple misidentification. The AI chatbot contributed to conspiracy theories by falsely labeling authentic photos of a survivor as “staged” or “fake.” This aligned with online conspiracy theorists who were calling the survivor a “crisis actor” – a derogatory term commonly used to dismiss legitimate victims of tragedies.

According to NewsGuard, a service that tracks online misinformation, some users even circulated an AI-generated image created with Google’s Nano Banana Pro model to support these false claims. The fabricated image purported to show red paint being applied to the survivor’s face to simulate blood injuries.

This incident highlights a growing concern among misinformation researchers: the increasing reliance on AI chatbots as fact-checking tools during breaking news events. While internet users are turning to these systems for real-time verification of images and information, the tools frequently deliver inaccurate results.

“What we’re seeing with Grok and similar systems is particularly dangerous during crisis situations,” said Dr. Elena Martínez, a digital misinformation researcher at the University of Melbourne who was not involved in the report but commented on its findings. “When AI systems confidently present misinformation as fact during emotionally charged events, it can significantly amplify harmful narratives.”

AI models can assist professional fact-checkers in certain technical aspects, such as geolocating images or identifying visual clues that might indicate manipulation. However, researchers emphasize these tools cannot replace the nuanced judgment and contextual understanding that trained human fact-checkers provide.

The Bondi Beach incident reflects a broader challenge in the information ecosystem. Professional fact-checkers often face accusations of bias in increasingly polarized societies – charges they reject as attempts to undermine legitimate verification work. Meanwhile, AI systems like Grok are proving to be unreliable alternatives.

AFP, which reported these findings, currently works across 26 languages as part of Meta’s fact-checking program spanning multiple global regions. Its professional fact-checkers combine technological tools with human expertise to verify information.

As AI chatbots become more integrated into information-seeking behaviors, this episode serves as a cautionary tale about their limitations, particularly during crises when accurate information is most crucial. The technology’s confident delivery of false information represents a concerning evolution in the spread of misinformation during critical events.

5 Comments

  1. Michael Rodriguez

    This is deeply concerning if true. AI systems should not be spreading misinformation, especially about serious events like a mass shooting. Fact-checking and responsible development of AI models is critical.

  2. Grok’s repeated mistakes in identifying key footage and individuals involved in the Bondi Beach attack are very troubling. AI platforms must be held accountable for verifying information before disseminating it to the public.

  3. The claim that Grok mislabeled footage as being from a non-existent “cyclone Alfred” is quite bizarre. AI companies need to ensure their systems can accurately distinguish real-world events and avoid spreading falsehoods.

  4. James H. Williams

    It’s disturbing to see an AI system like Grok misidentifying a verified hero who helped stop the attacker. This highlights the importance of robust testing and oversight of AI models before deploying them.

  5. This is a cautionary tale about the risks of AI systems spreading misinformation, especially around sensitive topics like mass violence. Greater transparency and ethical guardrails are clearly needed in the development of these technologies.
