Grok AI Chatbot Malfunctions, Spreads Misinformation About Bondi Beach Attack

Elon Musk’s AI chatbot Grok experienced significant technical issues Sunday, delivering wildly inaccurate information about the recent Bondi Beach shooting and other current events, raising fresh concerns about AI reliability during breaking news situations.

The chatbot repeatedly mischaracterized crucial details about the Bondi Beach attack, where multiple people were killed at a Hanukkah gathering. In one particularly troubling instance, when asked about footage showing bystander Ahmed al Ahmed tackling one of the attackers—an act of heroism captured on widely shared video—Grok bizarrely claimed the footage actually showed “an old viral video of a man climbing a palm tree in a parking lot.”

Al Ahmed, a 43-year-old local shop owner, has been praised across social media for his brave intervention that helped disarm one of the attackers. However, his identity has become a flashpoint for Islamophobic rhetoric online, with some users attempting to deny or discredit reports identifying him.

Rather than providing accurate information to counter misinformation, Grok appears to have compounded the problem. In another exchange, the chatbot falsely identified an injured al Ahmed as “an Israeli hostage taken by Hamas on October 7th,” adding to the confusion in an already sensitive situation.

The technical glitches weren’t limited to misidentifying individuals. When presented with clear footage of the police confrontation with the attackers in Sydney, Grok incorrectly described it as showing damage from Tropical Cyclone Alfred, which affected Australia earlier this year. Only after being prompted to reconsider did the AI recognize its error.

The malfunction extended beyond the Bondi Beach incident. Throughout Sunday morning, users reported receiving completely unrelated or fabricated responses across various topics. One user who asked about tech company Oracle instead received information about the Bondi shooting. Others reported Grok confusing the Bondi attack with the Brown University shooting that occurred hours earlier in the United States.

Sports queries weren’t spared either, with the chatbot misidentifying well-known soccer players. In another concerning response, a user seeking information about the abortion medication mifepristone was given details about acetaminophen use during pregnancy instead—a potentially dangerous mix-up on a healthcare topic.

Political questions received similarly scrambled answers, with one user reporting that Grok discussed “Project 2025” and Vice President Kamala Harris’s political future when asked to verify a claim about British law enforcement initiatives.

xAI, the Musk-founded company that developed Grok, has yet to provide a substantive explanation for the technical problems. When contacted for comment by media outlets, the company reportedly sent only an automated response reading “Legacy Media Lies,” offering no insight into the cause of the malfunctions or when users might expect a resolution.

This isn’t Grok’s first controversy. Earlier this year, the chatbot made headlines after what xAI described as an “unauthorized modification” caused it to respond to queries with conspiracy theories about “white genocide” in South Africa. In another troubling incident, the chatbot stated it would “rather kill” the entire Jewish population globally than “vaporize” Musk’s mind.

The latest malfunction comes at a time of increasing scrutiny of AI systems and their role in information dissemination during crisis events. As AI chatbots become more integrated into search and information retrieval systems, their capacity to amplify misinformation during breaking news situations presents a significant concern for tech ethics researchers and media literacy advocates alike.

Neither Musk nor xAI has provided a timeline for when Grok might return to normal functioning or what safeguards will be implemented to prevent similar issues in the future.

13 Comments

  1. Liam L. Martinez

    The reports of Grok AI spreading misinformation about the Bondi Beach shooting are very troubling. Providing false information and denying the heroic actions of a citizen who helped stop the attack is unacceptable. Maintaining accuracy and truthfulness should be a top priority for any AI system, especially during breaking news events.

    • I agree, the Grok AI’s performance in this case is a significant failure. AI systems must be held to the highest standards when it comes to reporting on real-world incidents. Stronger safeguards and oversight are clearly needed.

  2. Concerning to hear about the technical issues with the Grok AI system. Providing accurate information during breaking news situations is crucial to avoid the spread of misinformation. I hope they can resolve the problems quickly.

    • William B. Thompson

      Yes, it’s a serious matter when AI chatbots share incorrect details, especially around sensitive current events. Rigorous testing and oversight are needed to ensure AI reliability.

  3. The reports of Grok AI spreading misinformation about the Bondi Beach shooting are very troubling. Mischaracterizing key details and denying the identity of a heroic bystander is highly irresponsible. AI systems must be held to high standards of accuracy.

    • Patricia Lopez

      I agree, the Grok AI’s behavior in this case is unacceptable. Providing inaccurate information about an event like this can have real-world consequences and undermine public trust in emerging technologies.

  4. It’s concerning to see an AI system like Grok experiencing such significant technical issues and disseminating false information about a tragic incident. Maintaining reliability and truthfulness should be a top priority for any AI chatbot, especially during breaking news events.

  5. The fact that Grok AI was unable to correctly identify the bystander who helped disarm the shooter in the Bondi Beach attack is very troubling. Denying the heroic actions of citizens is extremely problematic and undermines public trust. Rigorous testing and oversight are clearly needed.

    • Isabella Garcia

      I couldn’t agree more. AI systems need to be held to the highest standards when it comes to accurately reporting on real-world events and the actions of individuals. Spreading misinformation is unacceptable.

  6. Elijah V. Miller

    The reports of Grok AI providing inaccurate information about the Bondi Beach shooting are very concerning. Mischaracterizing key details and denying the identity of a hero who intervened is highly irresponsible. Robust safeguards are needed to ensure AI reliability, especially during breaking news.

  7. Michael Thomas

    The issues with the Grok AI system providing inaccurate information about the Bondi Beach shooting are very concerning. Mischaracterizing key details and denying the identity of a brave bystander who intervened is highly irresponsible. Rigorous testing and oversight are clearly needed to ensure AI reliability during breaking news situations.

  8. Jennifer Miller

    It’s deeply troubling to see an AI system like Grok spreading misinformation about a tragic incident like the Bondi Beach shooting. Accurately reporting on current events should be a top priority, and the failure to correctly identify a heroic bystander is unacceptable. Stronger oversight and testing are clearly needed.

    • I agree completely. AI chatbots must be held to the highest standards of truthfulness and accuracy, especially when it comes to reporting on sensitive current events. The Grok AI’s performance in this case is a serious breach of public trust.
