In the aftermath of Sunday’s tragic mass shooting at Bondi Beach, Elon Musk’s AI chatbot Grok has come under fire for spreading false information about the incident that left 15 dead and 42 hospitalized.

The shooting occurred during a Hanukkah celebration, where Ahmed el Ahmed, a 43-year-old fruit shop owner and father of two, heroically intervened by wrestling a weapon from one of the alleged gunmen. New South Wales Premier Chris Minns praised Ahmed as a “genuine hero,” stating, “I’ve got no doubt that there are many, many people alive tonight as a result of his bravery.”

However, when asked about Ahmed’s heroic actions, xAI’s Grok repeatedly fabricated information, falsely claiming that a non-existent “Edward Crabtree,” described as a “43-year-old Sydney IT professional,” disarmed the gunman. The chatbot continued to spread this misinformation despite user corrections.

In addition to inventing this fictional hero, Grok misidentified the real hero, Ahmed, as a hostage of Hamas and, in other instances, provided completely irrelevant information about Palestinians and the Israeli army when questioned about the incident. Sources confirm that Ahmed is currently hospitalized with two gunshot wounds.

When Information Age sought clarification regarding Grok’s factual hallucinations, xAI responded with what appeared to be an automated message: “Legacy Media Lies.”

An investigation into the source of the misinformation revealed a suspicious website called “The Daily,” which appears to have been registered on the same day as the Bondi Beach tragedy. This seemingly AI-generated site published an article falsely attributing Ahmed’s heroic actions to the fictional “Edward Crabtree.” The website contained only one other accessible article, an apparent fabrication about a climate summit; all other links led to dead ends or error pages.

The domain registrant details for “The Daily” have been obscured by an Iceland-based privacy service called Withheld For Privacy, which did not respond to inquiries from Information Age.

Grok’s misinformation appears to have begun after the chatbot was manipulated by users who took issue with viral posts celebrating Ahmed’s bravery as representative of Islam. One user asked Grok for the “real name of the hero at Bondi Beach,” and while the chatbot initially correctly identified Ahmed, it was subsequently tricked when another user fed it content from the fraudulent article.

The chatbot then began propagating the false narrative across multiple conversations, even mentioning “Edward Crabtree” to a user asking completely unrelated questions about creating a children’s book.

Dr. Hammond Pearce, senior lecturer at UNSW’s School of Computer Science and Engineering, noted that Grok is “infamously known for producing misinformation” and pointed out that much of its design remains unknown to the public. He connected the issue to the chatbot’s integration with X, a platform that has “seen an increase in hate speech and conspiracy theories since its content moderation policies changed.”

Pearce explained that Grok appears to be programmed to steer conversations toward “politically incorrect claims and skepticism of mainstream media sources,” referencing an incident earlier this year in which the chatbot went on an anti-Semitic tirade.

While hallucinations and factual inaccuracies are risks across all large language model platforms, Pearce emphasized that “Grok is more [prone to them] than others” and “has much more limited content moderation than other AI.”

The incident raises serious concerns about the role of AI in spreading misinformation during crisis events and about the potential need for stronger safeguards. As Pearce noted, “There’s no known technology that currently guarantees removing misinformation or hallucination,” leaving the public vulnerable to AI-generated falsehoods, particularly during sensitive breaking news situations.


