Misinformation Floods Social Media in Wake of Bondi Beach Attack
Almost immediately after Sunday’s deadly attack at Bondi Beach, a wave of false information began spreading online, fueling racist and antisemitic conspiracy theories about the perpetrators and the nature of the incident.
The attack, which left multiple people dead and others injured, quickly became a breeding ground for both misinformation—false information shared by those who believe it to be true—and disinformation—deliberately fabricated content designed to deceive or sway public opinion.
Brothers Naveed and Sajid Akram were identified as the attackers. Sajid was killed during a police shootout, while Naveed remains under police guard in hospital and is expected to be questioned and charged on Wednesday.
In the chaotic aftermath, various falsehoods circulated across social media platforms and websites, many powered by artificial intelligence tools. These included doctored photos, fabricated news reports, and incorrect claims made by AI chatbots like Elon Musk’s Grok, which later backtracked after being challenged.
One of the most prominent examples of misinformation targeted Ahmed Al-Ahmed, a Syrian-born shop owner who has been hailed as a hero after disarming one of the gunmen. Multiple social media posts falsely identified him as someone named “Edward Crabtree,” linking to a website called The Daily that claimed to have conducted an “exclusive” interview with Crabtree at his hospital bedside.
Australian Associated Press’ (AAP) FactCheck service investigated and found the website had been created on the day of the attack, contained non-functioning links, and published other demonstrably false information.
In another disturbing trend, authentic images of attack survivor Arsen Ostrovsky, who was grazed by a bullet, were manipulated using AI to falsely suggest he was a “crisis actor”—a term conspiracy theorists use to describe individuals allegedly pretending to be victims. The manipulated images showed telltale signs of AI generation, including distorted backgrounds, merged vehicles, and people with missing or deformed hands.
The fabricated image also altered the text and logo on Ostrovsky’s shirt, both of which were clearly visible in legitimate news interviews in which he appeared bandaged and bloodied.
Political exploitation also emerged as foreign-operated social media accounts, some based in Vietnam, circulated fake quotes attributed to One Nation leader Pauline Hanson. These fabricated statements falsely claimed Hanson had called Prime Minister Anthony Albanese a “weak, spineless coward” and suggested Albanese had referred to her as a “tiny piece of garbage” during a closed-door Labor Party meeting. No evidence exists to support either claim.
Some social media users went further, misidentifying the alleged gunmen or making false claims about their identities and backgrounds. One Facebook post wrongly included an image of a man in a Pakistan cricket shirt alongside Naveed Akram, falsely claiming both carried out the attack. The innocent man later posted on Facebook confirming he had no connection to the incident.
Other posts falsely claimed Naveed Akram was an Israeli national named “David Cohen,” presenting a fake Facebook profile screenshot with obvious AI-generated elements and several misspellings. Australian authorities have confirmed Naveed Akram is an Australian-born citizen.
In perhaps the most bizarre conspiracy theory, some Facebook users shared manipulated screenshots purporting to show people in Israel and India searching for “Naveed Akram” on Google hours or days before the attack, suggesting a “false flag” operation. AAP FactCheck debunked this claim, confirming no such search pattern existed in Google Trends data.
To combat the spread of such falsehoods, the eSafety Commissioner recommends obtaining information from trustworthy sources like established national media outlets and government websites. When evaluating content, consider whether claims are supported by evidence, if quotes make sense in context, and whether the information appears designed to promote a specific political agenda.
For images and videos, watch for visual inconsistencies like distorted hands and limbs or garbled text—hallmarks of AI generation. Reverse image searches using tools like Google Images or TinEye can help determine if photos have appeared elsewhere with different contexts.
As the investigation into the Bondi Beach attack continues, authorities urge the public to rely on official information channels and exercise caution when consuming and sharing content online.
14 Comments
Interesting that misinformation can spread so quickly online, especially after major incidents. It’s important to verify facts and avoid amplifying false narratives, even if they seem convincing at first glance.
It’s disturbing to see how quickly misinformation can travel online. We must all be vigilant consumers of news and information, and not blindly accept sensational claims without verifying the facts.
The role of AI chatbots in spreading disinformation is particularly worrying. Their ability to generate convincing but false narratives at scale is a significant challenge to address.
The article highlights the complex challenges posed by the rapid spread of misinformation online. Addressing this issue will require a multi-faceted approach involving media literacy, technological solutions, and responsible journalism.
Tragic incidents like the Bondi Beach attack can bring out the worst in people, with some trying to exploit the situation for their own agendas. Responsible reporting and public discourse are essential to maintain trust.
This is a timely reminder of the need for robust fact-checking and accountability measures to counter the proliferation of misinformation, especially around tragic incidents.
The use of AI chatbots to spread disinformation is concerning. While the technology can be powerful, it must be applied responsibly to avoid manipulating public opinion.
Absolutely. AI should be used to help people, not mislead them. Developers need to prioritize accuracy and transparency.
Kudos to the Disinformation Commission for their efforts to expose and debunk the false narratives that emerged in the wake of the Bondi Beach attack. Their work is crucial for maintaining public trust.
Absolutely. Fact-checking organizations play a vital role in combating the spread of misinformation and disinformation, which can have far-reaching impacts on society.
The article underscores the importance of verifying information from multiple reliable sources before sharing or amplifying it. Jumping to conclusions based on unverified claims can have serious consequences.
The article highlights the importance of media literacy and critical thinking when navigating the digital landscape. Recognizing the signs of misinformation is a crucial skill in the modern era.
Agreed. Being able to distinguish fact from fiction is paramount, especially when it comes to sensitive or high-stakes events.
Doctored photos and fabricated news reports are classic tactics of disinformation campaigns. Fact-checking is critical to counter the spread of these false narratives.