AI-Generated Images and Misinformation Spread After Bondi Beach Attack
Falsehoods and conspiracy theories are proliferating online in the aftermath of the Bondi Beach attack, with AI-generated images being weaponized to distort the narrative surrounding the tragic event.
ABC News Verify has identified three instances in which AI-generated images, and in one case a real image taken out of context, are being used to spread misinformation across social media platforms.
One particularly concerning example involves an AI-generated image purporting to show alleged gunman Naveed Akram sitting at an outdoor café in the Philippines with Captain Chandra Kant Kothari, India's defense attaché to the Philippines. Posts on X (formerly Twitter) have used the fabricated image to suggest collusion between the alleged attacker and Indian officials.
Analysis by ABC News Verify confirmed the image was created with Google's AI tools: it returns a positive result for the company's invisible watermark when run through Google's SynthID detector. Visual examination also reveals telltale signs of AI generation, including physical and logical inconsistencies such as garbled text on a chicken bucket, an indoor desk lamp inexplicably placed on an outdoor table, and a blurred background in which cars are impossibly rendered on top of one another.
The timing of the fake image appears calculated. Earlier this week, ABC News revealed that the alleged gunmen, Sajid and Naveed Akram, had traveled to the Philippines for "military-style training" in the month before the attack. Social media users have seized on that reporting, falsely claiming the image proves Indian authorities were in contact with the alleged attackers before the incident, when in fact the depicted scene never occurred.
In another disturbing example of digital deception, false narratives about the gunman’s identity continue to circulate widely. “Leaked” screenshots of a fake Facebook profile for a “David Cohen”—featuring AI-manipulated photos of Naveed Akram wearing a Jewish yarmulke and attending a bar mitzvah—have garnered over two million views across various platforms.
Despite clear signs of inauthenticity, numerous social media users—many based outside Australia, according to platform transparency tools—have shared these fabricated screenshots as supposed evidence that the alleged gunman is Jewish. This falsehood feeds into antisemitic conspiracy theories that suggest an Israeli connection to the attack without any factual basis.
A pro-Palestine Instagram account, initially identified as the source of some widely shared versions, has since acknowledged the deception, apologizing for sharing what it admitted was an "AI-generated" image.
The online misinformation campaign has also targeted female police officers who responded to the attack. Multiple social media accounts have posted misogynistic attacks directed at these officers, using misleading images to suggest they were hiding from the alleged gunmen rather than taking appropriate cover during the active shooter situation.
Verified video footage of the incident tells a very different story. One recording shows a female officer wearing a cap being directly engaged by the younger alleged gunman while attempting to protect two bystanders behind the cover of a car. This officer appears to be the first to approach the footbridge after Naveed Akram was injured and the attack ended.
Another female officer, filmed approximately 40 meters away during the attack, arrived shortly after. Both officers helped prevent confrontations between bystanders at the crime scene—despite having come under direct fire moments earlier. One secured the scene while the other removed one of the weapons used in the attack.
Police Commissioner Mal Lanyon has condemned the misleading use of images to spread disinformation about the police response. “That type of misinformation, that type of taking situations out of context, is incredibly harmful,” he told radio station 2GB, while commending the actions of the female police officers.
The proliferation of AI-generated imagery presents a growing challenge for authorities and media organizations attempting to combat misinformation during crisis events, highlighting the increasingly sophisticated nature of digital deception in the aftermath of violent incidents.