Social Media Platform X Fuels Misinformation Following Bondi Beach Shooting, Expert Says

In the aftermath of the tragic Bondi Beach shooting that claimed 15 innocent lives, the social media platform X has been criticized for incentivizing the spread of inflammatory misinformation, according to digital media experts.

Hours after the attack on a Jewish celebration, social media channels were flooded with false and misleading claims that caused real-world harm to innocent individuals.

Sydney resident Naveed Akram, who shares a name with one of the shooters but is completely unrelated to the incident, reported receiving death threats after being wrongly identified on X as one of the gunmen. Some posts included personal details like his university affiliation.

“It was a real nightmare for me, seeing photos of my face shared on social media, wrongfully being called the shooter,” Akram, a Pakistani national, told the ABC. “Friends came with me to the police station to report it, but the police said they couldn’t do anything and told me just to deactivate my accounts.”

Despite his efforts to flag and remove the false content, many posts misidentifying him remain online across various platforms. “I am still shaking. This has put me at risk, and also my family back home in Pakistan,” Akram said. “My mum broke down and feels in danger.”

The misinformation wasn’t limited to misidentifying suspects. Social media users shared videos of fireworks, falsely characterizing them as celebrations by “Arabs” or “Islamists” in western Sydney. Local community organizations clarified that the display was actually part of Christmas celebrations.

Other false narratives included claims that the shooters were former Israel Defense Forces members, that they were Pakistani nationals, that additional shootings were occurring in other eastern suburbs, and conspiracy theories labeling the tragedy as a “false flag” operation. In reality, one shooter was originally from India while the other was born in Australia.

Financial Incentives Drive Misinformation Ecosystem

Dr. Timothy Graham, an associate professor in Digital Media at Queensland University of Technology and a disinformation expert, pointed to X’s monetization program as a key driver of the misinformation.

“The biggest takeaway for me really is that the platforms, X in particular, really incentivise this through their design features… unfortunately this both propels and rewards [misleading content],” Dr. Graham said.

X’s revenue-sharing program pays users based on engagement with their posts, including likes and replies. While the platform’s terms state that “content relating to tragedy, conflict, mass violence, or exploitation of controversial political or social issues” may face restricted monetization, enforcement appears inconsistent.

“People are incentivised to share content that they know is going to get a lot of clicks irrespective of its quality, irrespective of whether it’s true or factual simply because they can make money out of it, and this is obviously a really big issue,” Dr. Graham explained. “There’s basically an economy around disinformation now.”
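X does not publish its exact payout formula, so the toy sketch below only illustrates the incentive structure Dr. Graham describes; the rates, engagement weights, and the flagged_sensitive switch are all invented for illustration. The point is that revenue scales with engagement, and the only brake is a sensitivity flag that, under the platform’s stated terms, should demonetize tragedy-related content but is inconsistently applied.

```python
# Toy sketch of an engagement-based revenue share, for illustration only.
# The rates, weights, and field names are assumptions, not X's actual formula.

def estimate_payout(impressions: int, likes: int, replies: int,
                    flagged_sensitive: bool,
                    rate_per_engagement: float = 0.0001) -> float:
    """Pay creators in proportion to engagement, unless the post is flagged
    under a restricted-monetization policy (e.g. tragedy or conflict)."""
    if flagged_sensitive:
        # Policy says restricted; the article notes enforcement is inconsistent,
        # so in practice many such posts never reach this branch.
        return 0.0
    engagement = impressions + 10 * likes + 20 * replies  # assumed weighting
    return engagement * rate_per_engagement

# A viral false post that escapes flagging out-earns a modest accurate one.
print(estimate_payout(impressions=2_000_000, likes=50_000, replies=8_000,
                      flagged_sensitive=False))  # 266.0
print(estimate_payout(impressions=40_000, likes=900, replies=120,
                      flagged_sensitive=False))  # 5.14
```

Under any formula of this shape, accuracy never enters the calculation; only engagement does, which is exactly the “economy around disinformation” Dr. Graham describes.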

Inadequate Moderation Systems

X relies on a “Community Notes” system for content moderation, in which users collaboratively add context to potentially misleading posts. However, Dr. Graham emphasized that this approach is unsuitable for divisive breaking-news situations like the Bondi shooting.

The system requires agreement between people with diverse perspectives before notes appear, which often results in significant delays during fast-developing crises. “Meanwhile, they’re racking up the views. They are being reported on. They are being picked up by [other channels],” Dr. Graham said. “It’s spreading like wildfire and, you know, 10, 12, 24 hours later we still don’t see any context added.”
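To see why requiring cross-perspective agreement stalls during breaking news, consider a minimal sketch of such a visibility rule. The clusters, thresholds, and function names below are illustrative assumptions, not X’s actual algorithm (the production Community Notes ranker is open source and scores raters and notes with a matrix-factorization model over rating history).

```python
# Minimal sketch of a "bridging" visibility rule: a note is shown only after
# raters from different perspective clusters agree it is helpful.
# Clusters and thresholds here are invented for illustration.

from collections import defaultdict

def note_is_visible(ratings, min_helpful_per_cluster=2):
    """ratings: list of (rater_cluster, rated_helpful) tuples."""
    helpful_by_cluster = defaultdict(int)
    for cluster, helpful in ratings:
        if helpful:
            helpful_by_cluster[cluster] += 1
    # Require agreement from at least two distinct perspective clusters.
    agreeing = [c for c, n in helpful_by_cluster.items()
                if n >= min_helpful_per_cluster]
    return len(agreeing) >= 2

# During a polarized breaking-news event, one side rates early and alone,
# so the note stays hidden while the post keeps accumulating views.
early_ratings = [("A", True), ("A", True), ("A", True), ("B", False)]
print(note_is_visible(early_ratings))  # False: no cross-perspective agreement yet
```

The rule is a reasonable defense against partisan brigading in normal conditions, but in a fast-moving crisis the cross-cluster consensus it waits for may take hours to form, which is precisely the delay Dr. Graham describes.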

An Infrastructure Problem Requiring Regulation

Dr. Graham characterized social media misinformation as an “infrastructure problem” requiring regulatory solutions. He advocated for addressing platform incentives and increasing access to social media data.

“We’re living in a dark age of access to social media data,” he noted, arguing that researchers and stakeholders need better visibility into algorithms, content boosting mechanisms, and the prevalence of harmful content.

The European Union recently fined X 120 million euros (A$210 million) for breaches of its Digital Services Act, including creating barriers for researchers attempting to access public data.

“The European Union’s Digital Services Act faces this issue head-on, and I think it’s doing a really excellent job of trying to get inroads into the platforms,” Dr. Graham said. “They need to share data with people. We need to know what’s going on.”

“We need to recognise that platforms like X are now modern infrastructure, like bridges are infrastructure, like telephone wires are infrastructure,” he concluded. “If there’s something problematic about those, then we need to change them; otherwise they’re going to keep doing the same things.”
