The tech industry in Australia is considering stepping back from its commitment to combat online misinformation, claiming the issue has become too politically divisive to regulate effectively. This potential retreat from responsibility comes as digital platforms question the very definition of misinformation in a new discussion paper.
Industry lobby group Digi, which launched the Australian Code of Practice on Disinformation and Misinformation in 2021, has released a paper questioning whether misinformation should remain part of the voluntary code. Major signatories include Meta (formerly Facebook), Google, and Microsoft; Twitter (now X) was removed from the code in 2023.
“Recent experience has demonstrated misinformation is a politically charged and contentious issue within the Australian community,” Digi stated in its discussion paper. The group argues that misinformation is “subjective” and “fundamentally linked to people’s beliefs and value systems.”
The voluntary code currently requires participating platforms to provide tools for users to report misleading content and to publish annual transparency reports detailing their efforts to combat false information. The Australian Communications and Media Authority (ACMA) defines misinformation as false or deceptive content that may not be deliberately spread, while disinformation refers to information deliberately circulated to cause confusion or undermine trust in institutions.
Critics see the industry’s potential retreat as an attempt to avoid responsibility for a problem their platforms help perpetuate. Tom Sulston, head of policy at Digital Rights Watch, accused the tech industry of giving misinformation a “brush-off” despite it remaining a significant issue that social media companies profit from.
“One of the key causes of the spread of misinformation is the way that the social media companies choose to promote it because it is exciting to users, draws a lot of comments, creates engagement and increases their advertising revenue,” Sulston said.
The timing of this potential policy shift coincides with major platforms scaling back their fact-checking programs in Australia. Both Meta and Google have reduced their fact-checking initiatives following Donald Trump’s re-election in the United States and amid a broader industry pivot toward community-led moderation systems similar to those used by X.
Timothy Graham, an associate professor at Queensland University of Technology, acknowledged the challenges in defining and regulating misinformation. “For a given piece of content, the problem is often just establishing what the proposition actually is, which we could then ‘verify’ against the facts,” he explained. “People often don’t agree on what is actually being asserted, let alone how to evaluate its truthfulness.”
This industry reassessment follows the Albanese government’s decision to abandon planned legislation for a mandatory misinformation code last year after facing widespread opposition. The voluntary approach has shown mixed results, with ACMA’s latest report noting that fewer individual pieces of content were flagged for violating misinformation policies, despite 74% of Australian adults expressing concern about online misinformation.
X was expelled from the voluntary code nearly two years ago after removing tools for reporting misinformation during Australia’s 2023 Voice referendum, highlighting the fragility of industry self-regulation.
Sunita Bose, Digi’s managing director, emphasized that the outcome of the current review will be shaped by stakeholder submissions rather than being driven by platforms’ recent policy changes. The consultation period for the code runs until November 3.
The potential weakening of misinformation controls in Australia comes at a time when digital platforms face increased scrutiny globally over their role in spreading false information. Critics argue that effective regulation should focus on the algorithms that amplify content rather than attempting to police individual posts, which remains a more challenging and contentious approach.
8 Comments
This is a tricky balance – protecting free speech while also limiting the spread of demonstrably false information. I hope the industry and policymakers can find a nuanced solution that upholds democratic principles while still addressing the real harms of misinformation.
Agreed. It’s a difficult line to walk, but one that’s important to get right. Maintaining public trust in information sources should be a key priority.
I can understand the industry’s perspective that misinformation is a subjective and politically charged topic. However, I’m not sure that’s a valid reason to back away from addressing it. Reliable information is essential for a functioning democracy.
Stepping back from combating misinformation could have serious consequences for public discourse and decision-making. I hope the tech companies reconsider this position and find ways to uphold their responsibility, even in the face of political challenges.
The voluntary code seems like a reasonable approach, but if the major platforms start to disengage, that could be problematic. I wonder if government intervention may be needed to ensure there are consistent standards and accountability around misinformation online.
You raise a good point. If the tech companies abandon these efforts, it could create a regulatory vacuum that allows misinformation to spread unchecked. Some level of public-private cooperation may be necessary to address this complex issue effectively.
This is an interesting development. The tech industry seems to be struggling with the complex and politically charged nature of misinformation. While I understand the difficulty in defining and regulating it, I hope they can find a way to still address this issue responsibly.
Misinformation can have real-world consequences, so I’m concerned about tech companies stepping back from this responsibility. Hopefully they can find a balanced approach that respects differing views while still curbing the spread of verifiably false information.