AI-Generated Disinformation Emerges as Key Concern in Labor’s Election Victory Review
More robust measures are needed to combat artificial intelligence-generated disinformation, according to a comprehensive review of the Australian Labor Party’s landslide federal election victory. The report, released Friday by Labor national president Wayne Swan, highlights both the party’s campaign strengths and emerging threats to electoral integrity.
The four-person review panel found that AI was actively deployed by Labor’s political opponents during the 2025 campaign to spread misleading information. This included unlabelled AI-generated videos and images used in posters and online content.
“International evidence shows that AI fundamentally lowers the cost and increases the scale of disinformation operations,” the review stated. “Increasingly, people cannot tell authentic from manufactured information.”
The panel identified a concerning evolution in disinformation tactics. Rather than pushing a single piece of obviously AI-generated viral content, campaigners now produce thousands of slightly different versions of the same message, making the material significantly harder to detect and counter. The review warned that the Australian Electoral Commission currently possesses only limited powers to address computer-generated disinformation, pointing to a regulatory gap that could affect future elections.
While praising the campaign that delivered Prime Minister Anthony Albanese 94 seats in the House of Representatives—significantly damaging both the Liberal Party and the Greens—the review emphasized that the opposition’s missteps contributed substantially to Labor’s success.
“Post-election research showed voters felt that the coalition was out of touch, ran a poor campaign and did not offer meaningful solutions to Australia’s key challenges,” the document noted. Voters expressed particular concern about the cost and extended timelines of the coalition’s nuclear energy proposal. They also responded negatively to then-opposition leader Peter Dutton’s promise to end working-from-home arrangements for public servants, a policy abandoned during the campaign.
“Rather than a retrospective referendum, Labor turned the election into a contest over which party would make Australians better off in three years’ time,” the reviewers observed. “This put the onus on the Liberal Party to detail their plans and present Australians with a viable alternative, and they failed this test.”
The review was compiled by former Victorian Labor secretary Chris Ford, Australian Services Union secretary Emeline Gaske, former WA Labor official Lenda Oshalem, and strategic advisor Moksha Watts. Their analysis emphasized that Labor must fulfill its campaign promises over the next two years to establish a solid foundation for its next election bid.
“Delivery is not optional; it is the cornerstone of Labor’s agenda and will shape our bid for a third term,” the panel stressed, underscoring the importance of policy implementation ahead of the next electoral cycle.
The report also flagged concerns about the increasing prevalence of three-cornered electoral contests, where government, opposition, and independent or third-party candidates all have realistic chances of winning seats. This trend could reshape campaign dynamics in previously secure electorates.
“There is no such thing as a safe seat and campaigns must be attuned to the likelihood of three-cornered contests regardless of where they currently sit on the 2025 post-election pendulum,” the review cautioned. It specifically warned that Labor needs to remain vigilant against independent challengers who can leverage local issues to mount effective campaigns.
Meanwhile, the Liberal Party has also conducted its own campaign review, though its release has reportedly been delayed due to concerns raised by Mr. Dutton that portions of the document may contain potentially defamatory material regarding him and his former staff.
The review ultimately positions AI-generated disinformation as a growing challenge for democratic processes in Australia, suggesting that political parties and electoral authorities must develop more sophisticated approaches to protect election integrity in an era of rapidly advancing technology.