Vietnamese Network Spreads AI-Generated Political Misinformation Targeting Australia

From anti-transgender narratives about Olympic swimmers to fabricated political comments, a network of Facebook pages managed from Vietnam has been capitalizing on Australia’s increasingly polarized political climate to promote AI-generated content designed primarily for profit.

The operation began in mid-2025 when pages with seemingly innocent names like “Swimming Secrets” and “Tennis Triumph” appeared as fan accounts posting athlete updates. These pages, however, mixed legitimate sports content with falsehoods—including claims that Australian swimmer Mollie O’Callaghan would boycott future Olympics if transgender athletes were allowed to compete.

AFP’s investigation tracked more than a dozen of these sports and human-interest pages, which later pivoted to focus exclusively on Australian politics. The content combines actual news with fabrications, with some posts receiving thousands of shares. After being contacted by AFP, Meta removed 13 pages in March for violating site policies.

“The websites display almost industrial level forms of misinformation,” said open-source intelligence analyst Giano Libot. “It’s designed for the algorithm in search engines to pick up.”

Vietnam has emerged as a hub for such operations, where low labor and electricity costs have fostered a cottage industry of social media click farming. This latest network follows a pattern similar to one uncovered by AFP last year, when more than 30 baseball-themed pages, run from Vietnam, published false political claims ahead of the World Series.

“Often the purpose of disinformation is not to benefit a particular party, but to destabilize communities and create an era of distrust,” explained Jeannie Paterson, co-director of the University of Melbourne’s Centre of AI and Digital Ethics. “Australia is an ideal place at the moment for this sort of destabilization exercise.”

The network has particularly exploited recent tensions within Australia’s opposition coalition and the rise of Pauline Hanson’s far-right One Nation party. Among the most widespread claims was that Hanson had launched a $12 million lawsuit against Prime Minister Anthony Albanese’s Labor Party.

These identical posts appeared across multiple sports-themed Facebook accounts, linking to websites cluttered with advertisements and content in several languages, including Vietnamese. Analysis using multiple AI detection tools—including one co-developed by AFP—determined the articles were “likely machine-generated.”

Facebook transparency data revealed the pages were managed by administrators in Vietnam, despite listing contact details associated with American hotels and casinos—a clear attempt to disguise their origin.

A spokesperson for One Nation described the pages as “a clear case of foreign interference in domestic Australian politics.” Prime Minister Albanese’s office did not respond to requests for comment.

While Australia’s next federal election isn’t scheduled until 2028, state-level elections are approaching in Victoria this November and in New South Wales next year. Ika Trijsburg of the Australian National University warned that this onslaught of polarizing content “can sway electoral behavior” at the local level, “because it’s much less entrenched.”

Vietnam enacted legislation in March regulating artificial intelligence—becoming the first Southeast Asian country to do so. The law requires companies to clearly label AI-generated content and applies to developers, providers, and deployers of the technology, whether they are Vietnamese organizations or foreign entities operating in the country.

Despite these efforts, the flow of AI-generated misinformation continues. In mid-February, a new Facebook page called “AU News Today” emerged, publishing Australian political content that mirrored the previously identified network. A separate investigation by the Australian Associated Press uncovered a similar Vietnam-based network of accounts disguised as news outlets that remained active through March.

Shaanan Cohney, a cybersecurity expert at the University of Melbourne, described the situation as “a levelling-up of the skills in the disinformation world, which makes it a cat-and-mouse game.” He added: “Even if things were easy to detect before, it gets harder to bring down these networks.”

As AI technologies become more sophisticated and widely available, the challenge of combating politically targeted misinformation is likely to intensify, particularly in politically charged environments like Australia’s current landscape.



© 2026 Disinformation Commission LLC. All rights reserved.