AI Disinformation Campaign Targets Taiwan’s 2024 Presidential Election

Taiwan’s recent presidential election became a testing ground for sophisticated AI-generated disinformation, according to an analysis by Dr. Austin Horng-En Wang, Associate Political Scientist at RAND Corporation and Associate Professor at the University of Nevada, Las Vegas. The election faced an unprecedented wave of deepfake videos, fake social media accounts, and AI-powered news anchors that spread false narratives in what appears to be a coordinated campaign.

One of the most alarming examples occurred on January 9, just days before the election, when a manipulated video of Democratic Progressive Party (DPP) candidate Lai Ching-te went viral across multiple platforms. The deepfake altered Lai’s original campaign video titled “On the Road” by using synthesized lip movements and voices to make it appear as though Lai was admitting to having an illegitimate child and expressing concern about a potential sex scandal.

The altered video spread rapidly across Facebook, X (formerly Twitter), TikTok, and the messaging app Line. Dr. Wang’s investigation revealed that accounts sharing the manipulated content also frequently distributed articles and videos from Chinese state media. Notably, the altered videos contained numerous simplified Chinese characters, suggesting possible links to Chinese actors.

While Dr. Wang’s research indicates the deepfake first appeared in November 2023, it wasn’t until early January, approximately two weeks before the election, that it was massively distributed. The timing appears strategic, designed to cause maximum disruption during the critical final phase of the campaign.

The disinformation effort employed multiple AI-powered tactics beyond deepfake videos. Dr. Wang discovered that numerous AI-generated Facebook accounts played a crucial role in distributing the altered videos across Taiwan. These accounts typically featured profile pictures created by image-generation or face-swapping software and were linked to Facebook groups promoting pro-China narratives.

“The usage of AI-generated profile photos not only reduced operational costs but also helped these accounts avoid scrutiny,” Dr. Wang notes in his analysis. “As these profile pictures resemble common features associated with local residents, the content distributed by these accounts appears more authentic.”

Additionally, more than 20 new YouTube channels emerged before the election featuring virtual anchors with AI-generated voices. These synthetic presenters read scripts containing text from Chinese state media articles critical of DPP politicians, making the information seem more credible and easier to digest for viewers who prefer audio content.

According to Dr. Wang’s analysis, the disinformation campaign failed to significantly sway the overall election outcome, but it was not without impact. The mass distribution of the altered video produced a spike in public interest in the illegitimate-child rumor and distracted attention from substantive policy discussions and polling data. His research showed that supporters of Lai’s opponents rated him more negatively after the altered video was widely circulated.

Looking ahead, Dr. Wang warns that such incidents are likely to become more frequent and widespread, posing significant threats to upcoming elections globally. He emphasizes that when information operations originate from foreign actors, they often cannot be regulated through domestic legal frameworks, creating substantial challenges for democratic discourse.

To combat these threats, Dr. Wang recommends several approaches. First, social media platforms should become more transparent and release data that helps users evaluate account trustworthiness, potentially following EU policies on advertising databases and administrator geolocation. Second, traditional media should work to restore their credibility as reliable information sources amid the flood of AI-generated content. Finally, social media companies should proactively disclose information about foreign manipulation rather than simply deleting reported content.

This case study from Taiwan offers critical lessons for democracies worldwide as they prepare to face similar AI-facilitated disinformation campaigns. With major elections scheduled in numerous countries this year, including the United States, understanding these tactics and developing effective countermeasures has become increasingly urgent for preserving electoral integrity in the digital age.
