Poland’s request that the European Commission investigate AI-generated TikTok videos promoting an EU exit has renewed scrutiny of the platform under the Digital Services Act (DSA) and could carry significant regulatory consequences.

The Polish government has formally asked European regulators to investigate TikTok after discovering a series of AI-generated videos featuring young women urging Poland to leave the European Union. While TikTok removed the account following media attention, Polish officials are pushing for a formal investigation under the DSA, which could result in penalties of up to 6% of TikTok’s global annual revenue.

This case represents a critical test of how the European Union will apply its systemic risk regulations to viral short-form video content. The investigation would examine TikTok’s content moderation practices, recommendation algorithms, and measures to identify and label AI-generated content—particularly content with political implications.

“This incident demonstrates the emerging challenges platforms face with synthetic media,” said a spokesperson from Europe’s digital policy department. “The Commission has clear expectations for very large platforms to assess and mitigate risks associated with generative AI, especially around elections and political discourse.”

For TikTok, the stakes extend beyond potential financial penalties. A formal investigation could damage the platform’s reputation, force costly changes to its content moderation systems, and potentially restrict how its algorithm promotes certain types of content. These outcomes could significantly impact TikTok’s growth trajectory and advertiser confidence in the platform.

Although ByteDance, TikTok’s parent company, remains privately held, the ramifications of stricter European regulation would likely affect the broader social media ecosystem. Investors with exposure to global tech or social media companies should monitor developments closely, as similar platforms may face increased scrutiny or need to implement comparable safeguards.

Industry analysts suggest that any enforcement action could trigger higher trust and safety spending across the sector. “If the Commission mandates specific controls for AI content or recommendation systems, we’ll see those costs reflected in operating margins for all major platforms,” noted one Singapore-based tech analyst.

The potential investigation comes amid growing global concern about AI-generated misinformation. The videos in question reportedly used realistic-looking AI avatars to deliver persuasive anti-EU messages, demonstrating how synthetic media can be deployed rapidly and at scale to influence public opinion.

In response, major platforms have begun implementing various safeguards, including AI content labels, watermarking, and enhanced detection models. However, regulators increasingly question whether these measures are sufficient, transparent, and consistently applied.

For advertisers, particularly in markets like Singapore where brand safety is paramount, the situation highlights ongoing concerns about content adjacency. If TikTok faces regulatory action, advertisers may shift budgets toward platforms with more robust content verification systems and transparent reporting.

The European Commission’s approach to this case will signal its broader strategy for governing generative AI content. Observers should watch for formal information requests, potential interim measures, or a statement of objections if the investigation proceeds.

While regulatory frameworks differ globally, stricter EU enforcement typically shapes platform policies worldwide. Singapore’s own online-safety regime, anchored by the Protection from Online Falsehoods and Manipulation Act (POFMA), could benefit indirectly if global platforms upgrade their content moderation systems in response to European pressure.

As this case develops, both investors and advertisers would be wise to monitor compliance costs, increased demand for third-party verification services, and platform transparency reports that could signal shifting operational priorities in response to regulatory pressure.

