TikTok Boosts Election Integrity Efforts Amid European Voting Season
TikTok has released its sixth Transparency Report under the EU Code of Conduct on Disinformation, highlighting significant improvements in election integrity, AI content labeling, and fact-checking during the first half of 2025. The report, which now falls under the Digital Services Act framework, covers 30 European countries with consistent methodology to ensure comparable trend analysis.
The platform ramped up protection measures during elections in Croatia, Germany, Poland, Portugal, and Romania, with a particular focus on tackling fake engagement. Enhanced detection systems led to increased removals of impersonation accounts, including nearly 3,000 accounts falsely representing government officials, politicians, and political parties.
“We’re seeing faster enforcement during critical election periods,” said Caroline Greer, TikTok’s Director of Public Policy and Government Relations in Brussels. The platform achieved a notable 15 percentage point increase in “zero-view removals” of content violating civic and election integrity policies, meaning 90% of such content was taken down before users could see it.
Romania’s presidential election annulment in December 2024 prompted TikTok to implement additional safeguards, building on experience gained from over 200 global elections since 2020. The company established a Mission Control Centre, allocated extra resources across teams, and responded rapidly to emerging risks. These efforts coincided with expanded fact-checking initiatives that drove in-app election centre page views to more than 2 million, doubling previous figures.
The platform also participated in an election stress test organized by the European Commission ahead of Germany’s federal elections and partnered with European fact-checkers on media literacy campaigns throughout the region.
On the AI transparency front, TikTok expanded Content Credentials (C2PA) certification to additional features and introduced a visible watermark for AI content created within the app. These measures contributed to more consistent content labeling: creator-labeled AI videos increased by 36% to over 8.7 million, while automatically labeled AI-generated content grew 81% to approximately 5.5 million.
More transparent AI identification appears to correlate with improved compliance, as policy-violating AI content removals decreased by 53% to fewer than 25,000, with views on removed AI content falling by 47%.
Fact-checking operations scaled significantly during the election-heavy first half of 2025. The number of videos reviewed by third-party fact-checkers more than doubled to 13,000. Improved detection systems helped surface potentially problematic content, resulting in an 80% increase in removals following fact-check assessments and a 123% rise in content removed from the recommendation algorithm in users’ For You feeds.
While appeal volumes increased in line with enforcement actions, the success rates for those appeals remained relatively stable, suggesting the platform maintained consistent policy enforcement despite handling larger volumes.
TikTok continued efforts against inauthentic behavior, with fake follower removals returning to historical norms while interventions against fake engagement metrics increased. The company also reported ongoing disruption of covert influence operations, details of which appear in their monthly disclosure reports.
Educational initiatives targeting hate speech showed improved results. The platform’s Holocaust education campaign achieved greater reach and engagement, with video interventions generating more impressions and clicks. Search prompts became more targeted and saw higher click-through rates following a refreshed in-app educational hub launched in January in partnership with UNESCO and the World Jewish Congress.
Researcher access to platform data also expanded, with increased applications and approvals for both Research Tools and the Commercial Content Library. The transparency measures mandated by the EU Code of Conduct attracted growing interest, with more researchers visiting the Transparency Centre and downloading available data during this period.
“Our focus remains steady,” Greer emphasized. “Protect elections and civic debate. Make AI content clearer with labels and watermarking. Work closely with fact-checkers and researchers. And keep improving the speed and precision of enforcement at scale.”
The full report is available in the TikTok Transparency Centre, offering additional metrics and insights into the platform’s content moderation efforts across European markets.
20 Comments
I hope these transparency efforts and policy changes set a positive example for other social media platforms to follow. Consistent industry-wide standards would be ideal.
Agreed. A coordinated approach across platforms could have an even greater impact on reducing the spread of harmful content.
Interesting to see the focus on European elections. I wonder how these efforts might translate to other regions and political contexts around the world.
That’s a good point. Scalable solutions that can be adapted to diverse electoral environments would be ideal.
Curious to see how TikTok’s approach compares to other major social media platforms. Are they leading the pack in terms of disinformation mitigation strategies?
Labeling impersonation accounts is a smart move. Helping users quickly identify fake accounts represents an important safeguard against deception.
Agreed. Clear labeling empowers people to make more informed decisions about the content they engage with.
Curious to see how TikTok’s efforts evolve over time. Continuous refinement and innovation will be key to staying ahead of bad actors.
Glad to see TikTok taking stronger action against disinformation during elections. Enhanced detection and faster removals of impersonation accounts are important steps to protect the integrity of the democratic process.
I agree. Transparency on these efforts is crucial for building public trust.
Enhanced detection systems seem to be making a real difference. Kudos to TikTok for continuously improving their tools to combat disinformation.
The Digital Services Act framework likely adds more structure and accountability to these transparency reports. Looking forward to seeing how TikTok’s efforts evolve under the new regulations.
The increased transparency is a welcome development. Sharing details on policy enforcement and platform improvements builds trust with users.
It’s encouraging to see TikTok taking a proactive stance on election integrity. Maintaining the health of the democratic process should be a top priority for all social media companies.
Absolutely. Protecting free and fair elections is fundamental to a healthy democracy.
Curious to see how the platform’s new systems performed during the various European elections. The consistency in methodology across countries should help provide meaningful trend analysis.
The increase in ‘zero-view removals’ is an impressive statistic. Proactively removing harmful content before users see it is key to limiting the spread of disinformation.
Indeed, catching and removing problematic content early is a more effective strategy than relying on user reports.
Tackling fake engagement is crucial. Bots and coordinated campaigns can distort the online discourse and undermine authentic participation.
Absolutely. Disrupting those manipulation tactics is vital for preserving the integrity of social media platforms.