Rising AI Content and TikTok Disinformation Fuel Concerns Across Europe
TikTok is emerging as a significant source of misleading content in the European Union, according to a comprehensive new study that raises fresh questions about platform accountability in the digital age.
Research conducted by Science Feedback reveals that approximately one in four TikTok posts contains misleading elements, positioning the platform as a more prominent purveyor of disinformation than major competitors including Facebook, YouTube, and X (formerly Twitter).
The extensive analysis covered four European nations—France, Poland, Slovakia, and Spain—and evaluated content across multiple thematic areas. Health-related narratives dominated the landscape of misleading information, consistent with broader patterns observed across various digital ecosystems since the COVID-19 pandemic began.
“What we’re seeing isn’t isolated incidents but rather systematic issues embedded within platform structures,” said a researcher involved in the study who requested anonymity. “Disinformation has become a persistent feature rather than an occasional aberration.”
Particularly concerning is the growing prevalence of AI-generated content, especially in video formats. The study identified synthetic material accounting for a substantial portion of misleading posts, creating new challenges for content moderation. Despite existing policies on many platforms requiring disclosure of AI-generated material, researchers found most identified content lacked clear labeling or attribution.
“The technology to create convincing synthetic media has outpaced the safeguards,” explained digital policy expert Maria Kowalski at a recent Brussels forum. “When users can’t distinguish between authentic and artificially generated content, it fundamentally undermines trust in the entire information ecosystem.”
The findings come at a critical juncture as European regulators implement the Digital Services Act (DSA), landmark legislation designed to make digital platforms more accountable for the content they host. While the DSA incorporates voluntary commitments from the EU's disinformation code, it stops short of imposing mandatory requirements for identifying AI-generated material.
TikTok, owned by Chinese company ByteDance, has faced mounting scrutiny in Europe and North America over its content moderation practices. The platform has repeatedly stated its commitment to fighting misinformation, citing investments in content review teams and partnerships with fact-checking organizations.
“We take our responsibility to protect our community from misleading content extremely seriously,” a TikTok spokesperson told reporters in response to the study. “We’re constantly enhancing our systems to detect and remove harmful misinformation before it can gain traction.”
Industry observers note that TikTok's algorithmically driven recommendation system, which has propelled its rapid growth to over a billion users worldwide, may inadvertently amplify problematic content because of its emphasis on engagement metrics.
The European Commission has signaled increased attention to the issue, with Internal Market Commissioner Thierry Breton recently warning platforms about their responsibilities under the DSA. “The days of big online platforms behaving like they are ‘too big to care’ are coming to an end,” Breton stated in a press conference last month.
For European policymakers, the challenge remains striking an appropriate balance between protecting citizens from harmful content and preserving freedom of expression and innovation. The study's findings are expected to inform ongoing policy discussions about platform transparency, algorithmic accountability, and the evolving responsibilities of digital platforms.
As AI technology continues to advance, with tools like ChatGPT and Midjourney making content generation increasingly accessible, distinguishing between authentic and synthetic material will likely become even more challenging in the coming years.
The debate extends beyond Europe’s borders, with similar concerns being raised in the United States, where presidential election campaigns are already contending with AI-generated content and targeted disinformation efforts.
Experts suggest that addressing these challenges will require a multifaceted approach involving regulatory frameworks, platform accountability measures, and digital literacy initiatives to help users navigate an increasingly complex information environment.