European Platforms Grapple with Rising Misinformation, New Study Reveals
A comprehensive new study has found alarming levels of misinformation across major social media platforms in Europe, with TikTok emerging as the most problematic platform for false and misleading content. The findings, released by Science Feedback and its research partners, mark the second wave of measurements across six Very Large Online Platforms (VLOPs) in four EU countries.
The research, conducted in October 2025 as part of the SIMODS project, examined content across TikTok, Facebook, YouTube, X/Twitter, Instagram, and LinkedIn in France, Spain, Poland, and Slovakia. By comparing results with an earlier measurement, researchers confirmed these issues represent structural problems rather than temporary fluctuations.
“The consistency of results across time confirms that the phenomena we are measuring are structural, not incidental,” the researchers noted. This reproducibility validates their methodology as a potential benchmark for regulatory compliance assessments under the Digital Services Act (DSA).
TikTok continues to show the highest prevalence of misinformation: approximately 25% of posts, weighted by user exposure, contained false or misleading information, up from about 20% in the previous measurement. YouTube also saw a concerning increase, rising from 8.5% to 12%.
Facebook (15%), X/Twitter (11%), and Instagram (8%) all showed significant levels of problematic content, while LinkedIn maintained the lowest rate at just 1%. Perhaps most concerning, three platforms—TikTok, X/Twitter, and YouTube—now contain more problematic content than credible content in the samples analyzed.
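The prevalence figures above are exposure-weighted: a misleading post seen by many users counts for more than one seen by few. A minimal sketch of that idea, with invented post data and a views-based weighting that is only an assumption about the study's approach, might look like this:

```python
# Hypothetical illustration of an exposure-weighted misinformation share.
# The post data and the use of views as the exposure weight are assumptions
# for illustration, not the study's actual methodology.

def exposure_weighted_share(posts):
    """Fraction of total exposure (views) going to misleading posts."""
    total = sum(p["views"] for p in posts)
    flagged = sum(p["views"] for p in posts if p["misleading"])
    return flagged / total if total else 0.0

posts = [
    {"views": 900, "misleading": False},
    {"views": 300, "misleading": True},   # one misleading post with high reach
    {"views": 100, "misleading": False},
]

share = exposure_weighted_share(posts)
print(f"{share:.0%}")  # → 23% of exposure, though only 1 of 3 posts is misleading
```

Weighting by exposure rather than counting posts equally is what lets a small number of viral misleading posts dominate the measurement.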
The study also confirmed the persistent “misinformation premium” across platforms, where low-credibility accounts receive disproportionately high engagement compared to reliable sources. On YouTube, low-credibility accounts now receive approximately 11 times more interactions per post than high-credibility accounts of comparable size, up from 8.5 times in the previous measurement.
X/Twitter showed the most dramatic deterioration, with its misinformation premium jumping from 4 times to 10 times. Facebook (9x), Instagram (4x), and TikTok (2x) all continue to amplify problematic content through their engagement metrics. LinkedIn remains the only platform where no significant premium was observed.
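The "misinformation premium" described above is a ratio: average interactions per post for low-credibility accounts divided by the same figure for high-credibility accounts of comparable size. A minimal sketch, using invented account data rather than the study's dataset, could be:

```python
# Hedged sketch of computing a "misinformation premium" as a ratio of
# average interactions per post. The account figures are invented for
# illustration and do not come from the study.

def avg_interactions(accounts):
    """Mean interactions per post across a group of accounts."""
    all_posts = [n for a in accounts for n in a["interactions_per_post"]]
    return sum(all_posts) / len(all_posts)

low_cred = [{"interactions_per_post": [500, 700, 600]}]   # misleading sources
high_cred = [{"interactions_per_post": [50, 70, 60]}]     # reliable sources

premium = avg_interactions(low_cred) / avg_interactions(high_cred)
print(f"premium ≈ {premium:.1f}x")  # → 10.0x: >1 means misleading content is amplified
```

A premium above 1 means the platform's engagement dynamics reward low-credibility sources; the study's per-platform figures (11x on YouTube, 10x on X/Twitter, and so on) are this kind of ratio.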
The monetization of misinformation continues to be a systemic problem. Where researchers could make inferences from available data, they found that 81% of eligible low-credibility channels on YouTube appear to be monetized, compared to 90% of high-credibility channels. On Facebook, while the gap is wider (22% vs. 51%), the fact that over one-fifth of low-credibility accounts benefit from advertising revenue suggests incomplete enforcement of content policies.
This wave of research introduced two new indicators: AI-generated misinformation and audience growth patterns. The findings reveal AI-generated false content has become a significant threat, especially on video platforms. Approximately 24% of misinformation on TikTok and 19% on YouTube contained AI-generated elements, with these posts accumulating an estimated 34 million views across platforms.
Alarmingly, over 83% of identified AI-generated misinformation carried no visible label, highlighting a major gap in platform transparency efforts. Health misinformation dominated this category, with researchers noting a disturbing trend of AI-generated doctor impersonations spreading false medical advice.
The study also tracked audience growth patterns, finding that on most platforms, low-credibility accounts grew at rates similar to high-credibility ones. X/Twitter proved the exception: there, accounts spreading misinformation grew their audiences at roughly 3.5 times the rate of reliable sources.
Health misinformation remains the largest category of false content (43%), followed by misinformation about the Russia-Ukraine war (23%) and national politics (12%).
The researchers highlighted that data access requests submitted under DSA Article 40.4 in January 2026 had received no response by the time of publication, underscoring ongoing challenges with platform transparency.
As the integration of the Code of Conduct on Disinformation into the DSA framework took effect in July 2025, these findings provide critical benchmarks for regulatory compliance. The researchers concluded that while their indicators are now ready to serve as formal measurement tools, "what is now required is the political will to use them."
11 Comments
This study underscores the need for a multi-faceted approach to address misinformation, involving platforms, regulators, fact-checkers, and the public. Coordinated efforts from all stakeholders will be essential.
The consistency of the results over time suggests these are structural problems, not just temporary glitches. Glad to see the research may provide a benchmark for regulatory compliance assessments under the DSA.
Agreed, having a reliable methodology to measure misinformation levels is crucial for effective platform regulation and accountability.
This study highlights the persistent challenge of misinformation online, even as platforms claim to be addressing the issue. Robust fact-checking and transparency measures seem essential to regain public trust.
While the results are concerning, I’m hopeful that the increased attention on this issue and the availability of robust research will spur platforms and policymakers to take more effective action.
Interesting findings on the prevalence of misinformation across major social media platforms in Europe. Curious to see how this compares to other regions and how platforms are working to address these issues.
The findings on TikTok are particularly worrying given the platform’s popularity, especially among younger users. Platforms must prioritize proactive detection and removal of false and misleading content.
Curious to learn more about the specific types of misinformation prevalent across these platforms and how they differ by region or content type. Understanding the nuances could inform more targeted solutions.
TikTok emerging as the most problematic platform for false and misleading content is concerning. Platforms need to take stronger action to identify and remove misinformation, especially on fast-growing social media.
While the findings are troubling, I’m encouraged that the researchers used a reproducible methodology that could serve as a benchmark for regulatory compliance. Consistent monitoring and accountability are key.
Agreed, establishing reliable metrics is a crucial first step towards meaningful platform regulation and accountability around misinformation.