The social media platform X has emerged as the primary channel for disinformation campaigns targeting the European Union, according to a new report from the EU’s diplomatic service. The report reveals that a staggering 88% of analyzed disinformation content was distributed through X, formerly Twitter, which is owned by billionaire entrepreneur Elon Musk.
The European External Action Service (EEAS) detailed its findings in “Threats of Foreign Interference and Information Manipulation,” which examined approximately 43,000 pieces of content classified as disinformation in 2025. The platform’s overwhelming dominance in this space dwarfs other social media channels, with Telegram accounting for just 3% and Facebook for 2% of identified disinformation.
Analysts attribute X’s prominence in disinformation campaigns to several factors, including “the presence of networks of coordinated inauthentic behavior, the ease of creating fake accounts, and easier access to data.” The report also notes that most major social platforms restrict access to data that would help researchers assess the full scale of information manipulation activities.
Despite X’s central role, the EEAS found that sophisticated disinformation actors typically operate across multiple platforms simultaneously. These operations often combine traditional social media posts with messaging apps like WhatsApp and Telegram, creating an integrated approach that aims to infiltrate information ecosystems. This multi-platform strategy helps bad actors boost both visibility and perceived credibility while allowing them to target specific demographics with tailored messaging.
The report highlights a concerning trend in the growing use of artificial intelligence for disinformation activities targeting the EU, with a 259% increase compared to 2024. “Russian and Chinese actors have fully deployed AI tools to accelerate content production and increase interference activities with fewer resources,” according to the findings. European officials noted that AI technology is significantly reducing the cost of mounting these operations, making them more accessible and potentially more prevalent.
Politicians constitute the primary targets of these campaigns, representing 66% of all attacks. Ukrainian President Volodymyr Zelensky, French President Emmanuel Macron, German Chancellor Friedrich Merz, and European Commission President Ursula von der Leyen are among the most frequently targeted figures. Rather than focusing solely on the individuals themselves, these campaigns often attack “what an individual represents,” such as democratic values, and attempt to exploit their platforms to reach specific audiences.
The targeting patterns extend beyond individuals to organizations, with political entities leading at 36% of attacks, followed by media organizations (23%) and military or security institutions (22%). Electoral periods present particularly vulnerable moments for disinformation campaigns, as do periods of social unrest or protests, which can be exploited to “feed perceptions of chaos, fear, and disorder” against local authorities.
The EEAS cautions that its report should not be considered exhaustive, as monitoring activities do not cover all regions and languages, and likely capture only “a small portion” of total disinformation activities.
This report comes at a time of increasing regulatory scrutiny of social media platforms in Europe. Under the Digital Services Act, which came into full effect last year, very large online platforms like X face enhanced obligations to combat illegal content and address systemic risks, including disinformation that threatens democratic processes.
The findings also coincide with growing international concerns about X’s content moderation policies since Musk’s acquisition of the platform in 2022, which saw significant reductions in trust and safety staff and changes to verification systems that some researchers argue have contributed to the spread of misleading information.
16 Comments
This is concerning, but not entirely surprising. The combination of loose moderation and the business model incentives on X have made it a breeding ground for all kinds of misinformation and manipulation.
Agreed. More transparency and accountability are needed from dominant social platforms to tackle these challenges.
Interesting report on the dominant role of X in spreading EU disinformation. I wonder what specific factors enabled this platform to become such a hub for these activities.
Yes, the report cites the ease of creating fake accounts and accessing data as key drivers. Platforms need to do more to address these issues and curb the spread of disinformation.
The dominance of X in this space is troubling, but not surprising given the platform’s lax approach to moderation and content governance. Stronger regulation may be needed to curb the spread of disinformation.
Agreed. Platforms can’t be allowed to put profits over the public good. Meaningful reforms are long overdue.
I’m curious to learn more about the specific tactics and networks being used to target the EU with disinformation. Understanding the modus operandi is key to developing robust countermeasures.
Good point. The report mentions coordinated inauthentic behavior, but more granular insights would be helpful to inform the policy response.
This report highlights the urgent need for a coordinated, multi-stakeholder effort to address the disinformation challenge. Platforms, governments, and civil society all have a role to play.
Well said. Collaborative solutions that balance free expression with responsible content moderation are essential.
It’s disheartening to see how much disinformation is being spread, often with the help of automated accounts and coordinated campaigns. We need to find ways to combat this scourge more effectively.
Absolutely. Improving data access for researchers and tightening platform policies could be important steps in the right direction.
The predominance of X in spreading EU disinformation is deeply concerning. This platform’s business model and lack of meaningful content moderation have enabled the proliferation of manipulative and harmful content.
Completely agree. Fundamental changes to platform governance and incentive structures are needed to curb these toxic trends.
It’s troubling to see the extent to which X has become a hub for disinformation targeting the EU. This underscores the urgent need for greater transparency, accountability, and effective regulation of social media platforms.
Well said. Policymakers must act decisively to address these systemic challenges and protect the integrity of our information ecosystem.