Study Finds X Leads Major Social Networks in Disinformation Content
X, the social media platform formerly known as Twitter, has the highest proportion of disinformation among six major social networks, according to a new European Commission study. The research, which examined over 6,000 unique social media posts across Facebook, Instagram, LinkedIn, TikTok, X, and YouTube, raises fresh concerns about content moderation on the Elon Musk-owned platform.
The comprehensive analysis focused on three European countries considered particularly vulnerable to disinformation campaigns: Spain, Poland, and Slovakia. These nations were selected due to recent elections or their proximity to the war in Ukraine, factors that make them prime targets for misleading content.
“My message for X is: you have to comply with the hard law. We’ll be watching what you’re doing,” warned Vera Jourova, the EU’s Values and Transparency Commissioner, in response to the findings.
The study, conducted by disinformation monitoring startup TrustLab, measured what researchers called the “ratio of discoverability” of disinformation – essentially the proportion of sensitive content that contains false or misleading information. X ranked highest in this metric, while YouTube demonstrated the lowest ratio among the platforms examined.
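The report summarized here does not spell out how the metric is calculated, but as described it amounts to a simple proportion: the number of posts judged false or misleading divided by the number of sensitive posts examined on each platform. A minimal sketch of that calculation in Python, using entirely hypothetical counts rather than figures from the TrustLab study:

```python
# Illustrative sketch only: hypothetical counts, not data from the TrustLab report.
# The "ratio of discoverability" is treated here as:
#   disinformation posts found / sensitive posts examined, per platform.

from dataclasses import dataclass


@dataclass
class PlatformSample:
    name: str
    sensitive_posts: int       # posts covering sensitive topics that were reviewed
    disinformation_posts: int  # subset judged false or misleading

    @property
    def discoverability_ratio(self) -> float:
        """Proportion of sensitive posts that contain disinformation."""
        if self.sensitive_posts == 0:
            return 0.0
        return self.disinformation_posts / self.sensitive_posts


# Hypothetical numbers purely for illustration of the arithmetic.
samples = [
    PlatformSample("X", 1000, 120),
    PlatformSample("YouTube", 1000, 30),
]

for s in sorted(samples, key=lambda p: p.discoverability_ratio, reverse=True):
    print(f"{s.name}: {s.discoverability_ratio:.1%}")
```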
This development is particularly notable given X’s history with European regulatory frameworks. The platform, then called Twitter, was among many social networks that voluntarily joined the EU’s code of practice on disinformation in 2018. However, under Musk’s leadership, the company withdrew from this voluntary agreement.
Despite withdrawing from the voluntary code, X remains subject to the EU’s Digital Services Act (DSA), which establishes regulatory standards for large technology platforms. The EU plans to transform the voluntary code into a mandatory code of conduct under the DSA.
“Mr. Musk knows that he is not off the hook by leaving the code of practice, because now we have the Digital Services Act fully enforced,” Jourova emphasized. The stakes are significant, as non-compliance with the DSA could result in fines of up to six percent of a company’s global turnover.
The timing of this study coincides with growing concerns about state-sponsored disinformation campaigns in Europe. In September, the EU accused social media companies of failing to effectively counter “large-scale” Russian disinformation operations following the invasion of Ukraine, noting that the “reach and influence of Kremlin-backed accounts” had actually expanded in 2023.
“The Russian state has engaged in the war of ideas to pollute our information space with half-truth and lies to create a false image that democracy is no better than autocracy,” Jourova stated. She described Russia’s efforts as “a multi-million euro weapon of mass manipulation” targeting European citizens, emphasizing that major social media platforms must address these risks.
The threat posed by disinformation is considered especially serious in the current geopolitical context, with the ongoing conflict in Ukraine and upcoming European elections creating an environment where misleading content can have significant consequences.
Adding to these concerns, Commissioner Jourova highlighted that work is underway to address AI-generated disinformation ahead of European elections. She mentioned plans to meet with representatives from OpenAI to discuss these issues, signaling that regulators are increasingly focused on both human-created and AI-generated misleading content.
The BBC approached X for comment on the study's findings, but the company had not responded to the allegations regarding disinformation on its platform as of publication time.
11 Comments
The EU’s findings are a stark reminder of the damage that disinformation can cause. Platforms like X must prioritize fact-checking, content moderation, and media literacy efforts to protect their users.
This report highlights the need for greater transparency and accountability from social media platforms. The public deserves to know how these companies are addressing disinformation and protecting users.
This report highlights the ongoing challenges in addressing disinformation on social media platforms. Strict content moderation and enforcement of platform rules will be critical to curbing the spread of misleading information.
This report underscores the importance of media literacy and critical thinking when consuming information online. Users need to be vigilant about verifying sources and fact-checking claims, regardless of the platform.
Absolutely. Empowering people to identify disinformation is crucial, as platforms alone cannot fully control the spread of misleading content.
While the report focuses on X, the issue of disinformation is endemic across social media. A multi-pronged approach involving regulation, user education, and platform reforms is needed to address this challenge.
The EU’s findings are concerning, but not entirely surprising. Platforms like X have struggled to effectively police the spread of false and inflammatory content. Stronger regulation and accountability measures are clearly needed.
I agree. Platforms must be held responsible for the content they host and the impact it can have, especially around sensitive political issues.
While it’s troubling to see X leading in disinformation, I’m not surprised. The platform’s lax moderation policies have long been a concern. Stronger regulation and enforcement are needed to protect the public.
I hope the EU’s warning to X will prompt real action. Disinformation can have serious real-world impacts, and platforms must take greater responsibility for the content they amplify.
The EU’s findings are a wake-up call for X and other social media companies. They must do more to curb the spread of false and misleading information, or face serious consequences.