Foreign Actors Are Manipulating Political Discourse on X, and Transparency Features Fall Short
A new transparency feature on social media platform X has confirmed what many experts long suspected: numerous influential political accounts are operated by foreign actors attempting to influence American public discourse. This revelation adds to growing evidence of external forces working to amplify division and spread disinformation throughout the United States.
According to Imran Ahmed, founder and CEO of the Center for Countering Digital Hate (CCDH), this discovery highlights a systemic problem with major social media platforms. In a recent interview with KPBS reporter Amita Sharma, Ahmed described how these platforms “distort the lens through which we see the world,” creating false impressions about public sentiment.
Ahmed cited research surrounding the reaction to Charlie Kirk’s assassination as an example. “Only 1.6% — less than 1 in 50 comments — were people either calling for retaliatory violence or celebrating his death,” he explained. “Which means that 98% — the vast majority of comments — were people just saying, ‘This is horrible. I don’t want this to happen to anyone.'” Yet the algorithms amplified the most inflammatory content, creating a skewed perception of public reaction.
X’s new transparency feature displays an account’s location, username change history, and join date. However, Ahmed dismissed this as “a simulation of transparency” rather than meaningful disclosure. “They’ve decided what they’re going to make transparent, and there’s no way of auditing the effectiveness,” he noted. Shortly after launch, X acknowledged limitations in the feature’s accuracy, undermining its credibility.
What the feature did reveal, according to Ahmed, is that platform owner Elon Musk has “utterly failed” in his promise to eliminate automated bot accounts and foreign interference operations since acquiring the platform.
The issue extends beyond X. Ahmed cited a CCDH study showing Instagram failed to enforce its own rules against abusive content targeting women politicians, with over 90% of reported harassment going unaddressed. The most targeted figure was Republican Congresswoman Marjorie Taylor Greene, who recently cited online abuse as a factor in her decision to leave politics.
“Since then, Instagram has made no real progress in enhancing its ability to both detect and deal with real hate,” Ahmed stated. He pointed to Meta CEO Mark Zuckerberg’s decision to reduce enforcement resources for community standards, resulting in what Ahmed describes as a degradation of safety on the platform.
The CCDH has achieved some regulatory success internationally. Ahmed noted they’ve helped establish statutory transparency requirements for social media companies in the UK and EU, creating accountability mechanisms when platforms cause societal harm. One example he cited was X’s role in amplifying false information that contributed to race riots in the UK by spreading an incorrect claim about the identity of an attacker.
In the United States, Ahmed emphasized the need to reform Section 230 protections that shield platforms from liability. “Social media companies have a special get-out-of-jail-free card when it comes to causing harm,” he explained. “Any business that harms a consumer, that consumer can take legal action against them. You can’t sue a social media company if they harm you or your kids.”
Without access to the raw data behind these platforms’ algorithms, Ahmed recommends a simple approach for average citizens: spend less time on social media. “Put down your phone and go and talk to your other citizens,” he advised. “Social media is inherently, by design, distortive. It seeks to present a world that is completely different to the real world… to keep you addicted, to keep you scared.”
As foreign influence operations continue exploiting social media vulnerabilities, pressure for meaningful transparency and accountability measures continues to build on both sides of the political spectrum.