Social media platforms are experiencing unprecedented levels of manipulation and misinformation, with multiple studies documenting alarming rates of artificial engagement. The problem appears particularly acute on X (formerly Twitter), where bot activity has surged dramatically over the past year.
A comprehensive study conducted by Queensland University of Technology researchers identified more than 1,300 bot accounts actively spreading disinformation on X during the 2023 Republican primary debates. This automated activity significantly exceeds the typical bot presence observed on other major platforms such as TikTok, Facebook, and Instagram, suggesting X has become a preferred platform for coordinated inauthentic behavior.
The scale of manipulation becomes even more evident during high-profile events. According to cybersecurity firm CHEQ, up to 76% of X’s traffic during the February Super Bowl originated from fake accounts. This finding raises serious concerns about the integrity of engagement metrics, potentially inflating advertising view statistics and misleading both users and advertisers about genuine participation levels.
Political discussions have emerged as a prime target for manipulation campaigns. Research from Cyabra revealed that when Elon Musk publicly positioned himself against Vice President Kamala Harris on the platform, more than 40% of the resulting activity came from inauthentic accounts rather than genuine user engagement. This pattern suggests coordinated efforts to amplify certain political narratives while creating an illusion of widespread public support.
Industry analysts point to a significant deterioration in platform integrity following Musk’s acquisition of Twitter in late 2022. Several policy changes have contributed to this decline, including the mass reinstatement of previously banned accounts, the monetization of verification status through the X Premium subscription model, and the systematic dismantling of misinformation safeguards previously established by the platform.
The cumulative effect of these changes has been dramatic. As of May 2024, bot activity on the platform has reportedly increased tenfold compared to pre-acquisition levels, according to social media analytics experts. This proliferation of automated accounts has fundamentally altered the information ecosystem on what remains one of the world’s most influential communication platforms.
Further complicating the situation is evidence suggesting potential algorithm manipulation beginning in July 2024. Research indicates unexplained deviations in engagement metrics that appear to systematically favor right-wing accounts and content. These algorithmic shifts notably coincided with Musk’s public endorsement of former President Donald Trump, raising questions about platform neutrality.
Social media researchers emphasize that this trend represents a broader challenge across digital platforms. While X appears to be experiencing the most severe issues, similar patterns of information manipulation are emerging across the social media landscape, though at varying degrees of intensity.
For everyday users, navigating this increasingly artificial information environment requires heightened vigilance. Media literacy experts recommend several protective strategies: critically evaluating sources before sharing content, maintaining awareness of potential algorithmic biases, cross-checking information across multiple reputable sources, and utilizing fact-checking tools to verify the authenticity of controversial claims.
Platform governance experts note that addressing these systemic issues will likely require a combination of regulatory pressure, technological solutions, and renewed commitment to platform integrity from social media companies themselves. Without coordinated intervention, the gap between artificial and authentic engagement may continue to widen, further undermining public discourse.
As social media platforms continue evolving, the challenge of distinguishing genuine public sentiment from artificially amplified messaging represents one of the most significant threats to informed democratic participation in the digital age.
14 Comments
Bots and coordinated campaigns that undermine the integrity of social media platforms are a major concern. Platforms must take decisive action to restore trust and transparency.
I share your concerns. Platforms have a duty to their users to ensure a level playing field and to prevent their services from being weaponized.
This is a concerning trend. Social media platforms need to do more to combat the rise of bot activity and misinformation. Accurate and authentic engagement metrics are crucial for users and advertisers alike.
I agree. Platforms must improve their detection and removal of fake accounts to preserve the integrity of their platforms.
The scale of manipulation on X is alarming. Bot activity surging during high-profile events like the Super Bowl suggests a serious integrity issue that needs to be addressed.
Absolutely. Platforms must be transparent about the extent of bot activity and take strong action to limit its impact on user engagement.
The high levels of bot activity and fake engagement during major events like the Super Bowl are really troubling. Platforms need to do more to identify and remove these manipulative accounts.
Absolutely. Accurate metrics are essential for both users and advertisers. Platforms must improve their detection and enforcement capabilities.
This research highlights the urgent need for social media platforms to address the growing problem of coordinated disinformation campaigns. Protecting the authenticity of online dialogue should be a top priority.
I couldn’t agree more. Platforms must be held accountable for allowing their services to be exploited in this way. Decisive action is required to restore trust and transparency.
This report highlights the urgent need for reforms to social media platform governance and content moderation. Protecting the authenticity of user engagement should be a top priority.
Definitely. Platforms must be held accountable for allowing their services to be exploited for the spread of disinformation and inauthentic activity.
Coordinated disinformation campaigns on social media are a major threat to informed discourse. Platforms need to invest heavily in tools and policies to detect and disrupt these networks.
Agreed. The integrity of online dialogue is at stake, and platforms have a responsibility to their users to combat these manipulative tactics.