TikTok Algorithm Shows Strong Bias Toward Far-Right Content During European Elections
Days before Poland’s recent presidential election, an investigation by Global Witness revealed that TikTok’s algorithm showed new users five times as much content supporting the nationalist-right candidate Karol Nawrocki as content supporting the centrist candidate Rafal Trzaskowski. This finding follows similar patterns uncovered in Romania, where the platform’s algorithm served nearly three times more far-right content than other political material, and in Germany, where the recommendation systems of both TikTok and X exhibited comparable biases.
These investigations, conducted during a global election megacycle, highlight the persistent failure of major social media platforms to address extremist content online despite numerous studies emphasizing the importance of such action for election integrity. The findings directly contradict the popular far-right narrative claiming that social media platforms have an anti-conservative bias.
Engagement-driven algorithms, or recommendation systems, have been consistently identified as key drivers in the global spread of harmful content including health misinformation, political disinformation, and hate speech. YouTube’s algorithm, for instance, is responsible for approximately 700 million hours of daily watch time – roughly 70% of the platform’s total – significantly influencing viewers and potentially fueling radicalization and social division.
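To make the mechanism concrete, here is a minimal sketch of an engagement-optimized ranking rule. All names, numbers, and the scoring function are invented for illustration; real platform systems are proprietary and vastly more complex. The point is that nothing in such an objective mentions politics at all: if emotionally charged content reliably earns more engagement, it rises to the top anyway.

```python
# Minimal sketch of engagement-optimized ranking (hypothetical fields
# and values; not any platform's actual system).
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_watch_seconds: float  # model's engagement estimate
    outrage_score: float            # 0..1 proxy for emotional intensity

def engagement_score(post: Post) -> float:
    # The objective rewards only predicted engagement; emotional
    # intensity is never consulted, yet it wins indirectly whenever
    # it correlates with longer watch times.
    return post.predicted_watch_seconds

feed = [
    Post("Calm policy explainer", predicted_watch_seconds=20, outrage_score=0.1),
    Post("Outrage-bait clip", predicted_watch_seconds=95, outrage_score=0.9),
]
for post in sorted(feed, key=engagement_score, reverse=True):
    print(post.title)  # the outrage-bait clip ranks first
```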
Internal documents from Meta have confirmed that its platforms’ fundamental mechanisms deliberately incentivize angry and polarizing content despite the company acknowledging the harmful effects. Similarly, TikTok’s algorithms have been implicated in actively promoting radicalization, polarization, and extremism.
“The algorithmic media ecosystem often prioritizes low-quality information like disinformation, misinformation, fake news, sensational stuff, rumors, hoaxes,” explained Eve Chiu, CEO of the Taiwan FactCheck Center, describing how algorithms consistently favor sensational and polarizing content during election cycles.
While conservatives have long argued that social media platforms discriminate against their viewpoints, evidence for this claim remains scarce. Earlier this year, Meta CEO Mark Zuckerberg announced the relocation of moderation and safety teams from California to Texas, a move widely perceived as an attempt to appease conservative critics concerned about content censorship.
However, research by Yale SOM’s Tauhid Zaman offers nuance to the debate. While accounts sharing conservative or pro-Trump hashtags were found to be suspended at significantly higher rates than liberal or pro-Biden accounts in 2020, the study also revealed that users associated with conservative content were more likely to share links from low-quality or misinformation-heavy sources, explaining the disproportionate suspensions.
The core issue appears to be how social media algorithms inherently elevate extremist narratives. These recommendation systems, whether on Facebook, X, TikTok, or YouTube, prioritize content that triggers strong emotional reactions – a model that may be profitable but ultimately undermines healthy democratic discourse. Citizens need exposure to diverse perspectives to form well-rounded opinions, yet algorithms often interpret provisional engagement signals as definitive preferences, potentially solidifying nascent opinions prematurely.
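The feedback loop described above can be illustrated with a toy simulation. Everything here is invented for the sketch: the topic labels, the update rule, and the assumption that emotionally charged content is watched slightly longer on average. Even so, it shows how a system that treats each provisional signal as a durable preference lets a small engagement asymmetry compound into a heavily skewed feed.

```python
# Toy simulation of engagement signals hardening into "preferences"
# (purely illustrative; labels and parameters are hypothetical).
import random

random.seed(0)
interest = {"centrist": 0.5, "far_right": 0.5}  # system's belief, not the user's view

def recommend() -> str:
    # Sample proportionally to current belief: small leads compound.
    topics, weights = zip(*interest.items())
    return random.choices(topics, weights=weights)[0]

for _ in range(200):
    topic = recommend()
    # Assumed asymmetry: charged content is watched slightly more often,
    # regardless of the user's settled opinions.
    watched = random.random() < (0.55 if topic == "far_right" else 0.45)
    if watched:
        interest[topic] += 0.05  # one view is treated as a lasting preference

total = sum(interest.values())
print({t: round(w / total, 2) for t, w in interest.items()})
# A modest 55/45 engagement gap ends in a feed dominated by one side.
```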
Regulatory conversations have increasingly focused on algorithmic recommendation engines as the central problem in harmful content dissemination. Meanwhile, platforms have strategically diverted attention away from their diminishing investments in transparency and content moderation, evading accountability for their algorithms’ harmful tendencies.
The self-regulatory approaches employed by major platforms over the past decade – including community guidelines, fact-checking partnerships, and content moderation – have proven insufficient in mitigating significant digital harms such as online radicalization and incitement to violence through hate speech. This has intensified global demands for external regulation, prompting the technology industry to deploy substantial resources in lobbying efforts across various jurisdictions.
A recent report by LobbyControl indicates a significant increase in tech sector lobbying expenditure within the European Union, with the five largest companies accounting for a substantial share of the total. This convergence of lobbying power and market dominance directly challenges democratic principles, as governments become increasingly dependent on private technology companies’ products and services.
For meaningful progress, the conversation must shift toward mandating transparency and accountability, particularly regarding internal recommendation engines currently designed to elevate harmful content under the guise of maximizing user engagement and profit. Without addressing these fundamental algorithmic biases, social media platforms will continue to undermine democratic discourse and potentially influence electoral outcomes across the globe.
8 Comments
Interesting findings on the potential bias of TikTok’s algorithm. It’s concerning to see such imbalances in content recommendations, especially during critical election periods. I wonder what steps these platforms are taking to address extremist content and ensure fairness.
The findings demonstrate the complex challenge of content moderation on social media. Platforms must strike a balance between free speech and curbing the spread of harmful, extremist narratives – especially during critical events like elections. Meaningful reform is long overdue.
The claims of anti-conservative bias seem unfounded based on these findings. The real issue appears to be the amplification of far-right and extremist content through engagement-driven recommendation systems. Social media platforms must do more to address this systemic problem.
This is a concerning trend that deserves more scrutiny. While the platforms may argue their algorithms are neutral, the real-world impacts on elections and democratic processes are anything but. Greater transparency and reform are clearly needed.
Absolutely. These platforms wield enormous influence, so they must be held accountable for the societal impacts of their design choices and recommendation systems.
This is an important issue that goes beyond just political bias. The amplification of misinformation and extremist content through algorithmic design poses a serious threat to public discourse and democratic processes. Robust regulation and independent oversight may be necessary.
This is a complex issue without easy solutions. Platforms must balance free speech with curbing the spread of harmful misinformation. Transparent, ethical algorithms that prioritize reliable information over engagement-driven content could be part of the answer.
Agreed. Greater platform accountability and independent oversight may be needed to prevent algorithmic bias and protect election integrity.