The growing menace of disinformation has emerged as perhaps the most significant threat to democratic governance worldwide; the World Economic Forum’s Global Risks Report 2024 ranked misinformation and disinformation as the top short-term global risk. With more than 60 national elections held around the world in 2024, the deliberate spread of falsehoods has undermined electoral integrity at an unprecedented scale.
The fundamental problem is structural: today’s information ecosystem prioritizes engagement over accuracy. Social media platforms have designed algorithms that promote provocative or alarming content to maximize user engagement and, by extension, profit. This design choice has accelerated societal polarization and reinforced echo chambers where misinformation thrives unchallenged.
The statistics are alarming. Research on social media sharing has found that false political news is roughly 70 percent more likely to be shared than factual reporting, and that fabricated stories reach audiences many times faster than verified information. This imbalance has been dramatically worsened by the rapid evolution of artificial intelligence, which has made the creation of deepfakes and other synthetic media increasingly sophisticated and accessible.
“Pink slime” publications—AI-generated websites masquerading as legitimate local news outlets—are proliferating rapidly. Research suggests that more than half of what appears to be regional digital news coverage may in fact be algorithmically generated content designed to mislead readers.
The geopolitical implications are profound. Nations like Russia have weaponized disinformation as a tool for undermining rival democracies. Earlier this year, the United States imposed sanctions on Moscow’s Centre for Geopolitical Expertise for deploying artificial intelligence to rapidly disseminate false narratives and for creating manipulated video content targeting a U.S. vice-presidential candidate. This organization reportedly has connections to the GRU, Russia’s military intelligence service.
Industry responses have sometimes exacerbated the problem. Meta’s January announcement that it would replace its third-party fact-checking program in the United States with a crowd-sourced system raised serious concerns among information security experts. Critics argue this approach could empower organized groups to shape narratives in their favor, further emboldening bad actors who profit from exposure-based monetization models and inadequate content moderation.
Given that many disinformation campaigns receive state sponsorship, effective responses must be both systemic and global. Experts recommend that governments enforce digital safety and privacy requirements for technology companies. An international regulatory framework modeled after the European Union’s Digital Services Act (DSA) could establish meaningful accountability, with substantial financial penalties for non-compliance.
Under the DSA, very large online platforms face potential fines of up to six percent of their annual global revenue for breaching their obligations. For a platform earning $100 billion a year, that ceiling works out to $6 billion—a significant deterrent given the profit-driven business models that currently dominate the industry.
Combating foreign interference requires dismantling the infrastructure that enables it. This includes cutting off funding pathways, particularly cryptocurrency-based money laundering operations that finance disinformation campaigns. Additionally, platforms frequently used for covert coordination, such as Telegram and Yandex, require greater regulatory scrutiny.
Since disinformation is engineered specifically for psychological manipulation, building psychological resilience is critical. Digital literacy and critical thinking programs represent essential defensive measures. One particularly effective approach is “pre-bunking”—a strategy rooted in psychological inoculation theory that involves warning people about common manipulative tactics before they encounter false narratives.
Contrary to industry objections, these protective measures don’t constitute censorship. Rather, they empower individuals to recognize manipulation techniques like fearmongering and scapegoating, strengthening their ability to evaluate information critically.
Addressing the deliberate degradation of factual discourse requires coordinated effort from governments, private sector entities, and citizens. The false dichotomy that positions regulation of digital harms as inherently opposed to free expression has paralyzed meaningful policy responses for too long.
The preservation of democratic systems now depends on our collective ability to recognize and counteract disinformation so that citizens can make truly informed choices. The urgent imperative is clear: we must demand global accountability from the systems and platforms that facilitate the spread of falsehoods.