The Rising Tide of Disinformation: How AI and Foreign Actors Are Shaping Digital News
Fake news has been around for generations, but the digital age has dramatically accelerated its spread and impact. As Americans prepare for the November 2024 presidential election, the threat of disinformation looms larger than ever, with artificial intelligence adding a troubling new dimension to the problem.
During this election cycle, social media platforms have become battlegrounds where AI-generated content has proliferated. False images of political figures carrying manipulated messages have spread widely, while spam accounts flood Facebook feeds with seemingly random content designed purely for engagement. On X (formerly Twitter), fake accounts using stolen images and deepfakes push preferred presidential candidates.
National security officials have identified a more sinister trend: increasing efforts at election interference using AI-generated content from China, Iran, and Russia. Iran stands accused of making threats against former President Donald Trump and spreading disinformation about his campaign. Meanwhile, the Biden administration has seized Kremlin-run websites allegedly designed to influence the U.S. presidential election with false information.
An FBI affidavit refers to these Russian campaigns as “Doppelganger” operations. In response, the Justice Department has unveiled criminal charges against Russian nationals, imposed sanctions on various entities, and seized numerous internet domains. These actions aim to halt the flow of disinformation to American voters and protect U.S. politicians from digital attacks.
This isn’t Russia’s first foray into social media manipulation. The country has built a comprehensive digital barricade that prevents its citizens from accessing information about its war with Ukraine. Cut off from global information sources, Russians must rely on state-approved content, including false narratives that portray Ukraine as the aggressor in the conflict.
The problem extends beyond politics. After Hurricane Helene, the Red Cross had to counter rumors that the federal government was deliberately withholding aid from victims. The misinformation hampered relief efforts, making people reluctant to donate money and supplies.
Looking ahead, Meta’s announcement that it will eliminate fact-checkers in 2025 and replace them with user-generated “community notes” raises concerns that harmful content could increase on both Facebook and Instagram.
Understanding Fake News
Fake news consists of deliberately false articles designed to manipulate readers’ perceptions. While mimicking legitimate news formats, these stories lack credibility and accuracy. Warning signs include unverifiable information, non-expert authors, content not corroborated by other reliable sources, and emotionally manipulative messaging rather than fact-based reporting.
Fake news takes various forms, including clickbait with exaggerated headlines, propaganda designed to harm institutions or groups, imposter content mimicking legitimate news sites, biased reporting that confirms existing beliefs, satirical content that may be mistaken as factual, state-sponsored disinformation, and articles with misleading headlines that distort the actual content.
The harm caused by fake news extends beyond simple misunderstandings. It intensifies social conflict, creates unnecessary controversy, and breeds mistrust in legitimate institutions and media sources.
How Disinformation Spreads
Several factors contribute to the rapid dissemination of disinformation. Social media’s frictionless sharing features allow content to reach exponentially larger audiences with each share. Platform algorithms recommend content based on past preferences and search history, potentially creating filter bubbles. Engagement metrics that prioritize shares and likes over accuracy further amplify sensational content.
Artificial intelligence has dramatically changed the disinformation landscape. AI systems can create realistic fake material tailored to specific audiences, generate messages that test effectively at swaying target demographics, and deploy bots that impersonate human users to spread false information.
Hackers represent another threat vector, occasionally breaching legitimate news sites to plant false stories. Ukrainian officials have reported incidents where government websites were compromised to post fake news about peace treaties. Additionally, paid trolls frequently populate comment sections of reputable articles to sow discord and spread falsehoods.
Misinformation vs. Disinformation: Understanding the Difference
While often used interchangeably, misinformation and disinformation have distinct meanings. Misinformation involves inaccurate information shared without malicious intent—often spread unknowingly by people who believe it to be true. Disinformation, by contrast, is deliberately deceptive, typically created and shared with specific objectives, such as the Russian government’s campaigns to gain public support for its actions in Ukraine.
With these threats growing more sophisticated, digital literacy has become essential. Experts recommend scrutinizing sources, checking author credentials, verifying images, reading beyond headlines, maintaining a critical mindset, recognizing satire, being wary of sponsored content, using fact-checking resources, and watching for signs of AI-generated fakes like unnatural hand positions or inconsistent facial expressions.
As disinformation technologies advance, the battle for truth in our digital information ecosystem becomes increasingly challenging—with potentially profound implications for democracy, public safety, and social cohesion.