A year after the 2024 presidential election, the United States faces a sobering reality: technology-driven misinformation campaigns have become deeply embedded in the democratic process, with experts warning the problem will only intensify in future elections.
“We have had much more volume of misinformation, disinformation grabbing the attention of the electorate,” explains Daniel Trielli, assistant professor of media and democracy at the University of Maryland. “And quickly following through that, we see a professionalization of disinformation… The active use of social media platforms to spread disinformation.”
While technology has long influenced election information flows, the convergence of social media and advanced technologies like generative AI has dramatically escalated both the volume and sophistication of false information campaigns over the past five years.
The 2024 presidential election witnessed AI-generated text messages, deepfake videos of candidates, and bot networks spreading false information—all designed to confuse voters and diminish participation. Foreign actors, particularly Russia and China, played significant roles in these efforts.
According to reports, Russia hired right-wing influencers to spread Kremlin talking points on TikTok, created AI-generated videos alleging ballot fraud, and orchestrated hoax bomb threats. Chinese government actors produced AI content promoting conspiracy theories about the U.S. government and targeted down-ballot races.
Tim Harper, project lead for Elections and Democracy at the Center for Democracy and Technology, notes these attacks aim “to influence, not only individual electoral processes, but to scale it in a way that makes it much more difficult to detect.”
Experts differentiate between misinformation—false information shared unintentionally—and disinformation, which involves coordinated, intentional efforts to spread falsehoods. The latter often exploits existing societal divisions.
“They’re very good at finding niche societal fissures in any civilized government,” says Adam Darrah, vice president of intelligence at cybersecurity firm ZeroFox and former CIA intelligence analyst, referring to foreign adversaries like Russia. “They’re like, ‘Okay, let’s have another meeting today about things we can do, to just like, keep Americans at each other’s throat.’”
The problem extends beyond U.S. elections. Ken Jon Miyachi, founder of deepfake detection tool BitMind, points out that AI-generated content significantly influenced recent elections in India, Taiwan, and Indonesia. In Indonesia, the political party Golkar used AI to digitally resurrect former dictator Suharto, who died in 2008, and have his likeness deliver a political endorsement.
As technology advances, detecting synthetic content becomes increasingly difficult. “In the earliest days of generative AI, fake content was easier to spot,” Miyachi explains. “Everyone knew that an extra finger or unrealistic background meant you were probably looking at a deepfake. But with better technology, generated content is spreading undetected like wildfire.”
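For readers curious what “easier to spot” once meant in practice: beyond visible glitches like extra fingers, early GAN-generated images often carried telltale high-frequency artifacts in their Fourier spectrum, an analysis simple enough to run in a few lines of code. The Python sketch below is a toy illustration of that now-largely-obsolete heuristic, not a description of BitMind’s method; the filename and the cutoff threshold are illustrative assumptions.

```python
# Toy illustration of an early deepfake heuristic: unusual high-frequency
# energy in an image's Fourier spectrum, an artifact of early GAN upsampling.
# Modern generators have largely erased this signal, which is Miyachi's point.
import numpy as np
from PIL import Image

def high_freq_energy_ratio(path: str, cutoff: float = 0.75) -> float:
    """Fraction of spectral energy above a normalized radial frequency cutoff."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    # Power spectrum, shifted so the zero-frequency (DC) term sits at the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    # Radial distance from the spectrum's center, normalized to [0, ~1].
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())

# A ratio far outside the range seen in known-real photos *may* indicate
# synthesis, but the signal is weak and easily destroyed by resizing or
# recompression ('photo.jpg' is a placeholder input).
print(f"high-frequency energy ratio: {high_freq_energy_ratio('photo.jpg'):.4f}")
```

Because such fingerprints are so fragile, current detection efforts lean instead on trained classifiers and on provenance standards that label content at creation, an arms race that the experts quoted here expect the generators to keep winning.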
The evolution of content moderation policies on major platforms has exacerbated the problem. Many platforms relaxed their misinformation policies during the 2024 election, fearing accusations of political bias. Meta ended its third-party fact-checking program and rolled back hate speech rules after Donald Trump’s victory, while X and YouTube scaled back their misinformation-flagging systems.
Looking ahead to the 2026 midterm elections, experts express grave concern about the Trump administration’s rollback of election security measures. The administration has reduced resources for the Cybersecurity and Infrastructure Security Agency (CISA), cut funding for the Election Infrastructure Information Sharing and Analysis Center (EI-ISAC), and downsized the National Counterintelligence and Security Center.
“There are a number of ways across the federal government where resourcing and capacity for cybersecurity and information sharing has been depleted this year,” Harper warns. “All that is to say we’re seeing that AI-based and boosted mis- and disinformation campaigns may take off in a much more serious way in coming years.”
This reduction in federal security resources has already eroded trust among state election officials. In June, after Iranian hackers breached Arizona’s Secretary of State website, Secretary of State Adrian Fontes chose not to report the incident to CISA. Arizona senators later expressed concern that state officials no longer trust federal agencies during cyberattacks.
Harper predicts the 2026 midterms will more closely resemble the 2016 election than 2024, as bad actors may “feel more empowered to meddle” given the withdrawal of federal safeguards. Meanwhile, Miyachi anticipates that disinformation tactics will become even more sophisticated as AI technology continues to advance.
“Bad actors have understood what works and what doesn’t work,” Miyachi cautions. “It will be much more sophisticated going into the 2026 midterms and then the 2028 election.”