In the wake of the 2024 presidential election, the United States faces a sobering new reality: technology-enhanced misinformation campaigns have become a permanent fixture in the democratic process, according to experts studying the intersection of technology and elections.

Daniel Trielli, an assistant professor of media and democracy at the University of Maryland, notes that while technology has always influenced information flow during elections, recent developments have dramatically changed the landscape. The early 2000s saw everyday citizens gain publishing power through the internet, but social media platforms and generative artificial intelligence have accelerated the problem to unprecedented levels in the past five years.

“We have had much more volume of misinformation, disinformation grabbing the attention of the electorate,” Trielli explained. “And quickly following through that, we see a professionalization of disinformation… The active use of these social media platforms to spread disinformation.”

This professionalization manifested in multiple forms during the 2024 election: bot networks spreading false information, AI-generated text messages, and synthetic photos and videos of candidates designed to confuse voters and foster apathy. As technology continues to advance, experts warn that the 2026 midterm elections will likely face even greater challenges, particularly as the Trump administration dismantles security-focused programs.

Tim Harper, project lead for Elections and Democracy at the Center for Democracy and Technology, highlighted the sophisticated nature of these operations: “We’ve seen reporting that the goal of those sorts of attacks is to seek to influence not only individual electoral processes but to scale it in a way that makes it much more difficult to detect.”

Experts differentiate between misinformation—false information shared without malicious intent—and disinformation, which involves coordinated efforts to spread lies for political advantage. While technology makes disinformation easier to produce and distribute, Trielli points out that the problem wouldn’t exist without an audience primed to believe it.

Adam Darrah, vice president of intelligence at cybersecurity platform ZeroFox and former CIA intelligence analyst, observed that much of the public unwittingly participated in spreading misinformation during the 2024 election. “A lot of misinformation plays to longstanding tropes or stereotypes,” Darrah said, adding that foreign adversaries like Russia excel at exploiting societal divisions.

“They’re very good at finding niche societal fissures in any civilized government,” Darrah explained. “They’re like, ‘Okay, let’s have another meeting today about things we can do to just keep Americans at each other’s throats.'”

The 2024 presidential election saw Russia reportedly hiring right-wing influencers to spread Kremlin messaging on TikTok, creating AI-generated videos alleging election fraud, and orchestrating hoax bomb threats. China also produced AI-generated content promoting conspiracy theories about the U.S. government and targeted down-ballot races.

But the problem extends globally. Ken Jon Miyachi, founder of deepfake detection tool BitMind, noted that AI-generated content significantly impacted elections in India, Taiwan, and Indonesia, where one political party used AI to resurrect deceased dictator Suharto for endorsements.

“I think it’s more important than ever, especially with the midterms coming up and then the next election cycles, and even just world conflict, world news,” Miyachi said. “You really need a more proactive, real-time strategy to be able to combat misinformation and identify it.”

Changing content moderation policies on social media platforms compounded the problem during the 2024 election. Many platforms relaxed their standards around election misinformation, with Meta ending its third-party fact-checking program and loosening its hate speech policies after Trump’s victory. The Trump administration has framed misinformation identification as suppression of conservative speech.

Looking ahead to 2026, experts express concern about the administration’s reduction of cyber defenses. The Office of the Director of National Intelligence is scaling back counterintelligence operations, the White House is downsizing the Cybersecurity and Infrastructure Security Agency (CISA), and funding has been cut for the Elections Information Sharing and Analysis Center.

“There are a number of ways across the federal government where resourcing and capacity for cybersecurity and information sharing has been depleted this year,” Harper said. “All that is to say we’re seeing that AI-based and boosted mis- and disinformation campaigns may take off in a much more serious way in coming years.”

The consequences are already visible. When Iran successfully hacked Arizona’s Secretary of State website in June, changing candidate photos to images of Ayatollah Khomeini, Secretary of State Adrian Fontes didn’t report the incident to CISA. Arizona senators later expressed concern about state officials’ diminishing trust in federal agencies.

While some states have passed laws regulating AI in elections, Miyachi argues that digital misinformation requires global coordination. Harper believes the 2026 midterms may resemble the 2016 election more than 2024, as the withdrawal of federal resources could embolden malicious actors.

“Bad actors have understood what works and what doesn’t work,” Miyachi warned. “It will be much more sophisticated going into the 2026 midterms and then the 2028 election.”


