After a year marked by several experimental uses of artificial intelligence in political campaigning, experts are warning that AI-driven misinformation could become a significant threat in the upcoming 2026 midterm elections.

The 2025 off-year elections saw multiple candidates employing AI in controversial ways. In Virginia, Republican lieutenant governor candidate John Reid staged a debate with an AI-generated version of his Democratic opponent Ghazala Hashmi after she declined his debate requests. Hashmi ultimately won the race despite this unusual tactic.

In New York, former Governor Andrew Cuomo briefly published and then removed a deepfake advertisement targeting his opponent Zohran Mamdani with racist stereotypes. Mamdani went on to win the New York City mayoral race despite this incident.

Utah’s Lieutenant Governor Deirdre Henderson also confronted AI misuse when she warned voters about fake election results circulating online before polls had even closed.

“We’ve only seen the tip of the iceberg when it comes to AI’s impact on elections,” said Isabel Linzer, a policy analyst at the Center for Democracy and Technology (CDT). “The tech is getting better and politicians—and bad actors—are getting more comfortable using it. Anyone who thought the danger had passed because last year’s U.S. election avoided any major AI incidents needs to wake up.”

State legislatures have been actively responding to these emerging threats. According to the National Conference of State Legislatures (NCSL), 26 states have already enacted laws regulating political deepfakes, either banning them outright or requiring clear disclosure when such content is used in campaigns.

Chelsea Canada, a program principal at NCSL, identified regulating deepfakes as a “huge trend” at the state level. Speaking at the National Association of State Chief Information Officers’ annual conference in Denver earlier this year, she noted that lawmakers are increasingly focused on both immediate concerns and potential long-term effects of AI in elections.

Recent research from CDT highlighted multiple risks associated with generative AI in political campaigns, including the amplification of disinformation, facilitated foreign interference, and automated voter suppression. While the technology offers potential benefits for data analysis, drafting communications, and debate preparation, researchers warn the risks significantly outweigh these advantages.

Tim Harper, CDT’s senior policy analyst for elections and democracy and co-author of the report, explained that self-regulation based on anticipated voter backlash was the primary constraint on AI misuse in 2024. “What we’re seeing is that those norms, those self-imposed beliefs that voters would penalize candidates are crumbling,” Harper said.

He pointed to various AI-generated videos shared on social media by the White House and President Donald Trump, including content that denigrated protesters and political opponents. While some condemned these materials, the response has been far from universal.

The regulatory approach has focused primarily on transparency rather than prohibition. Harper emphasized that legal frameworks represent just one piece of a complex puzzle: “It’s not purely a question of law, but also a question of building societal resilience.”

Harper also noted that relying on campaigns to exercise good judgment presents challenges, as the incentives for provocative content remain strong while disincentives are diminishing. Social media and AI companies could potentially intervene, but Harper observed that “the political incentives for the companies to act in this space are not strong right now.”

With high-profile gubernatorial races on the horizon in 2026, concerns about AI-driven deception continue to grow. As the guardrails of self-restraint weaken and regulatory frameworks focus primarily on disclosure rather than prevention, experts predict an escalation of AI misuse in political campaigns.

“The norms that we saw in 2024 definitely are beginning to erode, and that’s happening in a bipartisan way right now,” Harper concluded. “We expect this to continue to escalate into 2026.”



© 2025 Disinformation Commission LLC. All rights reserved.