The growing threat of AI-generated deepfakes in politics has reached alarming levels, as artificial intelligence technology makes it increasingly difficult to distinguish fact from fiction online. This issue came into sharp focus recently when a widely circulated image of former President Donald Trump using a walker proved to be completely fabricated.

“Despite wishing otherwise, I had to tell my friend it was not real,” explains Mario Nicolais, who works with the anti-Trump organization The Lincoln Project. The AI-generated image was designed to make Trump appear older and more frail than he actually is, illustrating how sophisticated these manipulations have become.

What once existed primarily in obscure corners of the dark web has now flooded mainstream social media platforms, creating a crisis of credibility in political discourse. The Lincoln Project has experienced this problem firsthand, as bad actors have targeted the organization’s prominent members.

Rick Wilson, a well-known Lincoln Project contributor with millions of followers across various platforms, recently discovered multiple AI-generated videos mimicking his likeness. These sophisticated fakes featured entire 15- to 30-minute segments of content Wilson never created or recorded.

“Everything about the videos seemed just a little off,” Nicolais notes. “Facial features too smooth; the voice just a pitch high; movements that were sometimes jumpy as if pixelating.” Yet the quality was convincing enough that casual observers—especially those who might play podcasts in the background while working—could easily mistake the content for the real thing.

The rapid proliferation of these deepfakes stands in stark contrast to the sluggish response from major platforms. YouTube, for instance, offers only a standard online complaint form with little room for explanation. “Worse, it gets sent off into an abyss, and whether or not anything comes of it, they do not notify you,” Nicolais says. In the best scenarios, flagged videos might disappear after several weeks—long after the potential damage has been done.

This problem extends well beyond The Lincoln Project. Devin “Legal Eagle” Stone, one of YouTube’s top legal content creators, has publicly expressed his frustration over similar issues. The pattern is clear: anyone who has developed a substantial following becomes a target for AI impersonation.

The personal and financial implications for content creators are significant, but the broader impact on political discourse may be even more concerning. Americans already harbor deep distrust toward politicians and candidates; the introduction of convincing fake content only exacerbates this skepticism.

Ironically, much of this crisis stems from deliberate efforts to undermine trust in traditional news sources. Right-wing media figures have spent decades criticizing “mainstream media,” while Trump amplified these sentiments by labeling critical reporting as “fake news”—a term he appropriated from journalists who were actually working to debunk conspiracy theories circulated by his supporters.

As audiences migrated away from established news outlets to social media platforms, the guardrails against misinformation began to collapse. Initially, platforms invested heavily in combating false information. More recently, however, many have retreated from these efforts after facing accusations of “censorship.”

“Now we find ourselves drowning in a sea of inaccuracy and misleading posts,” Nicolais warns. Bad actors—whether motivated by profit or political manipulation—operate with virtually no accountability. Recent research from MIT compounds these concerns, finding that AI chatbots can be more persuasive than political advertisements while simultaneously spreading misinformation.

There are some encouraging developments. Colorado’s Secretary of State’s office has begun acknowledging the danger of deepfakes and implementing protective measures. But until comprehensive solutions emerge, voters will continue facing an increasingly complex information landscape with fewer reliable guides to help navigate it.

As election cycles approach, the ability to distinguish between authentic and fabricated content will become an essential skill for an informed electorate—one that many Americans are currently ill-equipped to exercise.


