The future of democracy faces a new threat as artificial intelligence tools become increasingly sophisticated, enabling the creation of deceptive content that could potentially sway elections and undermine public trust in political processes.
For years, experts have warned about the potential dangers of AI-generated “deepfakes” – manipulated media designed to mislead voters. Until recently, these fakes were often unconvincing and expensive to produce, making traditional misinformation tactics more appealing to bad actors.
That technological landscape has dramatically shifted. Today’s generative AI tools can produce convincingly cloned human voices and hyper-realistic images, videos, and audio in seconds at minimal cost. When amplified by powerful social media algorithms, this fabricated content can spread rapidly and target specific audiences with unprecedented precision.
“We’re not prepared for this,” warns A.J. Nash, vice president of intelligence at cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.”
The potential scenarios for election interference are alarming. Voters could receive automated robocalls in a candidate’s voice giving false voting instructions. Audio recordings might surface of a candidate seemingly confessing to crimes or expressing racist views. Video footage could show politicians giving speeches they never delivered. Even local news reports could be faked to claim a candidate has dropped out of the race.
These concerns aren’t merely theoretical. Former President Donald Trump, a 2024 candidate, has already shared AI-generated content with followers on social media, including a manipulated video of CNN host Anderson Cooper created using AI voice-cloning technology.
The Republican National Committee recently released a campaign ad featuring AI-generated images depicting a dystopian future under another Biden presidency, with scenes of Taiwan under attack, economic collapse, and military patrols on American streets. While the RNC acknowledged its use of AI, cybersecurity experts warn that malicious campaigns and foreign adversaries likely won’t be so transparent.
“What happens if an international entity – a cybercriminal or a nation state – impersonates someone? What is the impact? Do we have any recourse?” asks Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas. “We’re going to see a lot more misinformation from international sources.”
Evidence of AI-generated political disinformation is already circulating online ahead of the 2024 election. Examples include a doctored video of President Biden appearing to attack transgender people and fabricated images of Trump’s non-existent mugshot following his Manhattan arraignment.
Legislative efforts to address these concerns are underway. Representative Yvette Clarke (D-N.Y.) has introduced bills that would require campaign advertisements created with AI to be labeled as such and mandate watermarks on synthetic images. Several states have also proposed their own measures to combat deepfakes.
“It’s important that we keep up with the technology,” Clarke told The Associated Press. “We’ve got to set up some guardrails. People can be deceived, and it only takes a split second. People are busy with their lives and they don’t have the time to check every piece of information.”
The political consulting industry is beginning to recognize these dangers. A Washington trade association for political consultants recently condemned deepfakes in political advertising, calling them “a deception” with “no place in legitimate, ethical campaigns.”
Despite these concerns, some campaign strategists see potential benefits in AI technologies. Mike Nellis, CEO of progressive digital agency Authentic, uses ChatGPT daily and has partnered with Higher Ground Labs to develop Quiller, an AI tool designed to write, send, and evaluate fundraising emails for Democratic campaigns.
“The idea is every Democratic strategist, every Democratic candidate will have a copilot in their pocket,” Nellis explained.
As the 2024 election approaches, the race is on to establish effective guardrails and protections against AI misuse while harnessing the technology’s legitimate benefits. The outcome of this technological battle could have profound implications for the health of American democracy and public trust in electoral processes.
9 Comments
The rise of generative AI is a double-edged sword – it enables amazing creative potential, but also dangerous new forms of misinformation. We need to find the right balance to harness the benefits while guarding against the threats.
Deepfakes and synthetic media pose major risks to election integrity. I hope policymakers and tech companies work quickly to stay ahead of bad actors and implement effective solutions before the next presidential race.
Absolutely. Proactive measures are crucial to prevent the spread of AI-fueled disinformation that could sway voter sentiment and decision-making.
This is a complex and multifaceted issue. While the risks of AI-driven disinformation are serious, I hope solutions can be found that don’t excessively restrict the positive potential of these technologies. Thoughtful, balanced approaches will be crucial.
The prospect of AI-generated deepfakes swaying elections is deeply troubling. We need to act quickly to stay ahead of bad actors and implement effective countermeasures. Restoring public trust in our democratic processes is paramount.
This is a very concerning development. AI-generated disinformation could seriously undermine faith in our democratic institutions if not addressed properly. We need robust safeguards and fact-checking mechanisms to protect against these threats.
I’m curious to learn more about the specific technological breakthroughs that have made AI-generated deepfakes so much more convincing and affordable. Staying on top of these advancements will be key to combating this challenge.
AI-powered disinformation is a complex challenge with no easy answers. Addressing it will require innovative solutions and a collaborative effort between policymakers, tech companies, and the public. I’m cautiously optimistic that the right approach can be found.
Disinformation campaigns that leverage AI are extremely concerning. The integrity of our elections must be protected. I hope policymakers can work with tech companies to develop robust safeguards before 2024.