As regulators worldwide grapple with the rapid proliferation of AI-generated deepfakes, policymakers face critical decisions about how to protect electoral processes without stifling free expression. The challenge has become increasingly urgent as sophisticated artificial intelligence makes it easier to create convincing fake videos, images, and audio that can mislead voters and potentially undermine democratic institutions.
Several fundamental questions should guide any regulatory approach, experts say. These include defining clear goals for regulation, determining which types of content should be covered, establishing appropriate enforcement mechanisms, and deciding who should be subject to these rules.
The most compelling rationale for regulation is the preservation of an informed electorate. As the Supreme Court noted in the landmark 1976 campaign finance case Buckley v. Valeo, “the ability of the citizenry to make informed choices among candidates for office is essential” to a functioning democracy. Deepfakes of candidates or events can directly threaten this fundamental interest.
Safeguarding the electoral process itself represents another powerful justification. Synthetic media that misleads voters about when, where, and how to cast their ballots—or that falsely depicts fraud to undermine confidence in election results—can have devastating consequences. The January 6, 2021, attack on the U.S. Capitol stands as a stark reminder of how electoral disinformation can fuel violence.
Protection of candidates and election workers provides a third rationale, though this must be balanced against the need for robust political discourse. While public figures necessarily accept greater scrutiny, the targeted harassment of election officials has reached alarming levels in recent years. AI-generated content could intensify these threats.
The scope of regulation presents another complex challenge. Some approaches target all “materially deceptive media” regardless of how they were created, while others focus specifically on synthetic content produced using artificial intelligence. A California law passed in 2019, for example, regulates manipulated candidate images distributed within 60 days of an election when intended to deceive voters.
Most experts recommend covering images, videos, and audio, as any of these formats can effectively mislead the public. Text-based synthetic content presents additional considerations, with some states like New York including AI-generated text in proposed regulations.
For regulatory approaches, policymakers have generally pursued two strategies: mandating disclosure or banning certain content outright. Disclosure requirements—such as disclaimers identifying AI-generated content—tend to face fewer constitutional challenges since they don’t prevent speech. Several states and proposed federal legislation like the AI Disclosure Act focus on this approach.
However, some jurisdictions have implemented targeted prohibitions. Texas and Minnesota have enacted bans on certain categories of deepfakes intended to influence elections, with these restrictions limited to specific pre-election windows (30 days in Texas, 90 days in Minnesota). Such prohibitions may be justified for content with little redeeming value, such as deliberate attempts to suppress voting.
The question of who should be regulated remains equally important. While most proposed regulations target candidates, political action committees, and others creating and distributing election-related deepfakes, some experts advocate for platform accountability as well. The bipartisan Honest Ads Act provides one potential model, requiring major online platforms to maintain public records of political ad purchases and ensure appropriate disclaimers.
As elections worldwide face this emerging threat, the challenge for policymakers is clear: develop targeted responses that mitigate harm without unduly restricting legitimate expression. While addressing synthetic media is crucial, experts caution that deepfakes ultimately represent a “threat amplifier” for existing challenges to electoral integrity rather than an entirely new problem.
With major elections approaching in dozens of countries in 2024, the urgency of thoughtful regulation has never been greater. Finding the right balance between protecting democratic processes and preserving free speech will be essential as societies navigate this rapidly evolving technological landscape.