Wisconsin Enacts Landmark Law Against AI Deepfakes and Political Misinformation
Wisconsin has taken decisive action against emerging digital threats by passing comprehensive legislation that targets both nonconsensual intimate images and deceptive political advertising in the age of artificial intelligence. The new law establishes critical protections for individuals while mandating transparency in political communications.
The legislation addresses two distinct but equally concerning issues. First, it criminalizes the creation and distribution of nonconsensual intimate images as a misdemeanor. More significantly, using deepfake technology to intimidate, harass, or coerce individuals now constitutes a felony, punishable by up to 3.5 years in prison and fines reaching $10,000.
This measure comes at a crucial time, as approximately 96% of all deepfake material currently circulating online consists of nonconsensual pornography targeting women. The technology has made it increasingly easy to create convincing fake images that can damage reputations, careers, and mental health.
On the political front, the law requires clear disclosures at both the beginning and end of any political communication that uses AI-generated or manipulated content. The Wisconsin Elections Commission has been tasked with developing specific implementation guidance. Violations of these disclosure requirements can result in fines of up to $1,000 per offense.
“While litigation and interpretation are just beginning, the intent is clear: voters deserve to know what is real,” said Lisa Attonito, Executive Director of the Wisconsin Women’s Fund, an organization monitoring the law’s implementation.
The legislation directly intersects with several priority issues for women’s advocacy groups in the state. Misinformation and disinformation disproportionately impact women, especially those in leadership positions or public life. Deepfakes and other manipulated content can be weaponized to silence, humiliate, and discredit women, distorting narratives about their credibility and lived experiences.
The economic implications are equally concerning. When women are targeted by image-based abuse or misinformation campaigns, they often face serious financial consequences, including job loss, damaged professional reputations, and long-term economic instability. Protection from digital harm is increasingly recognized as fundamental to women’s economic security.
The Wisconsin Women’s Well-Being Index, which tracks metrics on women’s advancement throughout the state, has identified digital misinformation as an emerging threat to progress. Whether aimed at individual women through deepfakes or at women as a group through false narratives, such tactics can undermine decades of advancement.
Legal experts note that Wisconsin’s approach represents one of the most comprehensive state-level attempts to address AI-generated content. Unlike some narrower laws in other states that focus only on political advertising or only on nonconsensual intimate images, Wisconsin’s legislation tackles both issues.
Technology policy specialists have praised the dual focus but caution that enforcement will present significant challenges. The rapid evolution of AI technology means that detection methods must continuously advance to identify increasingly sophisticated deepfakes.
Civil liberties groups have generally supported the law’s intent while expressing concerns about potential First Amendment implications, particularly regarding political speech. These aspects will likely be tested through litigation as the law is implemented.
For citizens navigating this changing landscape, Attonito recommends several practical steps: “Pause before sharing content online. Verify sources and assess credibility. Look for required disclosures on political advertisements that may contain AI-generated content. Support credible journalism, and engage respectfully when correcting falsehoods.”
As Wisconsin’s law takes effect, it joins a growing national conversation about the appropriate limits and regulations for artificial intelligence in public discourse. The state’s approach could serve as a model for other jurisdictions grappling with similar challenges at the intersection of technology, personal privacy, and political communication.
8 Comments
This legislation seems like a positive step in protecting individuals from the harmful effects of deepfakes and political misinformation. Criminalizing nonconsensual deepfake porn is an especially important measure given the disproportionate impact on women.
I agree, the transparency requirements for political ads are also crucial. Deepfakes pose a serious threat to democratic discourse, so this law is an important safeguard.
This law seems like a reasonable attempt to address some serious emerging threats, but there will likely be complex legal and technical challenges in implementation. Protecting against deepfakes is no easy task.
Interesting to see Wisconsin taking the lead on this issue. Deepfakes are a growing concern, so it’s good to see policymakers taking proactive steps to address the risks, both personal and political.
Agreed, this law seems well-designed to balance protecting individual privacy and election integrity. Hopefully other states will follow Wisconsin’s example.
This is an important piece of legislation, but it remains to be seen how effectively it will be enforced. Deepfake technology is advancing rapidly, so staying ahead of the curve will be an ongoing challenge.
That’s a fair point. Enforcement and keeping up with the technology will be critical. Clear disclosure requirements for political ads are a good start, though.
I’m curious to learn more about the specific provisions in the law. What kind of disclosures are required for political ads, and how will compliance be monitored? These details will be important for assessing the law’s real-world impact.