Political consultant Steve Kramer announced Tuesday he will defy a federal court order requiring him to pay $22,500 to three voters for sending AI-generated robocalls that mimicked President Joe Biden’s voice before the New Hampshire presidential primary.
The ruling, issued Friday in a civil lawsuit brought by the League of Women Voters, comes five months after Kramer was acquitted of criminal charges related to voter suppression and impersonating a candidate. The court entered a default judgment after Kramer failed to respond to the civil case.
“I never responded to them because I was already acquitted on 22 counts,” Kramer said in an email, dismissing the lawsuit as “a publicity stunt” that wasted court resources. This defiance extends beyond the civil case—Kramer has also refused to pay a $6 million fine imposed by the Federal Communications Commission for the same incident.
The 56-year-old New Orleans consultant admitted to orchestrating the controversial calls, which reached thousands of New Hampshire Democrats just two days before the state’s January 23 primary. The messages featured an AI-generated voice mimicking Biden that told voters: “It’s important that you save your vote for the November election. Your votes make a difference in November, not this Tuesday.”
Despite Kramer’s criminal acquittal, U.S. District Judge Steven McAuliffe not only ordered the $7,500 payments to each plaintiff but also imposed a nationwide ban on Kramer engaging in similar conduct. Caren Short, director of legal and research at the League of Women Voters, called the ruling a “critical precedent against the weaponization of artificial intelligence in elections.”
Courtney Hostetler of Free Speech for People, which provided legal assistance to the League, warned of further legal action if Kramer refuses to comply. “Mr. Kramer has shown a consistent disregard for the law and the rights of voters,” she said. “His plan to defy the court’s order continues this pattern, and reinforces the importance of the injunction and the damages award.”
During his criminal trial in Belknap County Superior Court in June, Kramer testified that his actions were intended as a “wake-up call” about AI’s potential dangers in political campaigns. He claimed he paid a New Orleans magician $150 to create the recording after becoming concerned about the increasing use of AI in campaigns and the lack of regulations.
“This is going to be my one good deed this year,” Kramer testified, framing his actions as a public service rather than voter suppression.
Kramer’s defense relied partly on the unusual status of New Hampshire’s primary, which he characterized as a “meaningless straw poll.” The primary was held despite the Democratic National Committee’s attempt to dislodge New Hampshire from its traditional first-in-the-nation primary status. Biden did not place his name on the ballot but won as a write-in candidate, and the state’s delegates were ultimately seated at the Democratic National Convention.
The case highlights the growing tensions around AI regulation in politics. Lingo Telecom, the company that transmitted Kramer’s calls, agreed to pay $1 million in a settlement with the FCC in August 2024. However, the regulatory landscape may be shifting. The FCC was developing AI-related rules when Donald Trump won the presidency, but there are indications of a potential move toward lighter regulation under the incoming administration.
Several states have enacted legislation targeting the use of AI-generated content mimicking candidates in political campaigns, but the Trump administration is reportedly considering pressuring states to reduce AI regulations. Supporters of deregulation argue that strict rules could hamper innovation, while critics warn about the risks of allowing AI companies to operate with minimal oversight.
The controversy has prompted broader concerns about AI in elections. On Tuesday, attorneys general from 36 states, including New Hampshire, sent a letter to Congress opposing any federal preemption of state laws addressing AI risks, signaling a looming battle between state and federal authorities over AI regulation in the political sphere.