Massachusetts lawmakers are set to address growing concern over artificial intelligence in political campaigns with new legislation aimed at curbing AI-generated misinformation during elections.
The Massachusetts House plans to vote Wednesday on two bills targeting deceptive AI content in political advertising. The legislation comes just one week after the Massachusetts State House News Service reported on a controversial AI-assisted political advertisement from gubernatorial candidate Brian Shortsleeve.
The primary bill, a redraft of legislation originally filed by Representative Tricia Farley-Bouvier, would prohibit candidates from distributing “materially deceptive audio or visual media” with malicious intent to harm a candidate’s reputation or mislead voters about election procedures. The prohibition would apply to content distributed within 90 days of an election.
“As artificial intelligence continues to reshape our economy and many aspects of our daily lives, lawmakers have a responsibility to ensure that AI does not further the spread of misinformation in our politics,” said House Speaker Ron Mariano and House Ways and Means Chair Aaron Michlewitz in a joint statement.
The proposed legislation includes key exemptions, notably for content that constitutes satire or parody. This exception has drawn particular attention following the Shortsleeve campaign ad, which his team characterized as parody. The bill would also protect news organizations that air or report on such advertisements, provided they acknowledge questions about the content’s authenticity.
A second bill, originally filed by Minority Leader Brad Jones and subsequently redrafted by the Ways and Means Committee, addresses “synthetic media” specifically designed to influence voting. This legislation would require clear disclosures at both the beginning and end of AI-generated content used in political campaigns, explicitly stating that artificial intelligence was used in its creation.
The bill would establish penalties of up to $1,000 for violations and would formally define “generative artificial intelligence” in Massachusetts law for the first time.
These legislative efforts reflect growing national concern about AI’s role in political discourse. As generative AI technology has become more sophisticated and accessible, election officials and lawmakers across the country have expressed alarm about its potential to create convincing but fabricated content that could mislead voters.
The timing of this legislation is significant as Massachusetts prepares for its 2026 election cycle, which features several high-profile state races, among them the gubernatorial contest in which Shortsleeve is a candidate.
Massachusetts joins several other states considering or implementing regulations on AI in political advertising. California enacted similar legislation last year, while federal proposals have stalled in Congress amid broader debates about regulating artificial intelligence.
Media experts note that the challenge for lawmakers is balancing legitimate concerns about election integrity with free speech protections and the practical difficulties of enforcing restrictions on rapidly evolving technology.
The Massachusetts bills represent a targeted approach by focusing specifically on deceptive content created with malicious intent rather than attempting to regulate all AI-generated political content. However, questions remain about how authorities will determine what constitutes “materially deceptive” content or distinguish between genuine parody and deliberate deception.
If passed, the legislation would place Massachusetts among the states with comprehensive rules governing AI in political campaigns ahead of the 2026 election cycle. The House vote scheduled for Wednesday will determine whether these proposals advance to the Senate for further consideration.
As AI technology continues to evolve, these legislative efforts highlight the growing tension between technological innovation and electoral integrity that lawmakers across the country are increasingly being forced to address.
10 Comments
Interesting legislation targeting AI-generated political misinformation. Seems like a necessary step to protect the integrity of elections in the digital age. Curious to see how it’s implemented and enforced in practice.
Absolutely, regulating the use of AI in political ads is critical. The potential for abuse and manipulation is very concerning.
I’m skeptical about the effectiveness of this legislation. Regulating AI-based content is notoriously difficult, and there are concerns about potential overreach. Hope they find the right balance.
That’s a fair point. Implementing this type of law will require nuance and careful consideration to avoid unintended consequences.
As AI capabilities continue to advance, this type of legislation will likely become more common. Curious to see how it plays out in Massachusetts and if other states follow suit.
Agreed. The spread of misinformation is a growing concern, and lawmakers will need to stay vigilant in addressing the challenges posed by emerging technologies.
It’s good to see lawmakers taking the threat of AI-driven misinformation seriously. Proactive steps like this can help strengthen the integrity of our democratic process.
Absolutely. Maintaining public trust in elections is crucial, and this legislation seems like a step in the right direction.
This is an important issue that needs to be addressed. Misinformation can have serious consequences for the democratic process. Glad to see Massachusetts taking action on this.
Agreed. Tackling AI-generated misinformation is a complex challenge, but crucial for maintaining trust in our elections.