Canadian fiddler Ashley MacIsaac is considering legal action against Google after false criminal allegations generated by the tech giant’s AI Overview led to the cancellation of his scheduled performance at Sipekne’katik First Nation in Nova Scotia.
The incident occurred when MacIsaac was set to perform on December 19, but organizers abruptly cancelled the show after discovering online claims that incorrectly linked the musician to serious criminal convictions. These allegations, which appeared in Google search results, falsely stated that MacIsaac had been convicted of internet luring and sexual assault.
“The chief messaged back and said, ‘We can’t have you in our community due to your past criminal convictions,’ and I thought, ‘What are they talking about? I got arrested once for smoking marijuana,’” MacIsaac told CTV News.
Upon investigation, it emerged that Google’s AI Overview had confused the Cape Breton musician with another individual in Atlantic Canada who shares the last name MacIsaac. The search results erroneously claimed that the fiddler had committed multiple offences, including assaulting a woman and attempting to assault a minor. The AI Overview also falsely stated that MacIsaac was listed on the national sex offender registry.
MacIsaac learned of these false allegations approximately a week before his scheduled performance. He promptly filed a report with Google, which subsequently corrected the misinformation in its AI Overview results.
“You are being put into a less secure situation because of a media company — that’s what defamation is,” MacIsaac told The Canadian Press. “If a lawyer wants to take this on (for free) … I would stand up because I’m not the first and I’m sure I won’t be the last.”
The incident has quickly gained attention across Canada, with MacIsaac reporting that several law firms have already reached out offering to represent him pro bono. The musician indicated he lacks the financial resources to fund what could become a lengthy legal battle.
“I want to go on the record to make it clear to people that this is an AI mistake, and that if it comes to it, we will have to go all the way to whatever courts are necessary,” MacIsaac stated.
In response to the controversy, Google Canada spokesperson Wendy Manton issued a statement acknowledging that the platform’s AI results are continuously updated to improve accuracy. She confirmed that the false claims against MacIsaac have been removed.
“When issues arise — like if our features misinterpret web content or miss some context — we use those examples to improve our systems, and may take action under our policies,” Manton explained.
The incident highlights growing concerns about the reliability of AI-generated information and the real-world consequences of algorithmic errors. For MacIsaac, the impact was immediate and significant, damaging his reputation and costing him income from the cancelled performance.
Stuart Knockwood, the Sipekne’katik First Nation’s executive director, has publicly apologized to MacIsaac, confirming that the decision to cancel the show was based entirely on the incorrect information provided by Google’s AI.
“We deeply regret the harm this caused to your reputation and livelihood,” the statement read. “Chief and council value your artistry, contribution to the cultural life of the Maritimes, and your commitment to reconciliation.”
While MacIsaac plans to reschedule his concert at the First Nation community, he indicated he needs time for accurate information to circulate. “I don’t feel comfortable about going there right now because I don’t think the proper information can be disseminated within a week,” he told CBC News.
This case emerges as the Canadian music industry grapples with the broader implications of artificial intelligence. Artists, music companies, and trade organizations across the country are increasingly assessing both the opportunities and dangers presented by AI technologies, with MacIsaac’s experience serving as a stark example of the potential for harmful misinformation.
12 Comments
Disappointing to see a high-profile tech company like Google allow such egregious errors in their AI systems. Fiddler MacIsaac is right to take legal action – false criminal allegations can have devastating impacts, and the responsibility lies with the platform that propagated them.
This is a cautionary tale about the dangers of over-relying on AI without proper safeguards. While the technology has incredible potential, cases like this show how easily it can be misused to spread misinformation and ruin reputations. Kudos to MacIsaac for fighting back.
This is a concerning case of AI-generated misinformation causing real harm. It’s crucial that tech companies take responsibility for the accuracy of their systems and correct errors quickly before they spiral. I hope MacIsaac is able to get justice and prevent this from happening to others.
Absolutely, the spread of false information online can have devastating consequences. Tech firms need rigorous testing and oversight to ensure their AI outputs are factual and avoid unfair reputational damage.
This is an important lesson on the limitations of AI and the need for rigorous human oversight, especially when it comes to information about individuals. I hope MacIsaac is successful in holding Google accountable and that this spurs wider reforms to improve AI accuracy and accountability.
While AI has immense potential, this incident is a sobering reminder that the technology is still fallible and can have serious real-world consequences if not properly managed. MacIsaac is right to challenge Google – AI systems must be held to high standards of accuracy and accountability.
This is an unfortunate example of how even reputable tech companies can struggle to ensure the accuracy of their AI systems. I hope MacIsaac’s case prompts a wider industry reckoning on the need for stronger safeguards and oversight to protect individuals from such damaging misinformation.
Agreed. The proliferation of AI-generated content is a double-edged sword, and cases like this illustrate the critical importance of robust fact-checking and human review to catch errors before they cause real harm.
It’s alarming to see how easily AI-generated misinformation can disrupt lives and livelihoods. MacIsaac’s case highlights the urgent need for better regulation and transparency around the use of these technologies. I wish him the best in his fight to clear his name.
Absolutely. Cases like this demonstrate the vital importance of establishing clear guidelines and safeguards to prevent the misuse of AI, especially when it comes to sensitive personal information. Kudos to MacIsaac for taking a stand.
Mistaken identity issues are always tricky, but for a tech giant like Google to spread such damaging falsehoods is unacceptable. I’m glad MacIsaac is pushing back – these AI systems need to be held accountable for their mistakes, especially when they impact real people’s lives.
Agreed, it’s crucial that AI-powered systems are thoroughly vetted before being used to make consequential decisions. Relying on unproven tech to spread information about individuals is a recipe for disaster.