Canada Braces for AI Deepfake Threats with U.S. Emerging as Major Concern
Canadian officials are expressing heightened alarm about increasingly sophisticated AI-generated content affecting elections, with researchers warning the country is fast approaching a point at which distinguishing real content from fake will be nearly impossible.
“We are approaching that place very quickly,” said Brian McQuinn, associate professor at the University of Regina and co-director of the Centre for Artificial Intelligence, Data and Conflict, during a recent assessment of the growing threat.
McQuinn warned that the United States is rapidly becoming a primary source of such deceptive content—a threat that could intensify during potential future independence movements in Quebec and Alberta, regions already drawing attention from U.S. government officials and media.
“We are 100 percent guaranteed to be getting deepfakes originating from the U.S. administration and its proxies, without question,” McQuinn stated. “We already have, and it’s just the question of the volume that’s coming.”
The concerns emerged during a House of Commons committee hearing on foreign election interference Tuesday, where Prime Minister Mark Carney’s national security advisor Nathalie Drouin affirmed Canada’s expectation that the U.S., like all foreign nations, would refrain from interfering in Canadian domestic affairs.
Deputy Foreign Affairs Minister David Morrison, who serves alongside Drouin on the Critical Election Incident Public Protocol Panel, acknowledged the government’s serious concerns about artificial intelligence. “I do know that the government is very concerned about AI and the potentially pernicious effects,” Morrison said, though he stopped short of endorsing content labeling requirements, noting the challenges of positioning government as an arbiter of truth.
The federal government is currently developing legislation addressing online harms and AI-related privacy issues, though details remain unclear about how the bill might combat disinformation.
Drouin highlighted that Justice Marie-Josée Hogue’s public inquiry into foreign interference concluded that disinformation represents the greatest threat to Canadian democracy, particularly with the emergence of generative AI technologies. Addressing this threat is “an endless, ongoing job,” Drouin emphasized. “It never ends.”
In response, the Privy Council Office has begun providing information sessions to parliamentarians about deepfakes, with additional sessions planned for political parties in coming weeks.
Experts Call for Proactive Approach
Security experts argue the briefings are long overdue and are calling for a more comprehensive educational effort aimed at politicians, their staff and the general public.
“There should be annual training, not just on deepfakes and disinformation, but foreign interference altogether,” said Marcus Kolga, senior fellow at the Macdonald-Laurier Institute and founder of DisinfoWatch. “This needs leadership. Right now, I’m not seeing that leadership, but we desperately need it because all of us can see what is coming.”
Both Kolga and McQuinn agree there is “no doubt” that official U.S. government channels, including President Donald Trump himself, represent a growing source of manipulated content targeting Canada.
“The trajectory is rather clear,” Kolga said. “I think that we need to anticipate that that’s going to happen. Reacting to it after it happens isn’t all that helpful—we need to be preparing at this time.”
U.S. Emerging as Significant Threat Source
While Morrison noted that last year’s federal election didn’t experience significant AI interference, he warned that “our adversaries in this space are continually evolving their tactics, so it’s only a matter of time, and we do need to be very vigilant.”
The Communications Security Establishment and Canadian Centre for Cyber Security have issued similar warnings about hostile foreign actors increasingly using AI against “voters, politicians, public figures, and electoral institutions” over the next two years.
McQuinn pointed out that disinformation targeting Canadians spreads primarily through platforms under American ownership, including X, Facebook and, now, TikTok, which creates regulatory challenges. Attempts by European and British governments to regulate content on these platforms have met resistance from both the companies and the Trump administration.
What distinguishes the current environment, researchers say, is the direct involvement of the Trump administration in spreading misinformation, including AI deepfakes. Examples range from clearly artificial content—like Trump sharing images of himself with a penguin in Greenland—to more subtle manipulations, such as the White House allegedly altering a photo of an immigration protester to make her appear tearful.
“The present U.S. administration is the only western country that we know of that on a regular basis is publishing or sharing or promoting obvious fakes and deepfakes, at a level that has never been seen by a western government before,” McQuinn said, comparing the strategy to that employed by Russia, China, and groups like the Taliban.
McQuinn’s research suggests 83 percent of disinformation is shared by average Canadians who don’t immediately recognize fake content, often passing along materials that align with their worldview without thorough examination.
Trump's increasingly frequent sharing of AI-generated content imagining U.S. control of Canada, along with support from U.S. administration figures for Alberta independence movements, has researchers particularly concerned about future scenarios.
“My real concern is that when Donald Trump does order the U.S. government to start supporting some of those narratives and starts actually engaging in state disinformation, in terms of Canada’s unity—when that happens, then we’re in real trouble,” Kolga warned.
13 Comments
Deepfakes could have serious consequences for the mining and commodities sector if used to spread misinformation. We need robust verification systems to combat this threat.
Absolutely, the potential for malicious actors to spread false information about companies, projects, and market conditions is very concerning. Vigilance and transparency will be key.
This issue highlights the need for greater digital literacy and critical thinking skills. Consumers of information, whether in the mining sector or elsewhere, must learn to identify and reject fake content.
Deepfakes pose a serious threat to the credibility of information, which is essential for efficient and well-functioning commodity markets. This is a challenge the whole economy must grapple with.
Hmm, it’s quite alarming to see how quickly AI-generated content can blur the lines between truth and fiction. Protecting the integrity of elections and public discourse is crucial.
I agree, this is a major challenge that governments and tech companies need to work on urgently. Maintaining trust in information sources is vital for a healthy democracy.
This is a serious concern. The ability to create convincing deepfakes is a threat to democracy and trust in information. We need better safeguards and education to combat this.
The threat of deepfakes is not just limited to politics – it could also impact commodity markets and even safety in the mining industry if critical information is falsified. Regulators must act.
The mining and energy sectors are particularly vulnerable to the impacts of AI-generated disinformation. Developing robust authentication methods will be crucial to maintaining public trust.
As an engineer working in the mining industry, I’m deeply worried about the potential for deepfakes to compromise safety-critical information. We must find ways to ensure the integrity of technical data.
As an investor in mining and energy stocks, I’m worried about the impact that AI-generated deepfakes could have on market information and decision-making. We need to stay vigilant.
This is a complex issue with no easy solutions. Balancing free speech and preventing the spread of disinformation will require innovative approaches and collaboration across sectors.
Fascinating and concerning developments. I wonder how the mining industry and commodity traders will adapt to combat the spread of AI-generated misinformation. Transparency and verification will be key.