The rapid spread of disinformation online has prompted Canadian researchers to deploy artificial intelligence in their fight to protect democratic discourse. According to the Canadian Institute for Advanced Research, AI technology has significantly enhanced the capabilities of its debunking tool, CIPHER, enabling more effective responses to the constant flow of false and misleading claims targeting Canadians.
Brian McQuinn, an associate professor at the University of Regina and one of the project’s lead researchers, explained that while CIPHER currently focuses on analyzing Russian disinformation campaigns, it will soon expand to include Chinese sources. The tool may eventually examine content originating from the United States as well.
“Russia was the main threat targeting Canada most generally,” McQuinn said. “We are now beginning to shift.”
The system works by scanning foreign media sites for dubious claims, which are then verified by human fact-checkers. McQuinn cited a recent example where CIPHER flagged a Russian media outlet for falsely reporting that Alberta is moving toward independence. While separatist movements exist in the province and have reportedly engaged with U.S. officials, no formal separation process is underway.
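To make the "flag first, verify by hand" workflow described above concrete, here is a minimal, hypothetical sketch of such a pipeline. It does not reflect CIPHER's actual architecture; the scorer, threshold, and all names are illustrative assumptions, and a real system would replace the keyword heuristic with a trained classifier.

```python
# Hypothetical sketch of a flag-then-verify pipeline: an automated scorer
# surfaces dubious claims, and every flagged item is routed to a human
# fact-checker. Not CIPHER's implementation; all names are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class Claim:
    source: str          # outlet the claim was scraped from
    text: str            # the claim itself
    score: float = 0.0   # assigned likelihood of being misleading
    flagged: bool = False

def score_claim(claim: Claim) -> Claim:
    # Placeholder scorer: a production system would use a trained model.
    suspicious_markers = ["secede", "secret plan", "collapse imminent"]
    hits = sum(marker in claim.text.lower() for marker in suspicious_markers)
    claim.score = min(1.0, 0.4 * hits)
    return claim

def triage(claims: List[Claim], threshold: float = 0.5) -> List[Claim]:
    # Flag anything above the threshold for human review; nothing is
    # auto-published as a debunk without a fact-checker signing off.
    review_queue = []
    for claim in claims:
        claim = score_claim(claim)
        if claim.score >= threshold:
            claim.flagged = True
            review_queue.append(claim)
    return review_queue

if __name__ == "__main__":
    sample = [
        Claim("example-outlet.ru", "Alberta has a secret plan to secede from Canada."),
        Claim("example-outlet.ru", "The province announced new infrastructure funding."),
    ]
    for item in triage(sample):
        print(f"FLAGGED for human fact-check: [{item.source}] {item.text}")
```

In this sketch only the flagging step is automated; the human fact-checker remains the final arbiter, which matches the division of labour McQuinn describes.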
“Effective disinformation often has kernels of truth in it,” McQuinn noted, highlighting the sophisticated nature of modern propaganda techniques.
CIPHER was launched three years ago following research by McQuinn and his colleagues that uncovered pro-Kremlin social media accounts targeting both far-right and far-left groups in Canada with false narratives about the war in Ukraine. These included unfounded claims that Russia invaded to eliminate a neo-Nazi regime and that Ukraine had pursued nuclear weapons.
According to McQuinn, the overarching objective of these disinformation campaigns is to fracture social cohesion and potentially incite violence. The campaigns become particularly effective when ordinary citizens share misleading content with their personal networks.
“It is essential for China and for Russia, especially, to show that it looks like the Western project is decaying, is falling apart economically, politically, socially,” he explained.
McQuinn also pointed to a concerning trend: the United States is increasingly becoming a primary source of disinformation affecting Canadian audiences. “You have to always remember that most of Canada’s dialogue when it comes to social media is on U.S. platforms,” he said. “We have seen that Canadian news and certain types of Canadian content are being downgraded and throttled within these algorithms.”
While artificial intelligence has contributed to the proliferation of disinformation across social media platforms, the CIPHER project demonstrates how the same technology can be harnessed to combat false narratives. “We are in an AI arms race around disinformation,” McQuinn said.
The researchers aim to eventually transfer CIPHER’s capabilities to government agencies or non-profit organizations. Currently, the tool is used by DisinfoWatch, an organization dedicated to exposing falsehoods to Canadian audiences.
Marcus Kolga, founder of DisinfoWatch, has called for stronger legislation and regulations governing digital media platforms to prevent the spread of misinformation. “Us doing it alone is not sufficient enough. It requires technology and for us to harness existing technologies in order to sort of make up that gap that we have,” Kolga stated.
McQuinn confirmed discussions with government agencies about potential adoption of CIPHER but declined to provide specific details. The Canadian Institute for Advanced Research has received financial support from both federal and Alberta governments for its initiatives.
For individual Canadians navigating the complex information landscape, McQuinn offered practical advice: take a moment to evaluate content before sharing it on social media. “If I’m going to forward something, what am I forwarding?” he said. “The research has shown if you just take like an extra 10 seconds, the amount of disinformation that gets transferred is significantly less.”
As digital information continues to shape public opinion and political discourse, tools like CIPHER represent an important countermeasure in preserving information integrity and democratic values in an increasingly complex media environment.
7 Comments
The example of CIPHER flagging the false claim about Alberta’s independence movement shows the potential of this technology. Identifying and debunking misleading claims in real-time could go a long way in maintaining public trust.
Expanding CIPHER to include Chinese and US-based sources is a smart move, as disinformation can come from a variety of foreign actors. Fact-checking claims through human verification is also crucial.
Agreed, a multi-pronged approach targeting different sources is key. The human oversight element is especially important to ensure the AI doesn’t make mistakes.
This is an important development in the fight against online disinformation. The use of AI to enhance the capabilities of debunking tools like CIPHER is a smart approach, and I’m hopeful it will help protect democratic discourse in Canada.
This is a fascinating development in the fight against online disinformation. AI tools like CIPHER could be a powerful weapon in protecting democratic discourse and ensuring the public has access to accurate information.
While it’s concerning to hear about Russian disinformation campaigns targeting Canada, I’m glad the researchers are being proactive in developing tools like CIPHER to combat this threat. Transparency and accountability are so important online.
As someone who closely follows mining and commodities news, I’m curious to see if CIPHER could eventually be expanded to fact-check claims and narratives in that space. Disinformation can have real-world impacts on markets and investments.