Canadian Researchers Develop AI Tool to Combat Online Disinformation
Researchers at the Canadian Institute for Advanced Research have developed an enhanced artificial intelligence system designed to combat the growing threat of online disinformation targeting Canadians. The tool, called CIPHER, now employs AI technology to more effectively identify and debunk false and misleading claims circulating online.
Brian McQuinn, an associate professor at the University of Regina and one of the project’s leads, explained that the technology currently focuses on analyzing Russian disinformation campaigns but will soon expand to include content in Chinese languages, with potential to monitor misleading information originating from the United States as well.
“Russia was the main threat targeting Canada most generally,” McQuinn said in a recent interview. “We are now beginning to shift.”
The system works by scanning foreign media sites for dubious claims, which are then verified by human fact-checkers. McQuinn cited a recent example in which CIPHER flagged a Russian media outlet falsely reporting that Alberta is moving toward independence, a claim that takes real separatist activity in the province and misrepresents it as an official independence process.
CIPHER was launched three years ago following research that uncovered pro-Kremlin social media accounts targeting both far-right and far-left groups in Canada with false narratives about the war in Ukraine. These included baseless claims that Russia invaded to eliminate a neo-Nazi regime and that Ukraine had sought nuclear weapons.
“Effective disinformation often has kernels of truth in it,” McQuinn noted.
According to McQuinn, the primary goal of disinformation campaigns is to fragment societies and potentially incite violence. These campaigns become particularly effective when ordinary people share misleading content with their social networks.
“It is essential for China and for Russia, especially, to show that it looks like the Western project is decaying, is falling apart economically, politically, socially,” McQuinn explained.
The researcher also highlighted the increasing role of the United States as a source of disinformation affecting Canadians, noting that most Canadian social media discourse occurs on U.S. platforms. “We have seen that Canadian news and certain types of Canadian content are being downgraded and throttled within these algorithms,” he said.
While artificial intelligence has contributed significantly to the proliferation of disinformation on social media, McQuinn emphasized that CIPHER needed to harness the same technology to make debunking efforts more efficient. “We are in an AI arms race around disinformation,” he observed.
Currently, the tool is being utilized by DisinfoWatch, an organization dedicated to exposing falsehoods to Canadians. Marcus Kolga, DisinfoWatch’s founder, has called for stronger legislation and regulations on digital media platforms to prevent the spread of misinformation.
“Us doing it alone is not sufficient enough. It requires technology and for us to harness existing technologies in order to sort of make up that gap that we have,” Kolga said.
McQuinn revealed that discussions with government agencies about implementing CIPHER are underway, though he declined to provide specific details. The Canadian Institute for Advanced Research has received funding support from both federal and Alberta governments.
For everyday Canadians navigating social media, McQuinn offered simple advice to help stem the tide of disinformation: pause before sharing content online. “If I’m going to forward something, what am I forwarding?” he said. “The research has shown if you just take like an extra 10 seconds, the amount of disinformation that gets transferred is significantly less.”
As disinformation techniques continue to evolve with technology, tools like CIPHER represent important countermeasures in preserving information integrity and protecting democratic discourse from foreign interference.