A new artificial intelligence tool developed in Canada could provide Europe with a powerful weapon against Russian disinformation campaigns. The AI agent, named Cipher, has successfully demonstrated its ability to detect and analyze Russian disinformation targeting Canadian networks across the political spectrum.
With the English-language testing phase complete, the system’s developers are now training it to recognize similar narratives in Russian-language content. Researchers aim to deploy the technology in Europe, particularly in countries on the front lines of combating Russian operations designed to undermine Western democracies.
“What would usually take, for an analysis piece, maybe half a day to a day, was really being crunched down to a few hours, and was scarily accurate,” said Marcus Kolga, who runs the DisinfoWatch platform in Canada and was an early tester of the technology.
Cipher operates as a “human-in-the-loop” AI system, requiring user oversight while automating the detection process. This approach enables monitoring efforts to scale up significantly, helping analysts manage the overwhelming volume of disinformation produced by Russian operations.
During testing, the tool uncovered persistent Russian efforts to sway Canadian public opinion on Ukraine, with the clear objective of eroding support for one of Kyiv’s strongest allies.
The project began three years ago under the AI safety research program at the Canadian Institute for Advanced Research (CIFAR). Brian McQuinn, an associate professor in international studies at the University of Regina, collaborated with Matthew Taylor, an associate professor in computing science at the University of Alberta, who led a five-person engineering team to develop the sophisticated software.
“We are able to track and show on a day-to-day basis where the Russian networks are investing their limited resources, what themes they are targeting day in, day out, and how they are changing over the weeks and over the months,” McQuinn explained. “It really shows you the extent to which they are responding to events almost in real time.”
The software goes beyond simply identifying disinformation sources—it maps their spread and analyzes trending themes, providing experts with detailed insights into how Russian operations evolve over time. This comprehensive approach allows researchers to understand tactical shifts in the information warfare landscape.
Russian disinformation typically follows a predictable path: state-controlled outlets like RT and Sputnik release deceptive narratives, which are then amplified by social media influencers before spreading across broader networks. Occasionally, these narratives penetrate mainstream news coverage, giving them additional legitimacy.
Kolga, who monitors various forms of foreign interference in Canada, notes that Russian campaigns demonstrate remarkable adaptability, exploiting geopolitical developments for maximum political impact. For example, when the U.S. attempted to capture Venezuelan leader Nicolás Maduro in January, Russian operations quickly pushed narratives portraying Western governments as imperialist regimes engaged in illegitimate regime change.
Similarly, when former Canadian Deputy Prime Minister Chrystia Freeland announced her resignation to become President Volodymyr Zelensky’s economic advisor, Kolga observed a surge in targeted attacks aiming to discredit her by reviving claims that her Ukrainian grandfather collaborated with Nazis during World War II—allegations her office has previously denied.
After identifying these campaigns, Kolga compiles analysis reports that aim to “inoculate” audiences against disinformation by presenting facts and historical context that counter Russian narratives.
The Cipher team is now working to enhance the system’s predictive capabilities, potentially allowing experts to anticipate Russian disinformation strategies before they fully develop. This “pre-bunking” approach could significantly limit the spread and impact of false narratives.
“You can actually start to predict the narratives that are going to come, and therefore you’re pre-bunking them, and in many ways that’s the most effective strategy,” McQuinn explained.
The researchers hope to pilot Cipher in the Baltic states and eventually Ukraine, supporting local efforts to combat disinformation. These regions face particularly intense Russian information operations due to their strategic importance and historical ties to Russia.
“For us to have this sort of a tool to help us defend against it is a small step, but it’s an important step in eventually building some resilience and inoculating Canadians against these information operations,” said Kolga, highlighting the technology’s potential long-term impact on democratic resilience.
As disinformation continues to present a significant threat to democratic processes worldwide, technologies like Cipher represent an important advancement in the digital defense toolkit for Western democracies facing sophisticated influence operations from hostile actors.
Comments
While the potential of this Canadian AI tool is promising, I’m concerned about the privacy implications of using such powerful technology to monitor online content. Proper safeguards must be in place.
Valid concerns. The developers will need to ensure robust data protection and transparency measures to build public trust. Balancing security needs and civil liberties is crucial in this domain.
Impressive to see Canadian AI software taking on Russian disinformation campaigns. This could be a powerful tool to help counter the flood of false narratives targeting democracies in Europe.
Agreed, the ability to quickly detect and analyze disinformation at scale is critical. Curious to see how this technology performs on Russian-language content as well.
I’m curious to see how this AI system performs in real-world conditions. Detecting and analyzing disinformation is one thing, but actually countering its effects on the public is another challenge.
That’s a good point. Deploying this technology is just the first step. Effectively neutralizing the impact of Russian propaganda will require a comprehensive, multi-faceted approach involving both technological and societal solutions.
While the speed and accuracy of this AI system are impressive, I’m concerned about the potential for false positives or missed content. Rigorous testing will be crucial before deployment.
That’s a valid point. Safeguards and ongoing monitoring will be essential to ensure the system’s reliability and prevent unintended consequences. The human oversight component is key in that regard.
This is a welcome development in the ongoing battle against state-sponsored propaganda. Kudos to the Canadian researchers for creating a tool that can help protect democratic institutions in Europe.
Indeed, innovative solutions like this are desperately needed. Let’s hope this technology can make a real difference in combating the flood of Russian disinformation targeting vulnerable populations.
It’s encouraging to see Canada taking a proactive stance against Russian disinformation. Exporting this AI tool to help protect European democracies is a smart move.
Absolutely. Sharing this technology internationally can strengthen the global response to malicious information campaigns. Curious to see the results when deployed in real-world settings.
I’m skeptical about how effective this AI system will be against the sophisticated disinformation tactics used by Russia. Automating detection is a start, but human analysts will still be crucial.
That’s a fair point. Even with advanced AI, disinformation evolves quickly, so a human-in-the-loop approach is important to maintain vigilance and adapt to new tactics.
Disinformation is a serious threat to democracy, so I’m glad to see innovative solutions being developed to combat it. Automating the detection process while maintaining human oversight seems like a smart approach.
Yes, the ‘human-in-the-loop’ model allows analysts to focus on the most critical content rather than getting overwhelmed. This could be a game-changer in the fight against state-sponsored propaganda.