Canadian researchers are turning to artificial intelligence to combat a growing wave of online disinformation targeting national unity and public perception, according to experts leading the effort.

The Canadian Institute for Advanced Research has enhanced its disinformation detection tool, CIPHER, with AI capabilities to keep pace with the increasing volume of false and misleading claims circulating online. The system, which initially focused on Russian disinformation campaigns, is now expanding to analyze Chinese-language content and potentially material originating from the United States.

“Russia was the main threat targeting Canada most generally,” said Brian McQuinn, an associate professor at the University of Regina and co-lead of the CIPHER project. “We are now beginning to shift.”

McQuinn cited a recent example in which the system flagged a Russian media outlet falsely reporting that Alberta was moving toward independence. While separatist movements exist in the province and have reportedly engaged with U.S. officials, no formal separation process is underway.

“Effective disinformation often has kernels of truth in it,” McQuinn explained.

CIPHER was launched three years ago following a report by McQuinn and colleagues that uncovered pro-Kremlin social media accounts targeting both far-right and far-left groups in Canada with false narratives about the war in Ukraine. These included unfounded claims that Russia invaded to eliminate a neo-Nazi regime and that Ukraine had pursued nuclear weapons.

According to McQuinn, the ultimate goal of these disinformation campaigns is to fracture social cohesion and potentially provoke violence. The campaigns become particularly effective when ordinary citizens share the content with their personal networks.

“It is essential for China and for Russia, especially, to show that it looks like the Western project is decaying, is falling apart economically, politically, socially,” he said.

The researchers have identified the United States as an increasingly significant source of disinformation affecting Canada, noting that most Canadian social media discourse takes place on American platforms.

“We have seen that Canadian news and certain types of Canadian content are being downgraded and throttled within these algorithms,” McQuinn observed.

While artificial intelligence has contributed to the proliferation of disinformation on social media, the CIPHER team recognized that the same technology was necessary to streamline their fact-checking efforts. “We are in an AI arms race around disinformation,” McQuinn acknowledged.

The researchers aim to make CIPHER available to government agencies and non-profit organizations. Currently, the tool is being utilized by DisinfoWatch, an organization dedicated to exposing falsehoods to the Canadian public.

Marcus Kolga, founder of DisinfoWatch, has called for stronger legislation and regulations on digital platforms to prevent the spread of misinformation through social media accounts.

“Us doing it alone is not sufficient enough. It requires technology and for us to harness existing technologies in order to sort of make up that gap that we have,” Kolga stated.

McQuinn revealed that discussions have taken place with government agencies regarding the potential adoption of CIPHER, though he declined to provide specific details. The Canadian Institute for Advanced Research has received funding support from both the federal government and the province of Alberta.

To combat the spread of disinformation, McQuinn advises Canadians to pause before sharing content on social media. “If I’m going to forward something, what am I forwarding?” he suggested. “The research has shown if you just take like an extra 10 seconds, the amount of disinformation that gets transferred is significantly less.”

As foreign influence operations continue to evolve, tools like CIPHER represent a critical counterbalance in preserving information integrity within Canada’s digital landscape. The intersection of human fact-checking expertise and advanced AI capabilities offers a promising approach to identifying and countering sophisticated disinformation campaigns targeting Canadian society.

