The Viral Nature of Misinformation Revealed in Toronto Attack Coverage
In the weeks following the 2016 U.S. presidential election, Facebook CEO Mark Zuckerberg declared that his company takes misinformation seriously. Since then, combating “fake news” has become a critical concern for both technology companies and governments worldwide. Yet despite widespread recognition of the problem, opportunities to observe misinformation dynamics in real-time remain surprisingly rare.
A unique case emerged following the 2018 Toronto van attack when CBC journalist Natasha Fatah inadvertently created a natural experiment in misinformation spread. Fatah posted two competing eyewitness accounts on Twitter: one incorrectly describing the attacker as “angry” and “Middle Eastern,” and another accurately identifying him as “white.”
The results were revealing. The tweet containing the incorrect information about the attacker’s ethnicity generated substantially more engagement than the accurate one. Within roughly five hours of the attack, the misinformation tweet had spread significantly faster and wider than its accurate counterpart. Even more troubling, this pattern persisted over the next 24 hours, with the accurate information never catching up in engagement.
This stark contrast raises important questions about why misinformation travels so quickly through social networks and what can be done to address the problem.
The mechanics behind this phenomenon stem from both human psychology and technical factors. Twitter’s algorithm, like those of Facebook and YouTube, prioritizes content that generates high engagement. The company’s engineering team has revealed that its deep learning algorithm promotes content that has already received significant interaction through retweets and mentions.
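Twitter has not published its ranking model, but the engagement-first logic it describes can be illustrated with a minimal sketch. Everything below, the tweet fields, the weights, and the age decay, is an assumption chosen for illustration, not the platform’s actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    retweets: int
    replies: int
    likes: int
    age_hours: float

def engagement_score(tweet: Tweet) -> float:
    # Prior interactions drive the score, with a mild decay for age.
    # The weights are illustrative guesses, not Twitter's real model.
    interactions = 2.0 * tweet.retweets + 1.5 * tweet.replies + 1.0 * tweet.likes
    return interactions / (1.0 + tweet.age_hours)

def rank_timeline(tweets: list[Tweet]) -> list[Tweet]:
    # Content that has already attracted engagement is shown first,
    # regardless of whether it is accurate.
    return sorted(tweets, key=engagement_score, reverse=True)
```

The key property of any ranking rule of this shape is that accuracy never appears as an input: the only signal is how much interaction a tweet has already received.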
Human cognitive biases fuel the initial stage of this process. People are naturally drawn to content that confirms existing beliefs or taps into emotional triggers. Once a provocative or inflammatory tweet receives those initial interactions, the algorithm amplifies it, showing it to more users who then engage further. This creates a self-reinforcing cycle that can transform social media into what some critics call a “confirmation bias machine.”
This pattern played out clearly with Fatah’s tweets. A subset of her followers immediately engaged with the inflammatory, though incorrect, information about the attacker being “Middle Eastern.” This triggered the algorithm to show that tweet to more users, creating exponential growth in engagement. Meanwhile, the accurate tweet received minimal initial engagement and thus remained relatively invisible in the platform’s ecosystem.
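That “rich get richer” divergence can be made concrete with a toy simulation. The growth rates below are invented parameters, not measurements from Fatah’s tweets; the point is only that a head start in engagement compounds once the algorithm keeps resurfacing the leader.

```python
def simulate_amplification(initial_engagement: float, boost_per_round: float,
                           rounds: int) -> list[float]:
    """Toy amplification loop: each round the platform shows the tweet to an
    audience proportional to its current engagement, and some of those users
    interact, growing the next round's reach. All parameters are illustrative."""
    history = [initial_engagement]
    for _ in range(rounds):
        history.append(history[-1] * (1.0 + boost_per_round))
    return history

# A provocative tweet with a strong head start and higher per-round uptake
# pulls away exponentially; the accurate tweet never catches up.
misinfo = simulate_amplification(initial_engagement=50, boost_per_round=0.6, rounds=10)
accurate = simulate_amplification(initial_engagement=5, boost_per_round=0.2, rounds=10)
print(f"after 10 rounds: misinfo ~{misinfo[-1]:.0f}, accurate ~{accurate[-1]:.0f}")
```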
The persistence of misinformation on Twitter extends beyond just the initial event. Ten days after the Toronto attack, searches for the attacker’s name still surfaced tweets falsely claiming he was Muslim, potentially leading users to wrongly associate the attack with Islamic terrorism rather than its actual motivation.
Addressing these challenges requires both technical and human solutions. Twitter could implement several measures to curb misinformation spread during crisis events. The platform could prioritize official police or government accounts during emergencies, display warnings about the unreliability of early eyewitness accounts, or restrict visibility of trending misinformation once authorities have provided accurate information.
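None of these measures exists on the platform in this form, as far as public documentation shows. The sketch below is a hypothetical crisis-mode adjustment layered on top of an engagement score, with all source labels and multipliers assumed for illustration.

```python
OFFICIAL_SOURCE_LABELS = {"police", "government", "emergency_services"}  # assumed labels

def crisis_adjusted_score(base_score: float, author_label: str | None,
                          is_early_eyewitness: bool,
                          contradicted_by_authorities: bool) -> float:
    """Hypothetical re-ranking rule during a declared crisis event:
    boost official sources, down-rank unverified early eyewitness claims,
    and suppress content that authorities have since corrected."""
    if contradicted_by_authorities:
        return base_score * 0.05  # restrict visibility once accurate information exists
    if author_label in OFFICIAL_SOURCE_LABELS:
        return base_score * 3.0   # surface police and government accounts first
    if is_early_eyewitness:
        return base_score * 0.5   # pair with a warning that early reports are unreliable
    return base_score
```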
The company could also improve its search functionality to prevent misinformation from dominating results long after an event. Options include establishing an editorial team to monitor and remove false information from trending topics or creating a user reporting system specifically for misinformation.
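A reporting pipeline of the kind suggested here could feed directly into search ranking. The sketch below assumes a hypothetical report threshold and an editorial review flag; it is one possible design, not an existing Twitter feature.

```python
from dataclasses import dataclass

@dataclass
class SearchCandidate:
    tweet_id: str
    relevance: float                 # relevance to the search query
    misinformation_reports: int = 0  # user reports filed specifically for misinformation
    confirmed_false: bool = False    # set by an editorial review team

REPORT_THRESHOLD = 25  # assumed cutoff before a tweet is penalized pending review

def search_rank(candidates: list[SearchCandidate]) -> list[SearchCandidate]:
    def score(c: SearchCandidate) -> float:
        if c.confirmed_false:
            return 0.0  # keep debunked claims out of results entirely
        penalty = 0.5 if c.misinformation_reports >= REPORT_THRESHOLD else 1.0
        return c.relevance * penalty
    return sorted(candidates, key=score, reverse=True)
```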
However, technological solutions alone cannot solve the problem. Users, especially journalists, must develop greater awareness of how their actions can inadvertently spread misinformation. In Fatah’s case, she shared an eyewitness account without corroboration, even though the eyewitness himself had expressed uncertainty about what he saw.
This incident provides valuable insight into how misinformation operates in our digital ecosystem. While we can reasonably expect platforms to build algorithms that do not amplify our worst instincts, the responsibility also falls on users to exercise greater care in what they share and how they consume information online.
As social media continues to serve as a primary news source for many people, understanding the mechanics of misinformation spread becomes increasingly vital for maintaining an informed public discourse during critical events.
17 Comments
Combating misinformation on social media is an ongoing challenge. Proper fact-checking and moderation are critical to limiting the spread of false narratives, especially during breaking news events.
You raise a good point. Rapid dissemination of unverified information can have serious consequences, so platforms need robust systems to detect and curb misinformation.
While the motivation behind the spread of misinformation may vary, the consequences can be severe. Rigorous fact-checking and public awareness campaigns are crucial to curb the proliferation of false narratives.
Absolutely. Transparency and accountability from both social media companies and individual users are key to stemming the tide of misinformation.
This case study is a sobering reminder of the power of misinformation to rapidly propagate, even in the face of accurate information. Addressing this challenge will require a sustained, multi-stakeholder effort.
This case study highlights the troubling tendency for sensational but inaccurate information to gain more traction online. Addressing the root causes of this dynamic is essential for improving the quality of public discourse.
Agreed. Algorithms that prioritize engagement over accuracy contribute to this problem. Social media platforms must rethink their incentive structures to better promote factual reporting.
The rapid spread of misinformation, even in the face of contradictory facts, is a worrying trend. Combating this will require a holistic approach that addresses the technical, behavioral, and societal factors at play.
Misinformation dynamics like the ones described here threaten to undermine public trust and informed decision-making. Concerted action is needed to restore the integrity of online discourse.
Agreed. Enhancing digital literacy and empowering users to critically evaluate content should be a priority for policymakers and technology companies alike.
This case study highlights the need for social media platforms to prioritize accuracy and accountability over engagement-driven algorithms. Striking the right balance between free speech and misinformation control will be crucial.
You make a fair point. Striking the right balance is indeed challenging, but the integrity of public discourse must be the overriding priority.
The tendency for sensational but inaccurate information to gain traction online is a concerning phenomenon. Developing effective countermeasures will require sustained collaboration between tech companies, policymakers, and civil society.
The spread of misinformation is a complex challenge with no easy solutions. However, strengthening media literacy, enforcing platform accountability, and empowering users to critically assess content are all important steps.
Well said. A multi-faceted approach involving both technological and educational measures will be necessary to combat the scourge of online misinformation.
This case study underscores the urgency of addressing the systemic drivers of misinformation spread. Strengthening media literacy, platform accountability, and user empowerment are all crucial components of the solution.
Absolutely. A comprehensive, multi-faceted approach is needed to combat the complex challenge of online misinformation.