Navigating Misinformation in Election Season: How to Spot False Claims and Protect Democratic Discourse
With Canada potentially heading to the polls soon and U.S. midterms set for November 2026, citizens face an increasingly polluted information landscape. From sophisticated AI-generated videos to old-fashioned false claims, misinformation threatens public understanding of critical issues and could undermine trust in democratic institutions at a pivotal time.
The problem doesn’t require advanced technology to cause harm. In Vancouver recently, a city councillor shared allegations that other officials had used or distributed drugs—claims that were later traced back to the mayor, who subsequently apologized. This incident demonstrates how quickly misinformation can influence public debate regardless of technological sophistication.
Wes Regan, a PhD candidate at the University of British Columbia’s School of Community and Regional Planning, studies how polarization and emotion affect public decision-making. His work examines why misinformation spreads in urban planning and policy contexts, and explores methods for rebuilding trust in democratic processes.
According to Regan, there’s an important distinction between misinformation and disinformation. The former is often shared with good intentions by people who believe they’re being helpful, while disinformation is deliberately spread to create division, undermine institutional trust, or interfere with democratic processes like elections.
“Digital platforms, particularly social media algorithms, tend to reward sensational and controversial content,” Regan explains. This creates an environment where inflammatory content thrives. Addressing this pollution requires coordinated action from governments, corporations, and individuals.
The convincing nature of misinformation often stems from its emotional appeal. False information typically validates existing suspicions and biases, making it satisfying to both consume and share. It commonly contains elements of plausibility—legitimate concerns about corporate influence, for instance, can be weaponized to cast doubt on scientific consensus around vaccines or climate change.
As Canada approaches a potential election and the U.S. prepares for midterms, AI-generated political content presents new challenges. While AI may package misinformation more convincingly, Regan notes the fundamental dynamic remains unchanged—people are most susceptible to claims they’re already inclined to believe.
The more significant threat may be impersonation: AI technologies mimicking politicians or election officials to provide incorrect voting information. Though only a limited number of such cases have occurred so far, AI could make these attempts increasingly persuasive and difficult to detect.
However, Regan identifies a deeper cultural risk: “If we begin consulting an algorithmic oracle instead of engaging with one another, we sidestep the harder work of democracy—engaging across differences, negotiating, deciding for ourselves.” This technological dependency could further weaken social cohesion and civic engagement.
For individuals encountering questionable information, Regan recommends trusting intuition when something feels suspicious. Claims that seem sensational, too convenient, or portray opponents as one-dimensional villains warrant skepticism. He advises checking sources and remembering that traditional media, despite its limitations, operates under stronger regulatory frameworks than online influencers.
When confronting misinformation spreading within communities, context determines the appropriate response. Public correction works in some situations, while private conversation may be more effective in others. Research by MIT political scientist Adam Berinsky shows that corrections are often most effective when delivered by an “unlikely source”—someone perceived as less partisan or whose values align with those of the person exposed to misinformation.
As election campaigns intensify in both Canada and the United States, the ability to navigate this complex information landscape becomes increasingly crucial for maintaining healthy democratic discourse and ensuring that citizens can make informed electoral choices based on facts rather than manipulation.