Capitol Riot Exposes Growing Threat of Deepfakes and AI-Driven Disinformation
The unprecedented mob assault on the U.S. Capitol on January 6 represents perhaps the most stunning collision yet between online disinformation and real-world violence, raising urgent questions about the future of truth in democratic societies.
Supporters of then-President Donald Trump who stormed Congress did so believing the U.S. election had been stolen from them. For weeks, they had consumed unproven narratives about “ballot dumps,” manipulated voting machines, and alleged Democratic corruption in major cities. Some rioters, including the woman who was fatally shot, were motivated by the thoroughly discredited QAnon conspiracy theory that portrays Democratic Party leaders as a pedophile ring and Trump as their nemesis.
While the Trump presidency has ended, experts warn that disinformation’s corrosive effects on democracy may be far from reaching their peak. Technological developments and increasing social media polarization suggest more dangerous scenarios lie ahead.
The current landscape of disinformation could soon be dramatically amplified by increasingly sophisticated artificial intelligence. Imagine video footage from the Capitol riot manipulated to replace Trump supporters’ faces with those of known antifa activists. Such altered content would have bolstered the baseless “false flag” narratives that attempted to blame left-wing groups for the violence.
The technology for such deception not only exists—it’s rapidly becoming more sophisticated and accessible.
Deepfake videos, which use AI to create synthetic media showing people saying or doing things they never said or did, have already begun migrating from their primary use in pornography into political contexts. A deepfake showing former President Barack Obama using an expletive to describe Trump has garnered over eight million views since 2018.
While early deepfakes contained noticeable flaws, the technology has advanced remarkably. Many experts now believe the most sophisticated AI-generated videos will soon become impossible for humans to distinguish from authentic footage. Last year, a deepfake specialist using freely available software “de-aged” actors Robert De Niro and Joe Pesci in “The Irishman,” with results many critics considered superior to the film’s professional visual effects.
“This is disastrous to any liberal democratic model because in a world where anything can be faked, everyone becomes a target,” warns Nina Schick, author of “Deepfakes — The Coming Infopocalypse.” “But even more than that, if anything can be faked… everything can also be denied. So the very basis of what is reality starts to become corroded.”
This erosion of shared reality was evident in the aftermath of the Capitol riot. When Trump released a video statement the following day, some supporters who felt betrayed dismissed it as a deepfake, illustrating how the mere existence of this technology can undermine authentic content.
Text-based disinformation also faces a significant transformation through programs like GPT-3, an artificial intelligence system capable of generating articles that are often indistinguishable from human writing. Such technology could flood social media with fabricated news stories at unprecedented scale, overwhelming fact-based reporting with synthetic misinformation.
While society has struggled with written fake news for years and photo manipulation has long been possible, AI-generated videos and synthetic articles represent a more profound threat to reality-based discourse by attacking our most basic assumptions about verifiable information.
There is no simple solution to this growing crisis. Social media platforms are developing their own AI detection tools to identify fake content, though bad actors continually adapt to evade these safeguards. Stricter platform policies and faster removal of suspicious content may help limit the spread of synthetic disinformation.
Governments worldwide face mounting pressure to regulate technology companies and promote digital literacy. Educational programs teaching citizens to critically evaluate online content will become increasingly vital as synthetic media becomes more prevalent.
The power of fake news ultimately depends on those who believe and spread it. As the Capitol riot demonstrated, the consequences of mass delusion fueled by disinformation are no longer theoretical but pose tangible threats to democratic institutions and public safety.
Without coordinated efforts from technology companies, governments, and civil society, the gap between objective reality and manufactured falsehoods threatens to widen further, potentially making January 6 a preview rather than a culmination of disinformation’s destructive potential.
16 Comments
Preserving democratic institutions in the face of disinformation will require a delicate balance between protecting free speech and curbing the spread of falsehoods. Finding the right policy solutions won’t be easy, but it’s essential for the health of our democracies.
I agree, it’s a challenging issue with no easy answers. But the stakes are too high to ignore. We need to have tough, nuanced conversations about the appropriate role of government, tech platforms, and civil society in addressing this threat.
While the focus of this article is on the political implications of disinformation, I wonder if there are also economic and financial impacts that we need to be aware of, especially in sectors like mining and commodities.
That’s a great point. Disinformation could potentially disrupt supply chains, influence investment decisions, and even impact commodity prices in industries like mining. It’s an angle worth exploring further.
The Capitol riot was a stark reminder of how quickly online falsehoods can translate into real-world chaos. Restoring trust in democratic institutions will require a multi-pronged approach targeting the root causes of disinformation.
Agreed. Improving digital literacy, regulating social media platforms, and investing in quality journalism will all be crucial in the fight against disinformation.
This is a concerning trend. Disinformation can sow division and undermine the democratic process. It’s crucial that we find ways to tackle the spread of false narratives online, while still preserving free speech and open discourse.
You’re right, the growing threat of AI-driven deepfakes is particularly worrying. We need robust fact-checking and media literacy initiatives to help people navigate the information landscape more critically.
The threat of disinformation to democratic institutions is a global challenge that requires international cooperation and coordination. No single country or platform can solve this problem alone.
Absolutely. Developing shared standards, best practices, and cross-border enforcement mechanisms will be essential in combating the transnational nature of disinformation campaigns.
I’m curious to hear more about the specific technological developments that could amplify the current disinformation landscape. What emerging AI and social media trends are experts most concerned about?
That’s a great question. From what I’ve read, the rapid advancements in AI-powered deepfakes and the increasing polarization of social media echo chambers are two of the key trends that experts are closely monitoring.
While the Trump presidency may be over, the underlying issues that led to the spread of disinformation remain. This is a challenge that will require sustained, bipartisan efforts to address.
That’s a good point. Disinformation is a complex, multi-faceted problem that won’t be solved overnight. It will take time, resources, and a concerted, collaborative approach to make real progress.
This is a sobering reminder that the battle against disinformation is far from over. As technology continues to evolve, the need for robust, proactive measures to safeguard democratic institutions will only become more urgent.
Well said. Tackling disinformation will require a sustained, multifaceted effort from policymakers, tech companies, media organizations, and citizens alike. It’s a daunting challenge, but one that’s critical to get right.