Russian disinformation bots targeted U.S. voters on Ukraine policy before the November midterm elections, deploying sophisticated campaigns to influence closely contested House and Senate races. This campaign represents just one example of a pervasive global issue with online misinformation.
With an estimated 5 billion people worldwide accessing the internet for information, entertainment, and social connection, the spread of false information has reached alarming levels. According to recent studies, approximately 40% of internet users admit they have unintentionally shared misinformation online.
Joe Carrigan, senior security engineer at the Johns Hopkins University Information Security Institute, finds this statistic unsurprising. As co-host of the popular CyberWire podcast “Hacking Humans,” Carrigan specializes in analyzing social engineering scams and the automated systems behind them.
“Social media bots are automated programs that engage with users on social media by mimicking human users,” Carrigan explains. “Some are fully autonomous, but others are partially controlled by people.”
The prevalence of these bots varies considerably across platforms. While Twitter (now X) estimates range from 8% to 12% of accounts, with some outlier estimates as high as 80%, Facebook parent company Meta suggests approximately 5% of monthly active accounts are fake—a figure Carrigan believes may be conservative.
The security expert’s position is unequivocal: “People should not get their news from social media—period. They should totally discount any news delivered by that method. It’s a tough stance, but it’s the only way to be sure. View any news content on social media with skepticism, and hesitate before you share it.”
To navigate today’s complex information landscape, Carrigan recommends several practical strategies. First, acknowledging inherent media bias is essential. “A lot of outlets are owned and operated by either political parties, their allies, or foreign governments, and carry biases as a result,” he notes. Tools like AllSides can help categorize and understand these biases.
Carrigan also encourages internet users to establish a curated list of trusted media sources, preferably starting with centrist outlets. He advises distinguishing between news reporting and opinion pieces—a distinction that has blurred in recent years as “media outlets allow opinion to masquerade as news reporting.”
The security expert recommends avoiding news sources on the extreme ends of the political spectrum, naming Fox News, CNN, and MSNBC as examples of outlets AllSides Media categorizes as having strong partisan leanings. Cross-verification with unaffiliated sources is another key strategy.
For content verification, Carrigan suggests utilizing fact-checking resources like Snopes, PolitiFact, FactCheck.org, and Lead Stories. However, he cautions that even these platforms have their own biases. “I would never trust a big tech firm such as Google as an arbiter of truth. There are too many perverse incentives for any big tech firm here. It is best to look for a non-profit organization that does its best to be fair and non-partisan.”
The challenge has grown more complex with the rise of “deepfake” technology, which uses artificial intelligence to create or alter images and videos that appear authentic. “It’s more challenging than ever before to identify whether what you are seeing is real,” Carrigan warns. “Videos can be altered in a variety of ways, from being taken out of context to deceptive editing.”
To combat this threat, he recommends resources like The Washington Post’s Fact Checker and the Poynter Institute. Technical tools can also help: Google’s reverse image search can reveal where and when an image has appeared before, while specialized applications like InVID can help verify video authenticity.
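Reverse image search and verification tools of this kind rely in part on perceptual hashing: reducing an image to a compact fingerprint that survives minor edits, so near-duplicates can be matched against known originals. The sketch below is a minimal illustration of that idea, using a toy average-hash over made-up 4×4 grayscale grids; it is not how any of the named tools are actually implemented, which operate on real image data with far more robust hashes.

```python
# Toy perceptual-hash sketch (pure Python, hypothetical 4x4 "images").
# The core idea: hash perceptually, then compare hashes by Hamming
# distance -- a small distance suggests the images are near-duplicates.

def average_hash(pixels):
    """Hash a grayscale image (2D list of 0-255 ints) to a bit string.

    Each pixel maps to 1 if it is brighter than the image's mean
    brightness, else 0. Visually similar images yield similar strings.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# An "original", a slightly brightened copy, and an unrelated pattern.
original  = [[10, 200, 10, 200]] * 4
tweaked   = [[12, 205, 12, 205]] * 4   # minor edit, same structure
unrelated = [[200, 10, 200, 10]] * 4   # inverted pattern

h_orig = average_hash(original)
print(hamming_distance(h_orig, average_hash(tweaked)))    # prints 0
print(hamming_distance(h_orig, average_hash(unrelated)))  # prints 16
```

The tweaked copy hashes identically to the original despite the pixel-level edit, while the unrelated image is maximally distant, which is exactly the robustness-to-minor-edits property that makes perceptual hashes useful for tracing recycled or lightly altered imagery.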
Ultimately, Carrigan believes the traditional adage “Don’t believe anything you hear and only half of what you see” needs updating for the digital age. His modern version: “Don’t believe anything you read on social media, and only half of the videos are genuine, and even those are probably taken out of context to serve some political agenda.”
As misinformation continues to evolve in sophistication, developing critical media literacy skills has become essential for navigating today’s information ecosystem.