Flagging COVID-19 Misinformation on Twitter Proven Effective, UAH Study Finds
Social media users are significantly less likely to trust tweets containing coronavirus misinformation when they’re flagged as potentially false, according to groundbreaking research from The University of Alabama in Huntsville (UAH).
The study, published in a peer-reviewed journal, was conducted by three UAH professors: Dr. Candice Lanius, assistant professor of communication arts; Dr. William “Ivey” MacKenzie, associate professor of management; and Dr. Ryan Weber, associate professor of English.
“America is dealing both with a pandemic and an infodemic,” explained Dr. Lanius, referencing a term introduced in a 2020 joint statement by the World Health Organization, United Nations, and other global health organizations. “The infodemic draws attention to our unique contemporary circumstances, where there is a glut of information flowing through social media and traditional news media.”
The researchers found that warning flags on tweets significantly reduced users’ perception of credibility. When tweets were marked as potentially coming from automated bot accounts or containing misinformation, survey respondents rated them as less credible, useful, accurate, relevant and interesting.
“Often, attempts to correct people’s misperceptions actually cause them to dig in deeper to their false beliefs, a process that psychological researchers call ‘the backfire effect,’” said Dr. Weber. “But in this study, to our pleasant surprise, we found that flags worked.”
The study employed an innovative branching survey methodology. Researchers first asked participants about their views on COVID-19 case reporting—whether they believed numbers were being underreported, overreported, accurately reported, or if they had no opinion. Based on these responses, participants were then shown tweets that aligned with their existing beliefs.
This approach allowed researchers to examine whether people would be skeptical of misinformation even when it confirmed their preexisting views. The results suggest that warning labels can be effective even in these challenging circumstances.
“We were interested to see how people would respond to bots and flags that echoed their own views,” Dr. MacKenzie noted. The automated survey design ensured participants saw content aligned with their stated beliefs about COVID-19 reporting.
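To make the branching design concrete, here is a minimal sketch in Python of how such a survey might route participants to belief-congruent tweets and randomly assign a warning-flag condition. The category names, tweet text, and flag labels below are hypothetical illustrations, not the study's actual stimuli or procedure.

```python
import random

# Hypothetical tweet pools keyed by a participant's stated belief about
# COVID-19 case reporting; the UAH study's actual stimuli are not reproduced here.
TWEETS_BY_BELIEF = {
    "underreported": ["Example tweet claiming case numbers are undercounted ..."],
    "overreported": ["Example tweet claiming case numbers are inflated ..."],
    "accurate": ["Example tweet asserting official counts are correct ..."],
    "no_opinion": ["Example neutral tweet about case reporting ..."],
}

# Hypothetical flag conditions: no warning, a suspected-bot label, or a
# misinformation label.
FLAG_CONDITIONS = ["no_flag", "bot_flag", "misinformation_flag"]


def assign_stimulus(stated_belief: str) -> dict:
    """Route a participant to a belief-congruent tweet and randomly assign
    one of the flag conditions (an illustrative sketch only)."""
    pool = TWEETS_BY_BELIEF.get(stated_belief, TWEETS_BY_BELIEF["no_opinion"])
    return {
        "tweet": random.choice(pool),
        "flag": random.choice(FLAG_CONDITIONS),
    }


# Example: a participant who says cases are overreported sees an
# overcounting tweet, possibly labeled as bot-generated or as misinformation.
print(assign_stimulus("overreported"))
```

The point of routing on the initial belief question is that every participant evaluates content that confirms their own view, so any drop in perceived credibility can be attributed to the flag rather than to disagreement with the tweet.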
The research identified several concerning patterns in vulnerability to misinformation. Those who consume more news media, particularly right-leaning outlets, appeared more susceptible to COVID-19 misinformation. The researchers offered several possible explanations for this finding.
First, media that relies heavily on ideological and emotional appeals may condition viewers to process information peripherally, making decisions based on cues other than the strength of arguments. Second, as scientific guidance evolved throughout the pandemic, some viewers perceived this as inconsistency, contrasting it with the more static messaging from certain media outlets.
Geography also emerged as a factor in COVID-19 skepticism. According to data from the American Communities Project, many consumers of right-leaning news media live in rural areas, which initially experienced fewer direct impacts from the pandemic compared to urban centers during the early 2020 outbreak.
The study found that flags indicating a tweet came from a suspected bot account and contained misinformation made users less willing to engage with the content by liking or sharing it. However, not all respondents reacted equally to these warning flags.
“Some people showed more immunity to the flags than others,” Dr. Weber noted. “For instance, Fox News viewers and those who spent more time on social media were less affected by the flags than others.” The flags also proved less effective at changing participants’ overall views about COVID-19 case numbers, though some participants—particularly those who initially believed case numbers were being overcounted—did reconsider their positions.
Dr. MacKenzie emphasized the positive implications of the research: “As a whole, our research would suggest that individuals want to consume social media that is factual, and if mechanisms are in place to allow them to disregard false information, they will ignore it. I think the most important takeaway from this research is that identifying misinformation and bot accounts will change social media users’ behaviors.”
The findings arrive at a critical time as social media platforms continue to experiment with various approaches to combat misinformation during global health crises. The research suggests that clear warning labels represent a promising strategy in the ongoing battle against the “infodemic” that has accompanied the COVID-19 pandemic.
16 Comments
This research highlights the importance of critical thinking and media literacy education. Equipping users with the skills to evaluate online content can complement platform-level interventions.
Absolutely. Empowering individuals to think critically about the information they consume is a crucial piece of the puzzle in addressing the infodemic.
This research is important for combating COVID-19 misinformation on social media. Flagging tweets as potentially false can help reduce the spread of harmful falsehoods.
Agreed. Identifying and labeling misinformation is a crucial step in protecting public health during the pandemic.
I’m curious to know more about the specific methods and sample size of this study. The results seem promising, but I’d like to understand the research design in more depth.
That’s a fair point. The article could have provided more details on the study’s methodology and limitations. Transparency is important for evaluating the strength of the findings.
It’s good to see empirical evidence that these content moderation efforts are effective. Social media platforms should continue implementing strategies to curb the infodemic.
Absolutely. Misinformation can have serious consequences, so these findings highlight the value of proactive fact-checking and labeling.
It’s encouraging to see social media platforms taking steps to combat the spread of misinformation. However, the long-term success of these efforts will depend on continued vigilance and innovation.
Exactly. Misinformation is an evolving challenge, so platforms must remain proactive and adaptable in their response strategies.
While flagging misinformation is helpful, it’s also important to consider the broader societal factors that contribute to the spread of false narratives. This is a complex challenge requiring a multifaceted approach.
Agreed. Tackling misinformation requires addressing the underlying issues that make people susceptible to believing and sharing false information in the first place.
This research provides valuable insights, but I wonder how the findings might translate to other types of misinformation beyond COVID-19. The principles could potentially apply more broadly.
That’s a good point. If the labeling approach proves effective for COVID-19 misinformation, it could potentially be extended to address other forms of online falsehoods as well.
This is an important step, but more work is needed to address the root causes of COVID-19 misinformation. Improving digital literacy and source evaluation skills should also be a priority.
Excellent observation. Empowering users to critically assess online information is crucial, alongside content moderation efforts by platforms.