The truth appears to have a persuasive advantage over falsehoods, according to groundbreaking research that challenges conventional wisdom about how misinformation spreads in the digital age.

A comprehensive study published in the Journal of Personality and Social Psychology has found that truthful messages are not only more persuasive but also more likely to be shared than false information. The research, led by Nicolas Fay of the University of Western Australia, comprised four large-scale experiments with 4,607 participants aged 18 to 99.

The findings come at a critical time when concerns about misinformation have reached unprecedented levels. In recent years, false information has been blamed for hampering climate action, complicating public health responses, and eroding trust in institutions. Previous research had suggested that false information spreads faster online, particularly on platforms like X (formerly Twitter), leading many experts to conclude that misinformation holds an inherent advantage in digital environments.

However, the new study suggests that platform design, algorithms, and automated accounts may be more responsible for the spread of falsehoods than human preference. By removing these technological influences from their experiments, the researchers were able to isolate how people naturally respond to true versus false content.

“Our findings suggest that people are predisposed to the truth – both as information producers and consumers,” Fay and his colleagues concluded in the paper. “This is consistent with the finding that the majority of online misinformation is spread by a small group of supersharers.”

The research methodology involved two distinct types of experiments. In the “persuasion game,” participants created messages designed to convince others of a claim. In the “attention game,” they crafted content intended to capture maximum attention. For each type of experiment, the researchers conducted two variations: one where humans wrote the messages and another where OpenAI’s GPT-3.5 generated the content.

Messages were classified as being based on information believed to be true, information believed to be false, or created without constraints. A separate large group of participants then evaluated all messages on truthfulness, persuasiveness, emotional tone, and likelihood of sharing.

The results were remarkably consistent across all experiments. Truth-based messages were rated as more persuasive, more interesting, and more likely to be shared both online and offline. False messages often backfired, causing participants to believe the claims less rather than more.

Interestingly, when participants were free to create persuasive messages without constraints, they naturally defaulted to truthfulness. These unconstrained messages were almost as truthful as those written with explicit instructions to be accurate. This tendency weakened slightly when participants focused on grabbing attention, but even then, their messages remained substantially more truthful than deliberately false ones.

The study also revealed that AI-generated content consistently outperformed human-written messages in terms of persuasiveness and shareability, particularly when the AI was instructed to produce truthful content. This finding carries significant implications for the growing use of artificial intelligence in content creation and communication strategies.

While the truth advantage was clear, the researchers discovered that truthfulness itself was not the primary motivator for sharing. Instead, people were more likely to share content that evoked positive emotions and encouraged social interaction, regardless of its truthfulness.

The research team acknowledged several limitations to their work. The controlled experimental environment may not fully capture the complexity of real-world information ecosystems. Additionally, the participant pool was predominantly from Western, educated backgrounds, and the study did not examine factors like message repetition, social network effects, or source credibility.

Despite these limitations, the findings provide a more optimistic view of human information processing than recent discourse might suggest. While misinformation remains a serious challenge, this research indicates that truth may have inherent advantages that can be leveraged in the ongoing battle against false information.

