Small digital speed bumps in the online world could help stem the tide of misinformation spreading across social media platforms, according to new research from the University of Copenhagen.
In today’s social media landscape, sharing content has become almost frictionless. A quick tap on a “share” or “like” button instantly broadcasts content to hundreds or thousands of followers. This seamless design has revolutionized how information travels, but it comes with significant downsides—particularly when it comes to misinformation.
Research has consistently shown that false or misleading content tends to spread faster and more widely than factual information on social platforms. This phenomenon is partly driven by algorithmic systems that prioritize sensational, highly shared posts to maximize user engagement.
The Danish researchers propose a surprisingly simple yet potentially effective solution: introduce small “digital frictions” into the sharing process. These minor obstacles could give users a moment to pause and reflect before sharing content that might be problematic.
“Our idea is to introduce a small pause in the sharing process to make people reflect on what they’re sharing before clicking the button,” explains lead author Laura Jahn, a Ph.D. student who conducted the study alongside Professor Vincent F. Hendricks.
The team developed a computer model simulating information spread on platforms like X (formerly Twitter), Bluesky, and Mastodon. Their findings, published in the journal npj Complexity, suggest that even minor impediments to sharing—such as a pop-up message—could significantly reduce impulsive content sharing.
However, the research revealed an important nuance. While friction alone reduced sharing volume, it didn’t necessarily improve the quality of what was being shared. To address this limitation, the researchers added a critical learning component to their model.
“It could be a pop-up with a short quiz asking questions like: How is misinformation defined, and what does this social media platform do to limit fake news?” Hendricks explains. “The idea is that this learning element will prompt users to reflect on their behavior and share fewer problematic posts.”
The results were encouraging. When friction was combined with learning elements, the average quality of shared posts increased significantly according to their model. This suggests that not only would less content be shared, but the material that does get distributed would be of higher quality and reliability.
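The dynamic is easy to illustrate with a toy agent-based simulation. The sketch below is not the researchers' published model; the agent behavior, friction strength, learning rate, and quality thresholds are all illustrative assumptions, chosen only to show why friction alone tends to cut sharing volume while friction combined with a learning prompt also raises the average quality of what gets shared.

```python
# Toy sketch (illustrative assumptions, not the authors' model): agents decide
# whether to reshare posts of varying quality. "Friction" lowers the chance of
# an impulsive share; a "learning" pop-up gradually raises each agent's quality
# threshold, making it choosier about what it passes on.
import random

random.seed(42)

N_AGENTS = 1_000
N_ROUNDS = 50

def simulate(friction=0.0, learning_rate=0.0):
    """Return (total shares, mean quality of shared posts)."""
    thresholds = [0.2] * N_AGENTS          # initial, low bar for resharing
    shares, quality_sum = 0, 0.0

    for _ in range(N_ROUNDS):
        for i in range(N_AGENTS):
            post_quality = random.random()                    # quality of the post seen this round
            impulsive = random.random() < (1.0 - friction)    # friction suppresses snap shares
            if impulsive and post_quality >= thresholds[i]:
                shares += 1
                quality_sum += post_quality
            elif not impulsive and learning_rate > 0:
                # A quiz-style pop-up is shown instead of sharing: the agent
                # becomes slightly more selective next time.
                thresholds[i] = min(0.9, thresholds[i] + learning_rate)

    mean_quality = quality_sum / shares if shares else 0.0
    return shares, mean_quality

for label, kwargs in [
    ("no intervention",     dict(friction=0.0, learning_rate=0.0)),
    ("friction only",       dict(friction=0.3, learning_rate=0.0)),
    ("friction + learning", dict(friction=0.3, learning_rate=0.02)),
]:
    shares, q = simulate(**kwargs)
    print(f"{label:>20}: {shares:6d} shares, mean quality {q:.2f}")
```

Run under these assumptions, the friction-only condition produces fewer shares but roughly the same average quality, while adding the learning element both reduces volume and pushes average quality upward, mirroring the pattern the study reports.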
The concept represents a middle ground in the ongoing debate about how to address misinformation. Rather than relying solely on heavy-handed content moderation or algorithmic interventions that might raise concerns about censorship, this approach empowers users to make more thoughtful decisions about what they share.
The team’s findings come at a critical time, as social media companies face mounting pressure to address the rapid spread of false information on their platforms. Major tech companies including Meta (parent company of Facebook and Instagram) and X have implemented various strategies to combat misinformation, but the problem remains persistent.
The next challenge for the Copenhagen researchers is to test their model in real-world settings. “We hope our proposal will inspire tech giants to think innovatively in the fight against misinformation,” say Jahn and Hendricks. “They could help us test the promising computer model to see whether engagement with low-quality content decreases and whether users become better at recognizing misinformation in real situations.”
If collaborations with major platforms aren’t feasible, the team plans to utilize simulated environments designed for research purposes.
The approach offers a potentially valuable tool in the growing arsenal of methods to combat online misinformation. While it wouldn’t eliminate the problem entirely, introducing thoughtful friction into digital environments could help slow the viral spread of false content that has become increasingly problematic in our interconnected world.
As social media continues to shape public discourse and information consumption, simple interventions like these may prove instrumental in creating healthier online ecosystems where quality information has a better chance to prevail over sensationalism.
9 Comments
I’m curious to see if these ‘digital speed bumps’ would actually be effective in practice. Misinformation often spreads rapidly due to emotional reactions and confirmation bias. A short delay may not be enough to overcome those psychological factors.
That’s a good point. Overcoming human biases is a big challenge. The researchers would need to carefully design the friction to prompt more critical thinking, not just a momentary pause.
As someone with a background in the energy/commodities space, I think this idea could have real-world applications. Miners, producers, and investors often need to share time-sensitive information quickly. But a brief pause to reflect could catch some erroneous claims before they go viral.
As someone who works in the mining/commodities industry, I’m curious how this research could be applied to information sharing about things like new mineral discoveries, production updates, or commodity price movements. Slowing the spread of unverified claims could be very valuable.
This seems like a common-sense approach to a complex issue. Anything that gives users a moment to reflect before amplifying content could make a meaningful difference. Even small frictions may help counteract the acceleration of misinformation.
I wonder if these ‘digital frictions’ could be tailored for different types of content. For highly sensitive topics like elections or public health, stronger friction may be warranted compared to more innocuous news about, say, a new lithium mine opening.
That’s a good point. Context-specific friction levels could help strike the right balance between friction and user experience. The goal should be to slow problematic content without overly burdening legitimate information sharing.
Overall, this seems like a promising avenue for research. Anything that can slow the rapid spread of misinformation, without unduly restricting the free flow of information, is worth exploring further. I’m curious to see if this concept gains more traction in the tech industry.
This is an interesting study. Adding some friction to the sharing process could help slow the spread of misinformation, which is a major problem these days. Even a brief pause might make people think twice before sharing something questionable.