Social media platforms have fundamentally transformed how news spreads, often blurring the lines between fact and fiction in ways that have profound implications for public understanding of current events.
In the aftermath of July’s devastating floods in Texas, social media feeds filled with dramatic videos showing rising waters, sirens, and terrified witnesses. Many viewers saw footage of an above-ground pool being swept away, a ranch submerged in murky water, and pedestrians fleeing from rapidly approaching floodwaters.
These viral videos were compelling and emotionally charged—but many weren’t actually from Texas at all. The pool video dated back to 2022, the ranch footage came from Tennessee, and the pedestrians fleeing rising waters were actually running from China’s Qiantang River.
This pattern repeats whenever tragedy strikes. Some social media accounts deliberately search for similar footage from past events, removing identifying elements or adding dramatic sounds to maximize engagement. These tactics transform genuine concern and grief into profitable clicks, demonstrating how easily misinformation spreads when emotional content appears authentic.
The shift to social media as a primary news source has accelerated dramatically in recent years. According to Pew Research, 86% of American adults now get at least some news from digital devices, making them by far the most common news platform. For adults aged 18-29, social media has become the dominant news source, and 54% of Americans across all age groups get at least some of their information from platforms like YouTube, X (formerly Twitter), Facebook, Snapchat, and TikTok.
This transformation builds on changes already underway in traditional journalism. Television news, once considered primarily a public service in the 1950s and 1960s, evolved in the 1980s when entertainment conglomerates began acquiring networks and expecting news divisions to generate profits like other departments. Cable television then ushered in the 24-hour news cycle and the rise of pundit commentary.
Social media has intensified these dynamics exponentially. When anyone can function as a journalist, content becomes endless, with opinion and fact intermingling seamlessly. The algorithm rewards sensationalism, creating an environment where misinformation thrives.
MIT researchers have found that false news can spread up to ten times faster than accurate reporting on social media platforms. When misinformation goes viral, corrections rarely receive the same attention or credibility. In what amounts to a contest between “false but interesting” and “true but boring,” the interesting content almost invariably prevails.
The platforms’ algorithmic design exacerbates the problem. Social media algorithms are engineered to maximize user engagement, keeping people online longer to view more targeted advertisements. These systems analyze user behavior—likes, shares, viewing habits—to deliver content likely to provoke reactions, regardless of accuracy.
A 2024 study from Indiana University revealed that just 0.25% of X users were responsible for between 73% and 78% of all tweets considered low-credibility or misinformation. Some of these accounts carried verification badges, lending an air of legitimacy to false information.
TikTok presents particular concerns given its predominantly young user base. An investigation by The Guardian found that more than half of the platform’s top mental health videos contained misinformation, ranging from harmless but ineffective advice to potentially dangerous recommendations about mental illness treatments. With 63% of American teenagers using TikTok, these inaccuracies can significantly impact young people’s wellbeing and perspectives.
The proliferation of misinformation raises serious questions about democratic participation. An informed citizenry depends on reliable access to accurate information, yet social media’s amplification of falsehoods makes it increasingly difficult for voters to stay accurately informed and deepens political division.
Technology companies bear significant responsibility for addressing these issues. As the architects of these powerful information networks, platforms have both the resources and expertise to develop solutions. Similarly, companies creating AI tools that generate deepfake content should provide users with the knowledge and tools to identify synthetic media.
Beyond misinformation, data privacy represents another critical concern. Social media companies collect extensive user data with minimal regulatory oversight, selling this information to third parties for advertising, algorithms, and AI training. These practices not only risk exposing personal information but provide companies with the insights needed to keep users, particularly children, engaged for longer periods.
As young people increasingly rely on the internet for education, communication, and information, stronger protections against both misinformation and data exploitation have become essential safeguards for the digital generation.