Social media platforms are facing a firestorm of criticism for allegedly exploiting user anger to drive engagement, according to whistleblowers interviewed for a recent BBC investigation, as reported by Times UK. Former employees from Facebook, X (formerly Twitter), and TikTok have come forward with claims that these platforms systematically prioritize content that triggers strong emotional reactions, regardless of potential harm.
Matt Motyl, who worked as a senior researcher at Facebook and Meta, revealed how company executives, including CEO Mark Zuckerberg, rushed the development of Reels to compete with TikTok’s growing popularity. This happened despite internal data showing the video format generated significantly higher rates of bullying, hate speech, and violent content compared to standard feeds.
“The political content that gets the most engagement is typically misinformation… it’s typically very toxic,” Motyl explained. He described how internal safety teams frequently clashed with product managers whose performance metrics and bonuses were tied to user engagement statistics rather than platform safety.
The problem extends beyond Meta’s properties. Lisa Jennings Young, who headed design, trust, and safety at Twitter from 2019 to 2022, characterized the platform as operating on a “rage-based business model.” Another former Twitter employee, Marc Burrows, who worked on the platform’s curation team, pointed to significant changes after Elon Musk’s takeover.
According to Burrows, Musk dismantled safeguards that had been designed to limit the spread of unverified or harmful content. He cited the recent Southport riots in the UK as a prime example, where unverified information about the attacker spread rapidly across the platform, contributing to public unrest and violence.
“It’s complete gaming of freedom of speech,” Burrows said, alleging that Musk’s control over the algorithm allows for selective amplification of some content while suppressing other material.
The BBC investigation, led by social media investigations correspondent Marianna Spring, featured interviews with numerous former employees across the major platforms. One anonymous TikTok trust-and-safety worker described receiving instructions to permit content that bordered on harmful—including material linked to terrorism, sexual violence, abuse, and human trafficking—specifically to drive user engagement.
Ruofan Ding, who worked as a machine learning engineer at TikTok, explained how the platform’s recommendation algorithms evolved to make conspiracy theories and problematic content increasingly visible, particularly during extended browsing sessions when the algorithm has gathered more user preference data.
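To illustrate the dynamic Ding describes, consider the toy ranking sketch below. It is purely hypothetical: none of the names, signals, or weights come from TikTok’s actual systems. It shows how a ranker that optimizes solely for predicted engagement, and that leans harder on its learned preference signals as a session lengthens, would naturally push provocative content upward over time:

```python
# Hypothetical illustration only: a toy engagement-weighted ranker.
# None of these fields or weights are drawn from TikTok; they sketch
# how optimizing purely for predicted engagement can surface
# provocative content more often as session data accumulates.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_watch_time: float   # model's estimate, in seconds
    predicted_reactions: float    # expected comments + shares
    provocativeness: float        # 0.0-1.0 proxy for emotional charge

def engagement_score(post: Post, session_minutes: float) -> float:
    """Score a post purely on predicted engagement.

    The longer the session, the more this (hypothetical) model trusts
    its own preference signals, so the provocativeness term is weighted
    more heavily -- mirroring the claim that problematic content grows
    more visible during extended browsing.
    """
    confidence = min(session_minutes / 60.0, 1.0)  # saturates after an hour
    base = post.predicted_watch_time + 2.0 * post.predicted_reactions
    return base * (1.0 + confidence * post.provocativeness)

def rank_feed(posts: list[Post], session_minutes: float) -> list[Post]:
    # Pure engagement ranking: note there is no safety penalty term.
    return sorted(posts,
                  key=lambda p: engagement_score(p, session_minutes),
                  reverse=True)
```

The key point of the sketch is what is absent: with no safety penalty in the objective, the only pressure on the ranking is engagement, which is precisely the design choice the whistleblowers criticize.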
Internal documents from Meta revealed particularly troubling metrics, according to Spring’s reporting. The company measured success partly by tracking outrage, with internal data showing that posts generating negative comments were more likely to attract traffic and engagement. An anonymous Meta engineer described a culture of paranoia and reactive decision-making driven by competitive pressure from TikTok, which led to progressively looser content standards.
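As a hedged illustration of the kind of metric Spring describes, invented here for exposition rather than taken from Meta’s internal documents, a dashboard that measures success as raw interaction counts treats angry comments as a positive signal:

```python
# Hypothetical metric sketch: if "success" is total interactions,
# negative comments count toward it just like positive ones, so
# outrage-driving posts look like top performers.

def interaction_score(likes: int, shares: int,
                      positive_comments: int, negative_comments: int) -> int:
    # Every interaction counts equally toward "engagement".
    return likes + shares + positive_comments + negative_comments

post_a = interaction_score(likes=120, shares=10,
                           positive_comments=40, negative_comments=5)
post_b = interaction_score(likes=60, shares=45,
                           positive_comments=20, negative_comments=230)

print(post_a, post_b)  # 175 vs 355: the outrage-heavy post "wins"
```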
When contacted for comment, the platforms defended their policies. TikTok emphasized that user safety is “our most important work,” pointing to the removal of nearly 90 million underage accounts and the implementation of automatic protections for teenage users. X stated it had made its algorithm open-source to promote transparency and healthy conversation. Meta referenced its strict policies and safety measures, including systems that monitor millions of Reels globally for potential violations.
The whistleblower testimonies highlight a troubling pattern across the industry where algorithmic design appears to prioritize business metrics over user wellbeing. Teenagers and vulnerable users can be repeatedly exposed to harmful content for hours, even after reporting or attempting to block it. One whistleblower issued a stark warning to parents, advising them to keep children away from these platforms whenever possible, arguing they are deliberately engineered to be addictive.
The revelations come amid growing regulatory scrutiny of social media companies in multiple countries, with lawmakers increasingly concerned about the societal impacts of engagement-driven algorithms and their influence on public discourse and mental health.