Private messaging platforms have emerged as potent vehicles for disinformation campaigns, even as regulatory attention remains focused primarily on open social media networks. This shift in the digital battleground has been clearly demonstrated in recent elections and crisis situations worldwide.

During Brazil’s 2024 municipal elections, manipulated political content spread rapidly through WhatsApp groups, reaching voters through what appeared to be trusted channels. In war-torn Ukraine, Telegram serves a dual purpose – providing critical emergency communication while simultaneously functioning as a conduit for Russian disinformation. Lebanese citizens have experienced similar dynamics on messaging platforms during periods of conflict.

Despite these documented concerns, private messaging services continue to operate in a regulatory gray area, according to a new report from the Forum on Information and Democracy. The study, co-chaired by Luxembourg and Ukraine with support from the NYU Stern Center for Business and Human Rights, represents a year-long collaborative effort involving government authorities, civil society organizations, and researchers.

The report identifies a fundamental challenge: current governance frameworks rely on increasingly outdated distinctions between “public” and “private” communication that fail to reflect how messaging platforms actually function today.

“What began as simple one-to-one or small group exchanges have evolved into complex ecosystems,” explained one researcher involved with the study. “Modern messaging platforms now incorporate broadcast channels, large group chats with thousands of participants, business messaging systems, advertising features, and AI-powered functions.”

This evolution has created hybrid spaces where information can circulate widely while maintaining the veneer of private, trusted communication. Most current regulations addressing disinformation fail to account for this reality, with platforms designated as “private” often explicitly excluded from oversight.

The research team’s analysis of regulatory frameworks across twelve jurisdictions found that the UK Online Safety Act (OSA) is a rare exception. The OSA regulates all “user-to-user services” and imposes duties to address both “foreign interference” and false information that could cause physical or psychological harm.

Yet even this progressive legislation leaves significant questions unanswered about how encrypted messaging services can comply without compromising their security features. This highlights the central regulatory dilemma: how to address disinformation on messaging platforms without undermining the encryption that protects legitimate private communications.

The report outlines several key recommendations to address this challenge. First, it proposes attaching regulatory obligations to specific platform features rather than categorizing entire platforms as a single type of service. This feature-based approach recognizes that a one-to-one encrypted chat presents fundamentally different risks than a searchable broadcast channel or mass-forwarding capability.

“Treating these features differently isn’t weakening privacy protection – it’s essential for proportionate governance,” noted one policy expert familiar with the report. The European Commission’s recent designation of WhatsApp Channels as a Very Large Online Platform (VLOP) under the Digital Services Act represents a first step toward this feature-specific approach.

The second recommendation emphasizes safeguarding encryption in genuinely private communications. End-to-end encryption remains essential for privacy, free expression, and security – particularly for journalists, activists, and users in conflict zones or under repressive regimes. The report suggests limiting content governance obligations to non-encrypted or public-facing functionalities where platforms already exercise moderation control.

Finally, the report calls for initiatives strengthening societal resilience against disinformation. It highlights successful examples like Ukraine’s “Filter” project, which combines formal education with fact-checking partnerships, and Ireland’s Media Literacy Ireland Network, which coordinates efforts across broadcasters, NGOs, and regulators.

For platform companies, the recommendations include developing more in-app tools to enhance user awareness, such as access to fact-checking resources and clearer distinctions between private and broadcasting features.

As messaging platforms continue to evolve and shape democratic discourse, the report concludes that effective governance requires frameworks that reflect how these services actually function in practice – not how they were originally conceived. This approach aims to balance information integrity with privacy protection, recognizing that neither needs to be sacrificed for the other when regulation is thoughtfully designed.
