Platforms Profit from Disinformation While Failing to Enforce Their Own Rules

Digital platforms are funding, knowingly or not, creators who spread false information, according to a new investigation by Fundación Maldita.es. Despite having policies against monetizing disinformation, platforms such as YouTube, TikTok, and X (formerly Twitter) continue to allow content creators to profit from spreading falsehoods.

The investigation reveals a troubling economic ecosystem where disinformation thrives because it generates engagement. Emotionally charged and eye-catching false content attracts attention, which platform algorithms then amplify, creating a cycle that benefits both the creator and the platform financially.

“Disinformation creators have different motivations,” the report notes. “Some seek ideological gain, others simply sow confusion, but many see disinformation as a straightforward path to making money.”

Most major platforms operate monetization programs that share advertising revenue with content creators. While their policies typically prohibit monetizing false information, Maldita.es found significant gaps between these stated rules and their enforcement.

On YouTube, researchers identified 20 channels with more than 21 million subscribers that regularly spread debunked climate misinformation. Despite YouTube’s explicit rules against monetizing content that contradicts scientific consensus on climate change, all these channels continue to display advertising—indicating they receive a share of ad revenue.

TikTok appears to have similar enforcement issues. The investigation found numerous accounts creating AI-generated videos of demonstrations and natural disasters to attract followers and qualify for monetization. One profile with over 38,000 followers posted more than 40 synthetic videos about snowfall in Russia, while others shared fabricated content about flooding in Gaza.

On X, more than 60 accounts that spread hoaxes about floods in Valencia displayed the blue checkmark, one of the requirements for monetizing content on the platform. Many of these accounts participated in coordinated campaigns to spread disinformation following environmental disasters.

The European Union’s Digital Services Act (DSA) identifies such monetization systems as potential risk factors, especially when they reward engagement regardless of content accuracy. Under the DSA, large online platforms must prevent risks when their design facilitates the spread of harmful content.

“Disinformation about climate-related issues can have serious consequences for public safety if it leads to decisions not based on facts,” the report warns. “It also prevents people from accessing reliable information and fully exercising their constitutional right to receive truthful information.”

A major challenge in addressing this issue is the lack of transparency. Platforms generally do not disclose which accounts are being monetized or to what extent. Meta provides only minimal information about accounts in its program and the total number of registered users, and Snapchat reports only the total revenue distributed among creators. Other major platforms provide virtually no data.

This opacity makes independent assessment nearly impossible, forcing researchers to rely on circumstantial evidence to identify monetized disinformation.

Fundación Maldita.es recommends that platforms take greater responsibility by establishing clearer rules about which content can be monetized and developing better enforcement capabilities. They also suggest that platforms should make information about monetized content accessible to users, similar to how paid partnerships must be disclosed.

The organization calls on regulatory authorities to ensure compliance with the Digital Services Act by investigating potential breaches and requiring platforms to implement adequate measures to reduce the risks associated with their recommendation algorithms and monetization programs.

The investigation highlights a critical question for platforms and regulators alike: How can digital ecosystems be designed to reward quality content rather than engagement-driven disinformation? As long as spreading falsehoods remains profitable, the problem is likely to persist despite policies meant to prevent it.


© 2026 Disinformation Commission LLC. All rights reserved.