YouTube Monetizes Climate Misinformation Despite Own Policies, Investigation Finds
A new investigation by Spanish fact-checking organization Fundación Maldita.es has revealed that YouTube consistently violates its own climate misinformation policies by monetizing content that contradicts scientific consensus on climate change.
The report, titled “YouTube Lies: How the Largest Video Platform Finances Climate Misinformation, Going Against its Own Policies and the EU Digital Services Act,” documents how the platform continues to run advertisements on videos containing debunked climate claims, despite explicit prohibitions in its content policies.
Researchers identified 20 YouTube channels with a combined 21 million subscribers that spread climate misinformation previously fact-checked by Maldita. These aren’t fringe creators operating in obscure corners of the platform—half of them rank among Spain’s top 50 most subscribed news and politics channels according to analytics service SubSub.
The investigation found videos containing climate misinformation that had collectively garnered more than 3 million views. All these videos displayed advertisements, generating revenue for both content creators and YouTube itself, in direct violation of the platform’s published guidelines.
“This pattern suggests YouTube’s climate misinformation policies exist primarily on paper,” said Clara Jiménez Cruz, co-founder of Maldita.es. “When it comes to actual enforcement, particularly when ad revenue is at stake, our findings indicate a troubling disconnect between policy and practice.”
The researchers went further by formally reporting these videos to YouTube for violating its advertising rules. The platform’s response—or lack thereof—revealed an additional layer of non-compliance. In all 20 cases, YouTube neither took action on the content nor replied to the reports submitted by Maldita.es.
This silence represents what the organization calls “a flagrant and systematic violation of the EU Digital Services Act,” which legally requires platforms to “inform complainants without undue delay of their reasoned decision and of the possibility of out-of-court dispute settlement.” More than a month after filing these reports, Maldita.es has received no response from YouTube.
The investigation also highlighted a contradictory aspect of YouTube’s approach. Many of the videos in question already display YouTube’s climate information banners, which according to the platform’s 2025 Digital Services Act risk assessment, appear on “videos related to topics prone to misleading information.” This suggests YouTube’s systems had already flagged these videos as potentially problematic—yet the platform continued to monetize them anyway.
“YouTube had clearly identified these videos as high-risk for misinformation, added informational banners, but still allowed them to generate revenue,” explained Miguel Crespo, research director at Maldita.es. “This undermines any claim that they simply lacked the capacity to identify problematic content.”
The findings raise serious questions about the efficacy of content moderation at one of the world’s largest information platforms. If YouTube’s climate misinformation policies have no meaningful enforcement mechanism, they effectively create a perverse incentive structure where creators are financially rewarded for producing misleading content.
This investigation comes amid increasing regulatory scrutiny of major tech platforms in the European Union. The Digital Services Act, which came into full force for very large online platforms in February 2024, imposes significant obligations regarding content moderation, transparency, and user rights.
For climate scientists and activists, the findings represent another challenge in combating misinformation. Dr. María Rodríguez, climate researcher at Universidad Complutense de Madrid, called the situation “deeply concerning.”
“When scientific consensus is presented alongside misleading content as if they’re equally valid viewpoints, and both are monetized, it creates a false equivalence that damages public understanding of climate science,” she said.
YouTube parent company Alphabet has yet to respond to the investigation’s findings, but the report adds to growing evidence that major tech platforms continue to struggle with—or deliberately avoid—enforcing their own content policies when they conflict with revenue generation.
9 Comments
The fact that some of the top channels spreading climate misinformation are being monetized is very troubling. YouTube needs to seriously re-evaluate their approach to content moderation.
It’s disappointing to see YouTube profiting from climate misinformation, especially given their stated commitment to combating the spread of false and harmful content. They need to do better.
It’s concerning to see such a large platform like YouTube continuing to monetize climate misinformation, despite their stated policies. This undermines public trust and understanding of the science.
YouTube should be held accountable for failing to uphold their own standards. Consistent enforcement is key to preventing the spread of harmful disinformation.
This investigation highlights the challenges in regulating online content and the need for stronger enforcement mechanisms. Platforms must take more responsibility for the information they amplify.
I agree, platforms can’t just hide behind outdated policies. They need to proactively monitor and remove misinformation, especially on critical issues like climate change.
This report underscores the ongoing struggle to hold big tech platforms accountable for the information they amplify. Stronger regulations and enforcement are clearly needed in this area.
Interesting investigation. I’m curious to see how YouTube responds and what steps they take to enforce their own climate misinformation policies more effectively.
Agreed, this seems like a clear violation of their policies. The platform needs to be more proactive in monitoring and removing such content.