Climate misinformation continues to erode public trust in climate science even though most people accept that climate change is real and human-caused, according to experts convened at a recent Harvard Radcliffe Institute seminar. The gathering of ten specialists, including trust and safety professionals, science communicators, and AI researchers, explored why climate misinformation persists online despite broad consensus and what might be done about it.
Despite most Americans believing in climate change, major technology platforms have largely failed to effectively counter misinformation that undermines this consensus. The seminar, organized by University of Wisconsin-Madison professor Jodi Schneider and Northwestern University’s Dr. Rod Abhari, revealed significant structural problems in how platforms approach climate misinformation.
Platforms responded swiftly to events such as the January 6 Capitol insurrection and the spread of COVID-19 misinformation, yet they have been reluctant to enforce similar measures against climate falsehoods. This discrepancy stems from how trust and safety teams prioritize content moderation, focusing on three risk assessment criteria: imminence of harm, likelihood of occurrence, and severity of impact.
Climate misinformation clearly meets the thresholds for likelihood and severity—the potential harms are both certain and catastrophic—but lacks the immediacy that typically triggers platform action. The long-term, diffuse nature of climate change consequences doesn’t fit into frameworks designed for responding to immediate crises.
“Platforms respond to immediate violence or destruction, but not diffuse, long-term harms,” noted one participant speaking under the Chatham House Rule. “Additionally, there’s no authoritative institutional body equivalent to the CDC or WHO to provide clear enforcement guidance.”
TikTok exemplifies this problem. Despite having policies that ostensibly prohibit climate misinformation, enforcement remains minimal. Investigations have found that the platform fails to consistently remove climate change denial content, highlighting the gap between policy and practice.
The seminar identified a fundamental resource imbalance that compounds these issues. Fossil fuel industry resources for shaping public opinion vastly exceed funding available for climate communication. Organizations like Climate Nexus have seen their digital advocacy funding diminish, while over 1,600 oil and gas lobbyists attended COP30—approximately one in every 25 participants.
Social media’s engagement-driven business model exacerbates this asymmetry. Climate misinformation generates controversy and debate, which keeps users on platforms longer and increases advertising revenue. Fossil fuel interests can exploit these algorithmic preferences, while climate communicators lack comparable resources to compete for attention.
Researchers have documented sophisticated tactics employed by climate skeptics, including the amplification of fringe scientists who receive disproportionate engagement compared to mainstream researchers. These individuals often misinterpret legitimate studies to support false claims and reframe content moderation as censorship, positioning themselves as truth-tellers facing institutional suppression.
The timing couldn’t be worse, as most major tech companies have publicly scaled back their trust and safety teams in recent years, leaving fewer resources to address an expanding problem.
Rather than pursuing a single solution, seminar participants identified several leverage points that could collectively shift platform behavior. European regulations on climate-related claims could create compliance requirements with global effects. The United Kingdom’s Digital Markets, Competition and Consumers Act now allows sanctions for greenwashing without litigation—a model that could extend to platform-hosted content.
Advertiser accountability offers another pressure point. Campaigns highlighting climate misinformation adjacent to brand advertising could create business incentives for platform policy changes, an approach that has successfully forced platforms to demonetize problematic content in other contexts.
The development of alternative models on smaller, community-oriented platforms could demonstrate effective climate misinformation policies that larger platforms might eventually adopt. Success at a smaller scale could establish proof of concept and create competitive pressure.
The seminar also highlighted significant gaps between academic research, trust and safety practice, and policy advocacy. Academics often lack understanding of platform operational constraints, while trust and safety professionals rarely have access to climate science expertise.
Climate misinformation has tangible consequences today, from conspiracy theories leading to threats against FEMA workers during hurricane response to economic losses from delayed climate action. The property insurance crisis and infrastructure damage linked to climate change affect communities across America.
“Current trust and safety frameworks were never designed to address slow-moving, politically contested, institutionally complex challenges like climate misinformation,” concluded one participant. “The question is whether those frameworks can evolve—through regulatory pressure, market incentives, competitive dynamics, or institutional innovation—to meet the moment.”
For platforms, policymakers, and civil society, the message is clear: climate misinformation persists not because it’s technically difficult to address, but because existing incentive structures don’t demand action. Changing those structures requires sustained pressure across multiple fronts to prevent further erosion of public trust in climate science.