The business of disinformation has transformed from a primarily political phenomenon into a thriving economic ecosystem, fueled by the profit-driven mechanics of social media and digital advertising. What once operated as propaganda has evolved into an industrial complex where financial incentives drive the creation and distribution of misleading content.
At its core, the internet’s business model operates on a simple equation: engagement generates revenue. Content creators quickly learned that provocative, emotionally charged material attracts the most interaction. Research in marketing confirms that shocking, highly emotional, and polarizing content—particularly that which divides audiences into opposing groups—consistently generates higher engagement. This engagement-driven approach feeds directly into social media algorithms that further amplify such content.
The financial structure supporting disinformation resembles a sophisticated supply chain. At the production level, various actors including state entities, politicians, and influencers create misleading narrative campaigns designed to manipulate public opinion. Platform algorithms and recommendation systems serve as intermediaries, amplifying this content to wider audiences.
Operating alongside these visible participants is a shadow industry of “bot farms” that deploy fake accounts and automated scripts with two primary objectives: committing advertising fraud and amplifying disinformation on demand. Completing this ecosystem are advertisers whose spending, often unknowingly, finances the entire operation through poorly regulated advertising technology (AdTech) systems.
The economic scale of this industry is substantial. A joint report from the Carter Center and McCain Institute reveals that over 81% of traffic to known disinformation sources flows to sites with direct access to programmatic advertising networks. According to NewsGuard, approximately 1.68% of digital advertising budgets—roughly $2.4 billion in the US and $6 billion globally—is redirected to fake news websites.
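As a rough consistency check on the figures above: if $2.4 billion (US) and $6 billion (global) each represent 1.68% of total digital advertising spend, the implied market sizes can be computed directly. This is a back-of-the-envelope sketch using only the numbers cited in the article, not data from the underlying reports.

```python
# Back-of-the-envelope check of the NewsGuard figures cited above.
SHARE = 0.0168           # fraction of ad budgets reaching fake news sites
us_diverted = 2.4e9      # USD diverted to fake news sites in the US
global_diverted = 6.0e9  # USD diverted globally

# Implied total digital ad markets consistent with those figures
implied_us_total = us_diverted / SHARE
implied_global_total = global_diverted / SHARE

print(f"Implied US digital ad market: ${implied_us_total / 1e9:.0f}B")
print(f"Implied global digital ad market: ${implied_global_total / 1e9:.0f}B")
```

The implied totals (roughly $143 billion for the US and $357 billion globally) are broadly in line with published estimates of digital ad spend, so the cited percentages and dollar amounts are at least internally consistent.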
Investigative journalism has documented how advertisements from reputable companies frequently appear on websites spreading misleading information, with technology platforms continuing to collect advertising revenue regardless of the content being monetized. Meanwhile, controversial influencers leverage inflammatory content to build audiences that can be monetized through podcasts, personal branding, and even political influence.
Tech giants play a central role in this economy. Companies like Google dominate search, video, and display advertising markets, while Meta controls social media advertising and Amazon leads in e-commerce advertising. Despite public commitments to combat misinformation, these companies face an inherent conflict of interest—their business models depend on the very user engagement that controversial content reliably generates.
Meta’s recent decision to scale back fact-checking cooperation in the United States exemplifies this tension. As long as AdTech companies continue distributing advertising spending without meaningful accountability or oversight, the economic incentives to spread disinformation will remain firmly in place.
Addressing this challenge requires viewing disinformation not as an accidental byproduct but as a predictable outcome of current digital advertising and influencer marketing models. Researchers studying market-oriented disinformation propose borrowing established regulatory mechanisms from financial markets: Know Your Customer (KYC), Duty of Care, and Due Diligence.
KYC principles would require verification of who funds advertising campaigns, limiting the flow of “dark money” into digital influence operations. A Duty of Care framework would establish legal responsibility for marketers to ensure their spending doesn’t inadvertently support disinformation or hate speech. Due Diligence would require advertising agencies and AdTech intermediaries to implement processes preventing client budgets from funding harmful content or being wasted on fraudulent activity.
Advocacy organizations focused on digital marketing accountability argue that implementing such frameworks could significantly reduce both the spread of disinformation and the financial incentives that currently make harmful content profitable. By disrupting the economic model that makes disinformation lucrative, these interventions target the root causes rather than merely addressing symptoms.
7 Comments
The ‘engagement-driven’ model of social media is a double-edged sword – it boosts interaction but also amplifies polarizing, emotionally-charged content. Developing alternative metrics and incentives to promote more factual, nuanced discourse is crucial.
Agreed. Platforms need to rethink their algorithms to deprioritize sensationalism and instead elevate authoritative, balanced information sources.
Fascinating article on the complex business behind disinformation. It’s concerning how profit motives and social media algorithms can fuel the spread of misleading content. Countering this will require multi-pronged strategies targeting both the producers and the platforms.
The article highlights the complex, multi-faceted challenge of combating disinformation. Holistic solutions involving tech companies, policymakers, researchers, and the public will be needed to tackle this growing threat to informed discourse.
Absolutely. Collaboration across stakeholders is key to developing effective, long-term strategies to address the systemic issues underlying the disinformation economy.
This ‘disinformation economy’ is a troubling phenomenon. I’m curious to learn more about the specific strategies and policy solutions being proposed to address the financial incentives driving the creation and distribution of misleading content.
As someone interested in the mining/commodities sector, I wonder how disinformation campaigns might target these industries to influence public opinion and markets. Maintaining transparency and robust fact-checking will be vital to counter any such efforts.