Information Warfare: How Disinformation Campaigns Shape Public Opinion
Information warfare has become a global battlefield, with everyone who ventures online unwittingly drafted into conflict. As nations, organizations, and bad actors compete to control narratives, ordinary citizens face a barrage of deliberately misleading content designed to manipulate public opinion.
Unlike misinformation, which may be shared unwittingly, disinformation is strategically created to serve specific agendas. Its goals are clear: foment distrust, destabilize institutions, discredit legitimate sources of information, defame opponents, and undermine knowledge sources like science and journalism.
Major world powers routinely engage in these campaigns. Russian operatives have manipulated celebrity images to draw attention to anti-Ukraine propaganda, while Meta recently warned that China has significantly intensified its disinformation operations across social platforms. Though information warfare has historical precedents, the internet has dramatically amplified its reach and impact.
“The digital landscape has transformed disinformation from a targeted intelligence operation into a mass phenomenon,” explains Dr. Sarah Kreps, professor of government and technology policy at Cornell University. “The barriers to entry are lower, while the potential audience is exponentially larger.”
Foreign governments, internet trolls, extremist groups, profit-seeking entrepreneurs, and even specialized disinformation agencies exploit online channels to spread questionable content. These efforts intensify during periods of vulnerability – civil unrest, natural disasters, health crises like the COVID-19 pandemic, and military conflicts – when public anxiety peaks and the hunger for information grows.
Common Disinformation Tactics
Disinformation agents employ several recognizable strategies to spread their messages effectively:
“It’s just a joke” (Hahaganda) – Using humor, memes, or political comedy to downplay serious matters, attack opponents, minimize violence, or dehumanize groups. When challenged, perpetrators deflect criticism by claiming critics lack a sense of humor or are overly sensitive.
“Secret information” (Rumor-milling) – Claiming exclusive access to concealed truths with phrases like “The media won’t report this” or “The government doesn’t want you to know.” These messages often include calls to share widely, creating artificial urgency around unverified claims.
“People are saying” (False authorities) – Manufacturing credibility through various impersonation techniques, including sympathetic anecdotes, “concerned citizens,” alleged converts who changed positions, or fabricated experts. These personas may be entirely fictional or represent real individuals whose expertise is misrepresented.
“It’s all a conspiracy” – Constructing elaborate conspiracy narratives involving malevolent forces engaged in covert actions. These stories leverage past confirmed conspiracies to validate new unfounded claims and typically aim to delegitimize knowledge-producing institutions like universities, research facilities, government agencies, and news organizations.
“Good vs. evil” framing – Painting complex issues as simple moral battles, using accusations of extreme immorality to justify aggressive positions. Russia, for instance, regularly labels opponents as Nazis, pedophiles, or Satanists while portraying its own forces as humanitarian helpers.
“False dichotomy” narratives – Presenting issues as having only two possible positions – theirs (the correct one) or an obviously wrong alternative. This tactic eliminates nuance and pressures audiences to choose sides while dismissing those who disagree as uninformed or deceived.
“Whataboutism” and victimhood claims – Deflecting criticism by pointing to others’ alleged misdeeds or claiming to be responding to prior wrongs. This competitive victimhood serves to justify actions that might otherwise appear indefensible.
Protecting Yourself in the Information War
Media literacy experts recommend approaching online content with the same skepticism you’d apply to advertising claims. Before accepting information, verify sources, cross-check with trusted outlets, and be particularly wary of content designed to trigger strong emotional responses.
“The most effective disinformation often contains grains of truth mixed with falsehoods,” notes Claire Wardle, co-founder of the Information Futures Lab. “It’s designed to exploit existing divisions and anxieties within societies.”
As digital technologies advance, including AI-generated content, distinguishing fact from fiction will require increased vigilance. Social media platforms continue implementing measures to combat disinformation, but the responsibility ultimately falls on users to critically evaluate the information they consume—and crucially, what they choose to share.
The battle for information integrity has real-world consequences for democratic institutions, public health, and international relations. As disinformation techniques evolve, so must our collective ability to recognize and resist them.