
In a world where internet access and social media platforms are ubiquitous, security experts are increasingly alarmed by how these connectivity tools are being weaponized to spread discord and division at unprecedented speed and scale.

A recent United Nations global survey reveals that over 85% of respondents worry about disinformation’s impact, while 87% believe that misinformation, disinformation, and malinformation (MDM) campaigns have already negatively affected their country’s politics and will significantly influence future elections.

The consequences of these MDM campaigns extend far beyond digital spaces, making them attractive tools for a range of threat actors, including nation-states, advanced persistent threat (APT) groups, cybercriminals, and hacktivists, who increasingly employ deceptive tactics to pursue their objectives.

The evolution of misinformation—content that is false but not created with malicious intent—has a troubling history. In 2003, exaggerated reports about Iraq’s weapons of mass destruction contributed to the U.S.-led invasion that resulted in millions of displaced Iraqis and nearly 190,000 deaths. During the 2016 U.S. presidential election, the “Pizzagate” conspiracy theory, sparked by leaked emails, led an armed man to threaten employees at a Washington D.C. pizzeria. Throughout the COVID-19 pandemic, widespread health misinformation directly contributed to disease spread by undermining public health guidelines.

Today’s misinformation campaigns have grown more sophisticated as threat actors deliberately exploit social media echo chambers to propagate fake news. They manipulate algorithms, deploy deepfakes, and hijack trending topic pages to efficiently spread deceptive content.

Disinformation—deliberately false information designed to deceive—represents an even more calculated threat. State-sponsored actors, hacktivists, and criminal groups conduct global disinformation operations through propaganda and psychological warfare to influence elections and escalate geopolitical tensions.

Russian interference in the 2016 U.S. presidential election exemplifies how state-sponsored actors can leverage social media for multifaceted disinformation campaigns that erode public confidence in democratic institutions. Similar tactics were observed during the Brexit and Scottish independence referendums, with a U.S. Senate report identifying 150,000 Russian-linked Twitter accounts disseminating Brexit-related messages before the vote.

The 2017 French presidential election saw various actors spread doctored tweets and emails to undermine public trust in specific candidates, while ongoing Russian disinformation campaigns in Ukraine continue to shape narratives and exploit political and social divisions.

Malinformation, a more recent development, involves releasing truthful but private information with malicious intent. The 2012 LinkedIn data breach, which exposed millions of users’ passwords, led to widespread extortion attempts. The GamerGate controversy that began in 2014 escalated into a vicious online harassment campaign targeting women and marginalized communities in the gaming industry. During the 2019 Hong Kong protests, doxxing campaigns exposed private information of activists, police officers, and journalists.

These threats have increasingly migrated to the corporate world. In 2018, Broadcom Inc. received a forged memo supposedly from the U.S. Department of Defense requesting a review of its $19 billion acquisition of CA Technologies. Though quickly identified as fraudulent, the fake document briefly caused both companies' stocks to fall and publicly cast doubt on national security review processes.

MDM threats in the corporate sector aim to damage brand reputation, erode customer trust, and cause financial losses. Disinformation-as-a-Service (DaaS) models now allow malicious actors to purchase customized MDM campaigns tailored to specific objectives.

The intersection between MDM campaigns and cybersecurity manifests in several ways. Threat actors operate on the same terrain, using social media platforms as amplifiers while leveraging networking infrastructure and routing services to distribute malware. They employ similar attack methods, manipulating victims’ anxieties and emotions just as cybercriminals do with phishing lures. Both disinformation campaigns and cybercrime utilize illegal dark web transactions and various forms of fraud, often driven by financial incentives.

Robust cybersecurity practices can help protect organizations from these evolving threats. Endpoint and network monitoring solutions that continuously inspect traffic can surface suspicious patterns. Ongoing monitoring of information sources, social media channels, and online forums is equally critical for tracking the spread of false information, especially during crisis events.
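To make the monitoring idea concrete, here is a minimal sketch of one crude signal such tooling might look for: the same message posted verbatim by many distinct accounts within a short window, a common pattern in coordinated amplification. The post records, account names, and thresholds below are illustrative assumptions, not drawn from any real platform or product.

```python
from collections import defaultdict

# Hypothetical post records: (timestamp_seconds, account_id, text).
# A real pipeline would ingest these from a platform API or data feed.
posts = [
    (0,   "acct_01", "Breaking: candidate X caught in scandal!"),
    (30,  "acct_02", "Breaking: candidate X caught in scandal!"),
    (45,  "acct_03", "Breaking: candidate X caught in scandal!"),
    (60,  "acct_04", "Breaking: candidate X caught in scandal!"),
    (500, "acct_05", "Lovely weather today."),
]

def flag_coordinated_posts(posts, window_s=300, min_accounts=3):
    """Flag messages posted verbatim by many distinct accounts within a
    short time window -- one weak heuristic for coordinated amplification,
    not proof of a disinformation campaign on its own."""
    by_text = defaultdict(list)
    for ts, account, text in posts:
        by_text[text.strip().lower()].append((ts, account))
    flagged = []
    for text, entries in by_text.items():
        entries.sort()
        accounts = {account for _, account in entries}
        span = entries[-1][0] - entries[0][0]
        if len(accounts) >= min_accounts and span <= window_s:
            flagged.append(text)
    return flagged

print(flag_coordinated_posts(posts))
# -> ['breaking: candidate x caught in scandal!']
```

In practice, analysts combine many such signals (account age, posting cadence, near-duplicate text rather than exact matches) before treating activity as coordinated.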

Organizations should implement best practices including role-based access controls, multi-factor authentication, encryption, and secure coding practices to safeguard data integrity. Regular software patching and updates reduce vulnerabilities that malicious actors might exploit.

However, MDM campaigns are not solely a technical problem—they involve psychological manipulation and exploitation of cognitive biases. Security awareness training remains essential for educating employees about disinformation risks and teaching them to recognize and report suspicious activities.

As MDM campaigns continue to impact geopolitical, social, and corporate spheres, a multi-dimensional security strategy combining robust preventive measures with extended detection and response capabilities is vital for organizations navigating this complex threat landscape.



© 2026 Disinformation Commission LLC. All rights reserved.