In a significant advancement for cybersecurity intelligence, Cyabra Strategy Ltd. has contributed key research to a newly published NATO report examining the evolving landscape of social media manipulation and AI-driven disinformation campaigns. The announcement, made yesterday in New York, highlights growing concerns about the sophistication of inauthentic online behavior that can undermine democratic processes and geopolitical stability.
The NATO Strategic Communications Centre of Excellence (NATO StratCom COE) commissioned Cyabra to evaluate whether existing platform detection models remain effective against emerging, AI-enabled influence operations. The resulting report, titled “Social Media Manipulation for Sale: 2025 Experiment on Platform Capabilities to Detect and Counter Inauthentic Social Media Engagement,” provides an in-depth analysis of how easily malicious actors can purchase manipulation services across major social media platforms.
Cyabra’s research reveals a critical paradigm shift in disinformation tactics. Rather than relying on high-volume, obvious amplification behaviors, modern inauthentic operations are utilizing AI to create human-like fake accounts designed to infiltrate authentic communities and manipulate conversations from within trusted spaces.
“We’re witnessing a new frontier in disinformation,” said Dan Brahmy, CEO of Cyabra. “These networks no longer operate in isolation but strategically insert themselves into high-visibility threads, engaging directly with authentic users and communities to shape narratives from the inside.”
The research identified several sophisticated techniques being employed by these next-generation disinformation operations, including context-aware, multilingual content generated at scale with AI. These operations also feature AI-generated visuals and text that match the tone of targeted discussions, presenting a more natural appearance that evades traditional detection methods.
Additionally, these operations have shifted to lower-volume, distributed activity patterns that reduce detectable coordination signals, making them significantly harder to identify through conventional means. Rather than creating isolated spam loops, these fake accounts now strategically place crafted comments under posts by influencers, journalists, and public figures to maximize visibility and impact.
Dr. Gundars Bergmanis-Korats, AI Laboratory Chief at NATO StratCom COE, emphasized the growing threat: “This report underscores the need to prioritize cross-platform behavioral detection by identifying synchronized patterns in timing, tone, and relational dynamics, as these increasingly indicate sophisticated, AI-enabled manipulation.”
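To make the idea of "synchronized patterns in timing" concrete, here is a minimal, illustrative sketch (not taken from the report, and far simpler than any production system) of how near-lockstep posting times across accounts might be flagged. The account names, window, and threshold are all hypothetical; real detection, as the report notes, would combine timing with tone and relational signals.

```python
from itertools import combinations

def timing_overlap(a, b, window=60):
    """Fraction of timestamps in `a` that have a timestamp in `b` within `window` seconds."""
    matched = sum(any(abs(t - u) <= window for u in b) for t in a)
    return matched / len(a)

def flag_synchronized(posts_by_account, window=60, threshold=0.8):
    """Return account pairs whose posting times overlap suspiciously often.

    `posts_by_account`: dict mapping account id -> list of Unix timestamps.
    The window and threshold values here are illustrative placeholders.
    """
    flagged = []
    for a, b in combinations(posts_by_account, 2):
        ta, tb = posts_by_account[a], posts_by_account[b]
        # Require high overlap in both directions to reduce false positives.
        if min(timing_overlap(ta, tb, window), timing_overlap(tb, ta, window)) >= threshold:
            flagged.append((a, b))
    return flagged

# Hypothetical example: two accounts posting in near-lockstep vs. one organic account.
posts = {
    "acct_a": [100, 500, 900, 1300],
    "acct_b": [110, 505, 910, 1310],   # always within ~10s of acct_a
    "acct_c": [3000, 7000, 12000],     # unrelated timing
}
print(flag_synchronized(posts))  # [('acct_a', 'acct_b')]
```

Pairwise comparison like this is quadratic in the number of accounts, which is one reason low-volume, distributed operations are harder to catch: weaker signals must be checked across far more candidate pairs.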
The implications extend far beyond routine spam or promotional content. As AI technology lowers the cost of generating credible online personas and automates content orchestration across platforms, hostile actors can deploy influence operations more rapidly, with more persuasive content, and with fewer detectable patterns of coordination.
This evolution presents serious challenges for platforms, governments, and organizations seeking to maintain information integrity. The report concludes that manipulation remains alarmingly easy to execute and increasingly difficult to reliably prevent, with direct implications for democratic resilience and public trust in digital information environments.
Cyabra’s involvement with NATO StratCom COE solidifies its position as a leader in disinformation detection. The company provides decision-grade clarity in contested information environments, enabling institutions to respond proportionately and effectively to coordinated online manipulation campaigns.
In corporate developments, Cyabra has entered into a business combination agreement with Trailblazer Merger Corporation I (NASDAQ: TBMC), a blank-check special-purpose acquisition company, signaling potential expansion of its market presence and capabilities.
The full NATO report is available on the NATO StratCom COE website and offers comprehensive insights into the current state of social media manipulation and platform capabilities to combat these evolving threats.
As generative AI technologies continue to advance, the battle against sophisticated disinformation is likely to intensify, requiring new detection methodologies and cross-platform collaboration to safeguard information integrity in an increasingly complex digital landscape.
17 Comments
This is an important step in the right direction. The NATO StratCom COE’s collaboration with Cyabra to evaluate platform capabilities is a welcome initiative. Strengthening defenses against AI-enabled manipulation is essential for maintaining trust in online information.
It’s concerning to see how easily manipulation services can be purchased across major social media platforms. This underscores the need for stronger regulations and enforcement to combat AI-enabled disinformation campaigns.
Absolutely. Platforms, governments, and researchers must collaborate to stay ahead of these evolving threats and protect the integrity of online discourse.
As someone with a background in cybersecurity, I’m encouraged to see NATO taking such a proactive approach to addressing AI-driven disinformation. Collaborative efforts between governments, researchers, and platforms are essential for effective countermeasures.
I’m curious to learn more about the specific tactics and techniques used by these malicious actors. The report’s findings on the shift away from obvious amplification behaviors towards more human-like fake accounts are particularly intriguing.
Yes, understanding the mechanics behind these AI-driven disinformation campaigns is crucial. The report’s in-depth analysis will likely provide valuable insights for improving detection and countering efforts.
This is an important issue that deserves serious attention. The use of AI to create human-like fake accounts is particularly troubling and highlights the need for robust detection and response capabilities. I hope the NATO report’s findings will spur concrete actions to address this threat.
This is an important issue that needs to be addressed. AI-driven disinformation campaigns pose a serious threat to democratic processes and stability. The findings from this NATO report highlight the evolving tactics used by malicious actors to manipulate social media platforms.
Agreed, the use of AI to create human-like fake accounts is particularly concerning. Platforms must stay vigilant and continue improving their detection models to counter these sophisticated influence operations.
This is a concerning development, but I’m glad to see NATO and Cyabra taking it seriously. Maintaining the integrity of online discourse is crucial for preserving democratic processes and geopolitical stability.
Agreed. The findings underscore the need for a multi-stakeholder approach to combating these sophisticated influence operations. Vigilance and continuous improvement will be key.
As someone with an interest in mining and commodities, I’m concerned about the potential impact of these disinformation campaigns on related industries and markets. Accurate, reliable information is critical for investors and industry stakeholders.
Agreed. Widespread misinformation could lead to distorted market perceptions and suboptimal decision-making. Vigilance and proactive measures are needed to safeguard the integrity of these sectors.
I’m curious to learn more about the specific tactics and techniques used by these malicious actors. The report’s insights on the shift towards more human-like fake accounts could have important implications for the mining and commodities sectors.
Absolutely. Understanding the evolving nature of these disinformation campaigns is crucial for developing effective countermeasures and protecting the integrity of information in our industries.
The findings on the ease of purchasing manipulation services across major platforms are quite alarming. This underscores the need for platforms to drastically improve their detection and response capabilities.
Absolutely. Platforms must invest heavily in advanced AI and machine learning tools to stay ahead of these evolving threats. Relying on user reporting alone is no longer sufficient.