Cyabra Strategy Ltd. has contributed to a significant new report published by the NATO Strategic Communications Centre of Excellence (NATO StratCom COE), revealing alarming developments in AI-powered disinformation tactics. The report, titled “Social Media Manipulation for Sale: 2025 Experiment on Platform Capabilities to Detect and Counter Inauthentic Social Media Engagement,” examines the evolving commercial market for inauthentic social media engagement.
NATO StratCom COE commissioned Cyabra, a real-time disinformation detection company, to investigate whether existing platform detection models remain effective against sophisticated AI-enabled influence operations. The findings paint a concerning picture of a “new frontier” in disinformation tactics that are increasingly difficult to identify.
According to the report, manipulation remains disturbingly accessible and challenging to prevent, posing direct threats to democratic resilience, geopolitical stability, and public trust in digital information. The research reveals that as AI technology advances, the cost of generating credible fake personas has dropped substantially, allowing hostile actors to deploy more persuasive content with less detectable coordination patterns.
Cyabra’s research identifies a critical shift in the threat landscape: inauthentic operations have evolved from high-volume amplification behavior to more subtle, human-like accounts designed to blend into authentic online communities. These sophisticated operations now rely on context-aware, multilingual content generated at scale using AI, including convincing visuals and text that match the tone of targeted discussions.
“Modern disinformation actors are moving away from obvious spam networks toward strategic infiltration of genuine conversations,” explained Dan Brahmy, CEO of Cyabra. “Instead of operating in isolated echo chambers, these AI-powered accounts now strategically insert themselves into high-visibility threads, often placing carefully crafted comments under posts by influencers, journalists, and public figures.”
The research further indicates that these operations now display more organic network patterns, with fake accounts interacting not just with each other but also with authentic users and communities. This approach significantly reduces detectable coordination signals that traditional safeguards might identify.
Dr. Gundars Bergmanis-Korats, AI Laboratory Chief at NATO Strategic Communications Centre of Excellence, emphasized the importance of cross-platform behavioral detection. “Identifying synchronized patterns in timing, tone, and relational dynamics increasingly indicates sophisticated, AI-enabled manipulation,” he noted. “Cyabra’s research and analytical support were instrumental in helping us test these dynamics at scale and translate complex platform behavior into actionable insights.”
The findings come at a crucial time when social media platforms face mounting pressure to combat disinformation, particularly as several major elections approach globally. The research suggests that current platform detection methods may be inadequate against these evolving threats, potentially leaving democratic processes vulnerable to interference.
Cyabra’s contribution to the NATO report solidifies its position as a global leader in the fight against disinformation. The company provides decision-grade clarity in contested information environments, enabling institutions to respond effectively to coordinated online manipulation.
The company has recently entered into a business combination agreement with Trailblazer Merger Corporation I (NASDAQ: TBMC), a blank-check special-purpose acquisition company, potentially signaling expansion plans to scale its technology as disinformation threats continue to evolve.
The full report is available for download from the NATO StratCom COE website, offering detailed insights into the mechanics of modern disinformation campaigns and recommendations for countering these sophisticated threats.
As AI-powered disinformation techniques continue to advance, the collaboration between organizations like Cyabra and NATO represents a crucial step in developing effective countermeasures to protect information integrity in an increasingly complex digital landscape.
Comments
This report underscores the critical importance of investing in robust disinformation detection and mitigation strategies. As AI advances, the threats to democratic institutions and public trust will only intensify. Kudos to NATO for commissioning this important research.
This is a concerning report on the evolving disinformation landscape and the growing threat of AI-powered manipulation tactics. It’s crucial that digital platforms and policymakers stay vigilant and enhance their detection capabilities to safeguard public trust and democratic resilience.
The declining cost of generating credible fake personas is a worrying trend that could empower more bad actors to spread disinformation at scale. I hope this report prompts policymakers to strengthen regulations and platforms to enhance their detection capabilities.
Absolutely. Addressing this issue will require a concerted effort from all stakeholders to stay ahead of the curve and safeguard the integrity of online discourse.
Interesting to see NATO commissioning research into these emerging AI-driven disinformation tactics. As technology continues to advance, it’s clear that the battle against online manipulation will only become more challenging. Proactive measures and cross-stakeholder collaboration will be key.
Agreed. Staying ahead of the curve on these sophisticated tactics will require ongoing innovation and a multi-pronged approach involving platforms, governments, and civil society.
Fascinating to see NATO taking a proactive approach to studying these emerging AI-driven disinformation tactics. As the report highlights, the evolving commercial market for inauthentic social media engagement poses serious risks that need to be urgently addressed.
The declining cost of generating credible fake personas is a worrying trend that could empower more bad actors to spread disinformation at scale. I hope this report prompts platforms and policymakers to strengthen their detection and mitigation efforts.
This report underscores the urgent need for robust, cross-stakeholder collaboration to combat the rising threat of AI-powered disinformation. Kudos to NATO and Cyabra for shedding light on these concerning developments and the challenges ahead.
The findings about the growing accessibility of AI-powered disinformation tactics are quite alarming. It’s clear that the battle against online manipulation is far from over, and continued vigilance and innovation will be essential to protect the public.