In a significant development at the intersection of cybersecurity and international defense, Cyabra Strategy Ltd. has announced its contribution to a groundbreaking report commissioned by the NATO Strategic Communications Centre of Excellence (NATO StratCom COE). The report, released on February 11, 2026, examines the evolving landscape of AI-enabled influence operations and disinformation campaigns across social media platforms.
The newly published study, titled “Social Media Manipulation for Sale, 2025 Experiment on Platform Capabilities to Detect and Counter Inauthentic Social Media Engagement,” reveals concerning findings about the commercial market for fake social media engagement. Cyabra’s research illuminates what experts are calling a “new frontier” in AI-generated bot networks that can convincingly mimic authentic human behavior online.
NATO StratCom COE, a multinational organization dedicated to strengthening the Alliance’s strategic communications capabilities, selected Cyabra for the research based on the company’s specialized technology and expertise in detecting coordinated inauthentic behavior. The collaboration underscores Cyabra’s growing prominence in global efforts to combat disinformation.
The report’s conclusions paint a troubling picture: manipulation of online discourse remains relatively easy to execute yet increasingly difficult to detect and prevent. This reality poses direct threats to democratic institutions, geopolitical stability, and public trust in digital information. As artificial intelligence technology becomes more sophisticated, the cost of generating credible fake personas has dropped dramatically, allowing hostile actors to deploy influence operations with greater speed, persuasiveness, and stealth.
According to Cyabra’s analysis, the disinformation landscape has undergone a critical shift from traditional high-volume bot activity to more sophisticated, AI-enabled operations. Modern inauthentic campaigns now rely heavily on context-aware, multilingual content generated at scale using artificial intelligence, including both text and visuals that match the tone of targeted discussions.
The report also identifies a move toward lower-volume, distributed activity that reduces detectable coordination signals – making these operations harder to spot using conventional methods. Rather than operating in isolated spam networks, these sophisticated bots strategically insert themselves into high-visibility threads, particularly under posts by influencers, journalists, and public figures. This represents a shift toward what Cyabra terms “in-conversation influence.”
Perhaps most concerning is the evolution toward more organic network patterns, with fake accounts not just interacting with each other but engaging with authentic users and communities. This blending makes traditional detection methods significantly less effective.
Dan Brahmy, CEO of Cyabra, expressed pride in the company’s role, stating, “We are honored to have been commissioned by NATO StratCom COE to provide the analytical framework for this year’s investigation. Cyabra remains committed to equipping public and private sector leaders with the tools necessary to uncover these threats and protect the integrity of the information ecosystem.”
Dr. Gundars Bergmanis-Korats, AI Laboratory Chief at NATO Strategic Communications Centre of Excellence, emphasized the significance of Cyabra’s contribution: “This report underscores the need to prioritize cross-platform behavioral detection by identifying synchronized patterns in timing, tone, and relational dynamics, as these increasingly indicate sophisticated, AI-enabled manipulation. Cyabra’s research and analytical support were instrumental in helping us test these dynamics at scale and translate complex platform behavior into actionable insights.”
The full report is now available for download from the NATO StratCom COE website.
In corporate developments, Cyabra has entered into a business combination agreement with Trailblazer Merger Corporation I (NASDAQ: TBMC), a special purpose acquisition company (SPAC). If completed, the merger would take Cyabra public, positioning the company to expand its capabilities as threats in the information ecosystem continue to evolve.
Cyabra has established itself as a key player in restoring trust and authenticity in digital spaces, providing solutions that help global enterprises and governments analyze actors, behaviors, and content. The company specializes in translating evidence of online manipulation into clear mitigation strategies at scale, enabling institutions to respond effectively to coordinated disinformation campaigns.
As AI technologies continue to advance, the findings in this NATO-commissioned report highlight the urgent need for more sophisticated detection methods and cross-platform cooperation to preserve the integrity of online discourse in democratic societies.