Social media platforms have largely failed to stop commercially available fake engagement, according to a recent NATO experiment that exposed significant vulnerabilities in platform defenses against coordinated manipulation.

The NATO Strategic Communications Centre of Excellence conducted extensive testing between September and November 2025, purchasing over 100,000 units of fake engagement from more than 30,000 inauthentic accounts across seven major platforms. The entire operation cost just 252 euros, highlighting how inexpensively social media manipulation can be conducted.

Four weeks after purchase, most platforms had failed to remove the majority of fake accounts and engagement. VKontakte performed best in account removal, eliminating 96% of identified fake profiles, though engagement from deleted accounts remained visible. X (formerly Twitter) removed 82% of fake accounts, while YouTube and Bluesky each removed 55%. Facebook removed 39%, Instagram 22%, and TikTok showed the weakest performance, removing only 4% of identified fake accounts.

When it came to removing fake engagement itself, X led the pack by eliminating 57% of purchased fake activity. YouTube removed 44%, while VKontakte eliminated 30% and Facebook 21%. TikTok and Instagram removed 17% and 16% respectively. Bluesky showed the poorest performance, removing none of the fake engagement purchased during the experiment.

The cost of manipulation varied significantly by platform. For a standardized package of 100 likes, 100 comments, 1,000 views, and 100 followers, Bluesky proved the cheapest at approximately 1.41 euros. Instagram cost around 1.73 euros, while X was the most expensive at approximately 12.08 euros. However, X views were exceptionally cheap, with researchers obtaining 156,083 views for just 10 euros.
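
For scale, a rough per-unit calculation using the figures above (the script below is purely illustrative, not from the report) shows how little a single fake interaction costs at these prices:

```python
# Back-of-the-envelope cost-per-unit arithmetic from the prices cited above.
# A "standardized package" = 100 likes + 100 comments + 1,000 views + 100 followers.

package_units = 100 + 100 + 1_000 + 100  # 1,300 engagement units

package_price_eur = {
    "Bluesky": 1.41,
    "Instagram": 1.73,
    "X": 12.08,
}

for platform, price in package_price_eur.items():
    cents_per_unit = price / package_units * 100
    print(f"{platform}: ~{cents_per_unit:.2f} euro cents per engagement unit")

# Views on X bought in bulk were cheaper still:
views, cost_eur = 156_083, 10
print(f"X bulk views: ~{cost_eur / views * 100:.4f} euro cents per view")
```

At these rates, even the most expensive platform in the sample prices a single fake interaction at well under one euro cent.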

The experiment also assessed platforms’ responsiveness to user reports of fake accounts. Facebook showed the highest response rate, removing 25% of reported fake accounts five weeks after reporting. Bluesky removed 23%, while YouTube and TikTok removed approximately 8% and 7% respectively. Instagram removed about 5%, and both VKontakte and X removed only about 2% of reported accounts.

In terms of transparency, TikTok was the only platform to engage directly with the experiment findings and publish detailed enforcement data during the reporting period. Meta reported removal actions for Facebook but provided no equivalent reporting for Instagram. YouTube and Bluesky published only partial annual figures, while X and VKontakte published no transparency or enforcement updates during the experimental period.

The experiment also revealed a concerning ability to purchase ready-to-use advertising accounts for Meta platforms, TikTok, and YouTube. These accounts, which cost significantly more than standard inauthentic accounts, enabled manipulation through platform advertising systems. YouTube advertising-ready accounts cost 12.93 euros, roughly 190 times the 0.067 euros charged for a standard account.

Financial analysis conducted by Latvia’s Financial Intelligence Unit identified substantial revenue flowing to manipulation service providers. One Russia-based provider received approximately $265,261 between September 2023 and October 2025, while a UK-based provider processed approximately $123,714 during the same period.

The investigation highlighted potential sanctions compliance concerns for Russia-based operators using major cryptocurrency exchanges. EU regulations prohibit providing crypto-asset services to Russian nationals, persons residing in Russia, or entities established in Russia.

Notably, the experiment identified a shift in bot-promoted content from political matters toward military themes following the 2024 election year. Facebook showed the highest volume of bot activity amplifying pro-China military content, while X demonstrated substantial bot amplification of posts portraying the Chinese military as superior to that of the United States.

Sophisticated AI-generated content posed additional challenges for platform detection systems. Automated workflows generated text through ChatGPT, created images and videos through Freepik’s API, and published content without human intervention. For just 10 euros, services generated between 40 and 2,500 pieces of AI content depending on type and quality settings.
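
To illustrate the mechanics, here is a minimal sketch of what such an unattended pipeline can look like. It is an assumption-laden illustration, not code from the report: the image endpoint and the publish_post() function are hypothetical placeholders, and only the OpenAI chat call uses a real client library.

```python
# Minimal sketch of an unattended text-to-post pipeline (illustrative only).
# Assumptions: IMAGE_API_URL and publish_post() are hypothetical placeholders
# standing in for an image-generation API and a platform posting API; the
# chat call mirrors the real openai client library (openai>=1.0).

import requests
from openai import OpenAI

IMAGE_API_URL = "https://example.invalid/v1/generate-image"  # hypothetical

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_text(topic: str) -> str:
    """Draft a short post on the given topic with an LLM."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Write a short social media post about {topic}."}],
    )
    return response.choices[0].message.content


def generate_image(prompt: str) -> bytes:
    """Request a matching image from a (hypothetical) image API."""
    resp = requests.post(IMAGE_API_URL, json={"prompt": prompt}, timeout=60)
    resp.raise_for_status()
    return resp.content


def publish_post(text: str, image: bytes) -> None:
    """Hypothetical stand-in for a platform's posting endpoint."""
    print(f"Publishing {len(image)} image bytes with text: {text[:60]}...")


if __name__ == "__main__":
    text = generate_text("military hardware")
    image = generate_image(text)
    publish_post(text, image)  # no human review at any step
```

The point of the sketch is the absence of any human checkpoint: once scheduled, text generation, image generation, and publication run end to end without intervention.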

The findings come as platforms face increasing regulatory scrutiny under the Digital Services Act, which became fully applicable in February 2024. The European Commission found TikTok and Meta in breach of transparency obligations in October 2025.

With bot fraud up 101% year-over-year in 2024, and 16% of it stemming from bots linked to AI tools, the experiment underscores the ongoing challenges platforms face in combating increasingly sophisticated manipulation techniques despite years of promises to address the issue.

6 Comments

  1. Oliver Miller on

    Yikes, these results are quite alarming. The fact that platforms like TikTok and Facebook were able to remove so few of the fake accounts is really disappointing. I guess it shows how profitable and easy this kind of bot activity is for bad actors. What do you think the solution needs to be?

    • Patricia N. Williams on

      I agree, the low removal rates are very worrying. I think platforms need to invest heavily in more advanced detection algorithms, user verification, and real-time monitoring to stay ahead of these tactics. But it’s clearly an ongoing arms race that they’re struggling to win so far.

  2. Amelia W. Thompson on

    This is a really eye-opening test by NATO. I’m surprised how little it cost to purchase so much fake engagement across these platforms. It highlights how pervasive and lucrative this kind of activity is. I wonder what the long-term implications are for the credibility of social media content.

    • That’s a great point. If platforms can’t effectively detect and remove this type of coordinated manipulation, it really undermines trust in anything shared on those platforms. It’s a major challenge they need to urgently address.

  3. Oliver Johnson on

    Wow, this is really concerning to hear. It’s shocking that these major platforms are still struggling to combat cheap, coordinated bot activity despite all their claims about advanced detection capabilities. I wonder what the underlying issues are – lack of investment, technical limitations, or something else?

    • Amelia Davis on

      You’re right, it’s a major vulnerability that platforms haven’t been able to solve. I hope they take this NATO report seriously and invest more in robust defenses against manipulation.
