NATO Study Finds U.S. Senators’ Social Media Accounts Vulnerable to Manipulation
U.S. senators’ verified social media accounts remain highly susceptible to artificial manipulation through fake engagement, according to a recent investigation by the NATO Strategic Communications Centre of Excellence. The findings reveal significant vulnerabilities in platform security despite increased scrutiny leading up to the U.S. presidential election.
The NATO-accredited research group, based in Riga, Latvia, conducted an experiment in which it paid three Russian companies just 300 euros ($368) to purchase nearly 338,000 fake likes, views, and shares across Facebook, Instagram, Twitter, YouTube, and TikTok. The experiment specifically targeted content from the verified accounts of Republican Senator Chuck Grassley of Iowa and Democratic Senator Chris Murphy of Connecticut, both of whom agreed to participate in the study.
Senator Murphy emphasized the importance of understanding these vulnerabilities, stating, “We’ve seen how easy it is for foreign adversaries to use social media as a tool to manipulate election campaigns and stoke political unrest.” He criticized social media companies for insufficient efforts to combat misinformation, adding that “more needs to be done to prevent abuse.”
The investigation revealed alarming persistence in the fake engagement, with over 98% of fabricated interactions still active four weeks after purchase. Furthermore, 97% of the accounts reported for inauthentic activity remained active five days after being flagged.
Janis Sarts, director of NATO StratCom, told The Associated Press that widespread social media manipulation represents not just a commercial concern but a genuine national security threat. “These inauthentic accounts are hired to trick algorithms into thinking divisive information is popular, thus reaching more people and deepening societal divisions, ultimately weakening us as a society,” Sarts explained.
This investigation follows a similar exercise conducted in 2019 focusing on European officials’ accounts. Researchers noted some improvements since then, with Twitter now removing inauthentic content more rapidly and Facebook making it harder to create fake accounts. This has forced manipulators to employ real people rather than bots, making such operations more expensive and less scalable.
A Facebook spokesperson responded to the findings by stating, “We’ve spent years strengthening our detection systems against fake engagement with a focus on stopping the accounts that have the potential to cause the most harm.” However, researchers identified YouTube, Instagram, and particularly TikTok as remaining highly vulnerable to manipulation.
Sebastian Bay, the report’s lead author, highlighted the disparity in platform security: “The level of resources they spend matters a lot to how vulnerable they are. It means you are unequally protected across social media platforms. It makes the case for regulation stronger. It’s as if you had cars with and without seatbelts.”
Researchers noted they intentionally promoted apolitical content during the experiment, including images of dogs and food, to avoid actual impact during the U.S. election season.
Ben Scott, executive director of Reset.tech, a London-based initiative combating digital threats to democracy, expressed dismay at the findings. “What’s most galling is the simplicity of manipulation,” Scott remarked. “Basic democratic principles of how societies make decisions get corrupted if you have organized manipulation that is this widespread and this easy to do.”
The social media platforms have defended their security measures. Twitter’s head of site integrity, Yoel Roth, stated the study “reflects the immense effort that Twitter has made to improve the health of the public conversation.” YouTube cited its removal of over 2 million videos in the third quarter of 2020 for violating spam policies, while TikTok emphasized its “zero tolerance toward inauthentic behavior” and ongoing investments in third-party testing and automated technology.
The investigation underscores the persistent challenges facing social media companies in the battle against artificial manipulation, raising serious concerns about the integrity of online political discourse as platforms continue to serve as crucial arenas for public debate.