
A network of automated social media accounts significantly influenced public discourse during the 2016 Brexit referendum, according to groundbreaking research by Dr. Marco Bastos. The study revealed that 13,493 Twitter accounts posted content in the crucial four-week period surrounding the vote, only to vanish once polling stations closed.

The investigation identified a sophisticated “Twitterbot” network designed to amplify certain electoral messages and potentially manipulate public opinion. These automated accounts created rapid cascades of tweets that spread through the platform during the campaign period.

Dr. Bastos and his team at City St George’s analyzed user activity metrics and temporal posting patterns to distinguish these bot accounts from genuine human users. Their methodology revealed how the botnet was organized into specialized subnetworks primarily focused on “echoing” content through retweets generated by both algorithms and human operators.
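As a rough illustration of how temporal posting patterns can help separate automated accounts from human users, the heuristic below scores an account by the regularity of its inter-tweet intervals and its retweet ratio. This is a minimal sketch under assumptions of our own: the function, weights, and features are hypothetical and are not the study's actual classifier.

```python
from statistics import mean, stdev

def bot_score(timestamps, retweet_flags):
    """Heuristic bot score in [0, 1] from temporal regularity and retweet ratio.

    Illustrative only -- not the classifier used in the published research.
    timestamps: posting times in seconds, sorted ascending.
    retweet_flags: True where the corresponding post was a retweet.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Automated accounts tend to post at near-constant intervals, so a low
    # coefficient of variation of inter-tweet gaps is a machine-like signal.
    if len(gaps) > 1 and mean(gaps) > 0:
        cv = stdev(gaps) / mean(gaps)
    else:
        cv = 1.0
    regularity = max(0.0, 1.0 - cv)  # 1.0 = perfectly regular posting
    # "Echoing" subnetworks mostly retweet rather than write original posts.
    retweet_ratio = sum(retweet_flags) / len(retweet_flags)
    return 0.5 * regularity + 0.5 * retweet_ratio

# A machine-like account: one retweet every 60 seconds, no original posts.
bot_like = bot_score([i * 60 for i in range(20)], [True] * 20)

# A human-like account: irregular gaps, mostly original tweets.
human_like = bot_score([0, 40, 500, 520, 3000, 3300, 9000],
                       [False, False, True, False, False, False, False])

print(bot_like > human_like)  # the bot-like pattern scores higher
```

In practice a production detector would combine many more signals (account age, follower ratios, client metadata), but the interval-regularity idea above captures the core intuition behind temporal analysis.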

The research demonstrated how these ideologically polarized “echo chambers” on Twitter corresponded to geographically situated social networks. This correlation enabled the researchers to develop an algorithm capable of identifying both user location and political affiliation with significant accuracy.
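One simple way such a correlation can be exploited is label propagation over the retweet graph: accounts of unknown affiliation inherit the majority label of their neighbours. The sketch below is a hypothetical simplification, assuming an undirected retweet graph and a handful of seed accounts of known affiliation; the article does not describe the researchers' actual algorithm, and none of these names come from it.

```python
from collections import defaultdict

def propagate_labels(edges, seeds, rounds=5):
    """Majority-vote label propagation over a retweet graph.

    Illustrative sketch only. edges: (account, account) retweet pairs,
    treated as undirected. seeds: {account: label} for known accounts,
    which stay fixed while every other account adopts the most common
    label among its already-labelled neighbours.
    """
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)

    labels = dict(seeds)
    for _ in range(rounds):
        updates = {}
        for node in graph:
            if node in seeds:
                continue  # known accounts are never relabelled
            votes = defaultdict(int)
            for neighbour in graph[node]:
                if neighbour in labels:
                    votes[labels[neighbour]] += 1
            if votes:
                updates[node] = max(votes, key=votes.get)
        labels.update(updates)
    return labels

# Two tightly knit retweet clusters with one seed account in each.
edges = [("a", "b"), ("a", "c"), ("b", "c"),
         ("f", "e"), ("f", "d"), ("e", "d")]
seeds = {"a": "leave", "f": "remain"}

labels = propagate_labels(edges, seeds)
print(labels["b"], labels["d"])  # prints: leave remain
```

Because ideologically polarised clusters retweet mostly within themselves, a few labelled accounts are enough to classify a whole subnetwork — which is the structural property the echo-chamber finding rests on.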

In subsequent studies, the team discovered that approximately one-third of the messages posted leading up to the referendum were eventually removed from the platform. Only about half of the most active accounts during the referendum period continue to operate publicly, with Twitter having suspended roughly 20 percent of them.

Perhaps most notably, the researchers found that the removed content disproportionately favored the Leave campaign: the number of deleted pro-Leave messages exceeded the entire volume of tweets associated with the Remain campaign.

This research has generated substantial impact across multiple domains. Parliamentary inquiries into social media’s role in spreading disinformation have directly referenced the findings. Dr. Bastos provided oral evidence to the House of Commons Digital, Culture, Media and Sport Select Committee, helping to shape the government’s Online Harms White Paper, which established a “duty of care” for social media providers.

Security organizations have also utilized the research to better understand the threats posed by automated networks to democratic processes. The work was cited in parliamentary debates and referenced in a House of Commons Briefing Paper titled “National Security and Russia.” Dr. Bastos shared insights at a joint expert roundtable discussion hosted by the Royal United Services Institute for Defence Studies and the UK Cabinet Office. His expertise has informed NATO’s Strategic Communications Centre of Excellence and the UK Foreign & Commonwealth Office.

Beyond government and security circles, the research has brought critical issues into public awareness by exposing how social media platforms can be manipulated. The findings received widespread media coverage, raising public consciousness about the vulnerability of democratic processes to digital interference.

As social media continues to evolve as a primary channel for political discourse, this research stands as a crucial reference point for understanding how automated systems can be deployed to influence public opinion during critical democratic events. The findings have contributed significantly to ongoing debates about platform accountability and the need for more robust regulatory frameworks to safeguard electoral integrity in the digital age.


