Foreign Influence Campaigns Intensify Ahead of 2024 US Presidential Election

As the 2024 U.S. presidential election approaches, foreign influence campaigns have emerged as a significant threat to the integrity of the democratic process. These large-scale operations, designed to shift public opinion, spread false narratives, and alter behaviors among American voters, have grown increasingly sophisticated through the exploitation of social media platforms and artificial intelligence.

Intelligence agencies and researchers have identified Russia, China, Iran, and Israel among the nations actively conducting influence operations targeting U.S. voters. These foreign actors rely on a complex ecosystem of social bots, paid influencers, media companies and, increasingly, generative AI tools to amplify their messages.

At Indiana University’s Observatory on Social Media, researchers have developed algorithms to detect what they call “coordinated inauthentic behavior.” These detection methods identify suspicious patterns such as clusters of social media accounts posting in synchronized fashion, amplifying the same groups of users, sharing identical content, or performing suspiciously similar sequences of actions.
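
To illustrate one such signal — this is a generic sketch, not the Observatory’s actual detection code — a screen for near-identical sharing can compare the sets of content fingerprints each account posts and flag pairs whose overlap is implausibly high. The function names, toy data, and threshold below are all invented for illustration:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity between two sets of content fingerprints."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def flag_coordinated_pairs(shared_content, threshold=0.7):
    """Flag account pairs whose shared content overlaps suspiciously.

    shared_content maps an account id to the set of content fingerprints
    (e.g., hashes of posted URLs or texts) that account has shared.
    """
    return [(a, b)
            for (a, posts_a), (b, posts_b) in combinations(shared_content.items(), 2)
            if jaccard(posts_a, posts_b) >= threshold]

# Toy data: u1 and u2 push nearly identical content; u3 is unrelated.
shared_content = {
    "u1": {"h1", "h2", "h3", "h4"},
    "u2": {"h1", "h2", "h3", "h4", "h5"},
    "u3": {"h6", "h7"},
}
print(flag_coordinated_pairs(shared_content))  # [('u1', 'u2')]
```

In practice such overlap scores would be one feature among many — synchronized timing and repeated action sequences, as described above, provide complementary signals.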

“We’ve uncovered many examples of coordinated inauthentic behavior,” notes a researcher from the Observatory. “For instance, accounts that flood networks with tens or hundreds of thousands of posts in a single day, or orchestrated campaigns where one account posts a message and others controlled by the same operators rapidly like and unlike it hundreds of times to manipulate engagement algorithms.”
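
The first behavior described — posting volumes far beyond plausible human activity — lends itself to a simple sliding-window screen. A minimal sketch, with an illustrative threshold rather than any platform’s real limit:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 86_400        # 24-hour sliding window
MAX_POSTS_PER_WINDOW = 1_000   # illustrative: far beyond plausible human activity

class BurstDetector:
    """Flags accounts whose posting volume in a sliding window is inhuman."""

    def __init__(self):
        self.timestamps = defaultdict(deque)  # account_id -> recent post times

    def record_post(self, account_id, ts):
        window = self.timestamps[account_id]
        window.append(ts)
        # Drop timestamps that have fallen out of the 24-hour window.
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > MAX_POSTS_PER_WINDOW  # True -> flag for review

detector = BurstDetector()
# A bot posting every 5 seconds trips the flag after 1,001 posts (~83 minutes).
flagged_at = next(t for t in range(0, WINDOW_SECONDS, 5)
                  if detector.record_post("bot-1", t))
print(flagged_at)  # 5000
```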

These manipulation tactics are designed to game the algorithms that determine what content becomes trending and appears in users’ feeds. Once objectives are achieved, these messages are often deleted to evade detection.

The rise of generative AI has significantly enhanced these operations. In one analysis of 1,420 fake Twitter (now X) accounts using AI-generated profile pictures, researchers found these accounts were spreading scams, spam, and coordinated messages. Researchers estimate that at least 10,000 such accounts were active daily on the platform before X CEO Elon Musk drastically reduced the platform’s trust and safety teams.

Another investigation identified a network of 1,140 bots using ChatGPT to generate humanlike content promoting fake news websites and cryptocurrency scams. These sophisticated bots not only posted machine-generated content but also engaged with each other and real users through replies and retweets. Alarmingly, current state-of-the-art AI content detection tools struggle to distinguish between these AI-enabled social bots and legitimate human accounts.

While measuring the precise impact of these campaigns remains challenging due to data collection limitations and ethical research constraints, researchers have developed simulation models to understand society’s vulnerability to different manipulation tactics. The “SimSoM” social media model, which simulates how information spreads through networks, has provided valuable insights into the effectiveness of various adversarial strategies.

The research identified three primary manipulation tactics: infiltration (creating believable interactions to gain followers from target communities), deception (posting engaging content likely to be reshared by exploiting emotional responses and political alignment), and flooding (posting high volumes of content).

Simulation results suggest that infiltration is the most effective tactic, reducing the average quality of content in a system by more than 50%. When combined with flooding the network with low-quality yet engaging content, the reduction in quality can reach 70%.
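
To make those mechanics concrete, the toy model below captures the core feedback loop in the spirit of SimSoM — it is not the published model or its code, and every parameter (population sizes, feed length, engagement values) is invented. Infiltrated accounts inject zero-quality but highly engaging messages, and because humans reshare in proportion to engagement from a limited-attention feed, average content quality falls:

```python
import random

random.seed(42)

N_HUMANS = 200   # authentic accounts (illustrative size)
N_BOTS = 20      # infiltrated accounts (illustrative size)
FEED_SIZE = 10   # limited attention: only the newest messages are visible
STEPS = 5000

class Message:
    def __init__(self, quality, engagement):
        self.quality = quality        # value to the ecosystem, in [0, 1]
        self.engagement = engagement  # reshare appeal, in [0, 1]

feed = []  # a single shared feed stands in for a follower network

def post(msg):
    feed.append(msg)
    del feed[:-FEED_SIZE]  # older messages fall out of attention

def step(infiltrated):
    bot_turn = infiltrated and random.random() < N_BOTS / (N_BOTS + N_HUMANS)
    if bot_turn:
        # Bots flood the feed with low-quality but engaging content.
        post(Message(quality=0.0, engagement=0.9))
    elif feed and random.random() < 0.5:
        # Humans reshare from the feed, weighted by engagement.
        post(random.choices(feed, weights=[m.engagement for m in feed])[0])
    else:
        # Humans create fresh content; quality is independent of engagement.
        post(Message(quality=random.random(), engagement=random.random()))

def average_quality(infiltrated):
    feed.clear()
    total = 0.0
    for _ in range(STEPS):
        step(infiltrated)
        total += feed[-1].quality
    return total / STEPS

print(f"average quality, no bots:   {average_quality(False):.2f}")
print(f"average quality, with bots: {average_quality(True):.2f}")
```

The size of the drop in this toy run is an artifact of the invented parameters; the 50% and 70% figures above come from the far richer SimSoM experiments.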

“Of particular concern is that generative AI models make it much easier and cheaper for malicious agents to create and manage believable accounts,” explains the Observatory researcher. “These tools enable non-stop interaction with humans and the creation of harmful but engaging content at scale.”

To counter these threats, social media platforms should strengthen content moderation efforts to identify and disrupt manipulation campaigns. Practical measures include making it more difficult to create fake accounts and post automatically, challenging high-frequency posters to verify their human identity, adding friction to resharing mechanisms, and educating users about AI-generated content.
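
Two of these frictions — a reshare cooldown and a “read before you reshare” prompt — are simple to express in code. A sketch, with names, return values, and the cooldown all invented for illustration:

```python
import time

RESHARE_COOLDOWN = 30  # seconds between reshares (illustrative)

class ReshareGate:
    """Adds friction to resharing: rate limits, plus a read-before-share prompt."""

    def __init__(self):
        self.last_reshare = {}  # user_id -> timestamp of last successful reshare
        self.opened = set()     # (user_id, url) pairs the user actually opened

    def record_open(self, user_id, url):
        self.opened.add((user_id, url))

    def try_reshare(self, user_id, url, now=None):
        """Returns 'ok', 'confirm' (unread-link prompt), or 'wait' (cooldown)."""
        now = time.time() if now is None else now
        if now - self.last_reshare.get(user_id, float("-inf")) < RESHARE_COOLDOWN:
            return "wait"
        if (user_id, url) not in self.opened:
            return "confirm"
        self.last_reshare[user_id] = now
        return "ok"

gate = ReshareGate()
print(gate.try_reshare("u1", "https://example.com/a", now=0))  # 'confirm' (unread)
gate.record_open("u1", "https://example.com/a")
print(gate.try_reshare("u1", "https://example.com/a", now=1))  # 'ok'
print(gate.try_reshare("u1", "https://example.com/b", now=2))  # 'wait' (cooldown)
```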

Since open-source AI models make it possible for malicious actors to build their own generative tools, regulation should focus on content dissemination rather than generation. For instance, platforms could require creators to verify the accuracy or provenance of content before it reaches a large audience.
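
Mechanically, such a rule could resemble the sketch below: content spreads freely up to a reach threshold, and further amplification is held until provenance is verified. The threshold and interface are invented for illustration:

```python
VIRAL_THRESHOLD = 10_000  # reshares allowed before review (illustrative)

class ProvenanceGate:
    """Holds fast-spreading content for provenance review before wider reach."""

    def __init__(self):
        self.reshare_counts = {}
        self.verified = set()

    def mark_verified(self, content_id):
        self.verified.add(content_id)

    def allow_reshare(self, content_id):
        count = self.reshare_counts.get(content_id, 0) + 1
        self.reshare_counts[content_id] = count
        # Unverified content spreads freely only below the reach threshold.
        return count <= VIRAL_THRESHOLD or content_id in self.verified

gate = ProvenanceGate()
for _ in range(VIRAL_THRESHOLD):
    assert gate.allow_reshare("post-42")
print(gate.allow_reshare("post-42"))  # False: held pending verification
gate.mark_verified("post-42")
print(gate.allow_reshare("post-42"))  # True: provenance confirmed
```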

Such content moderation measures would protect rather than censor free speech in digital public squares. As researchers emphasize, the right to free speech is not a right to guaranteed exposure, and influence operations effectively function as a form of censorship by drowning out authentic voices and opinions.


