In June, the secure Signal account of a European foreign minister received an urgent text message purportedly from U.S. Secretary of State Marco Rubio. Soon after, two other foreign ministers, a U.S. governor, and a member of Congress received similar messages, some accompanied by sophisticated voice memos mimicking Rubio’s voice. Though convincing, these communications were AI-generated deepfakes crafted by unknown actors, potentially designed to disrupt American diplomacy or extract intelligence from U.S. allies.

This incident represents just one example in a growing trend of AI-powered information warfare. In August, researchers at Vanderbilt University discovered that Chinese tech firm GoLaxy had used AI to build detailed data profiles of at least 117 sitting U.S. lawmakers and over 2,000 American public figures. These profiles could enable the creation of AI personas mimicking these individuals and facilitate targeted messaging campaigns tailored to their followers’ psychological traits—a capability already tested in Hong Kong and Taiwan.

While disinformation isn’t new, artificial intelligence has dramatically reduced the barriers to entry, allowing malicious actors to conduct sophisticated influence operations cheaply and at massive scale. Despite this escalating threat, the Trump administration has been dismantling U.S. defenses against foreign disinformation, leaving the country vulnerable to AI-powered attacks that could progressively undermine public trust in democratic institutions.

For years, many democracy advocates viewed the free flow of information as inherently positive. President Barack Obama expressed this sentiment in 2009, stating that “the more freely information flows, the stronger the society becomes.” However, social media has both accelerated information dissemination and created echo chambers through personalized content algorithms, deepening polarization and undermining public trust in institutions.

Only recently has the world recognized the urgency of digital information threats. French President Emmanuel Macron highlighted this concern in October, criticizing Europe’s “incredibly naive” approach of entrusting its “democratic space to social networks that are controlled either by large American entrepreneurs or large Chinese companies.” Political scientist Francis Fukuyama has described today’s online public space as “an ecosystem that rewards sensationalism and disruptive content.”

AI has transformed information warfare from a game of tracking obvious battleships—state-controlled media outlets like China’s CGTN or Russia’s RT—to a battle of autonomous drones: hyperpersonalized, adaptive, and accessible to various actors. Today’s propaganda campaigns, no longer constrained by human labor limitations, can be waged with unprecedented speed and sophistication, potentially paralyzing government decision-making and fracturing social cohesion.

The global landscape already reveals AI’s impact on information operations. In El Salvador, President Nayib Bukele combines state propaganda with AI-powered tools, including bot networks, to counter international criticism of democratic backsliding. OpenAI recently uncovered a Chinese operation called “Uncle Spam” that used AI to create fake personas posting polarizing content across U.S. political divides, while also gathering intelligence by scraping vast amounts of personal data from social media platforms.

The real-world consequences are increasingly visible. In India, AI-generated content has spread anti-Muslim messaging, exacerbating interreligious tensions. In war-torn Sudan, AI voice cloning has been used to impersonate former leader Omar al-Bashir, eroding trust in official information sources. Perhaps most alarming was Russia’s interference in Romania’s 2024 presidential election, where a massive disinformation campaign involving deepfakes and AI-powered bots artificially boosted a pro-Russian candidate, ultimately forcing the constitutional court to annul the election results.

Despite these escalating threats, the United States has weakened its defenses. Beginning in 2016, the U.S. government strengthened its capabilities against foreign propaganda by establishing the Global Engagement Center (GEC) within the State Department. The Biden administration built on these efforts, with the GEC successfully exposing Russia's information operations in Africa and Latin America, including the Kenya-based, Russian-funded "African Stream" platform.

However, the second Trump administration has cut or severely weakened crucial government offices responsible for countering foreign influence operations, including the GEC, the Director of National Intelligence’s Foreign Malign Influence Center, the FBI’s Foreign Influence Task Force, and parts of the Cybersecurity and Infrastructure Security Agency. This dismantling constitutes a dangerous unilateral disarmament at a critical moment.

Effectively countering these threats requires both technological innovation and institutional restructuring. The Trump administration should issue a national security directive declaring AI-amplified foreign influence a clear danger, mobilize the intelligence community to assess adversaries’ capabilities, and establish a permanent interagency structure led by the National Security Council to coordinate governmental resources against information warfare.

Public-private partnership will be essential, with the White House Office of Science and Technology Policy facilitating collaboration between government, social media platforms, AI research labs, and cybersecurity firms to develop technologies for detecting AI-generated content and establishing industry-wide best practices.

With the 2026 U.S. midterm elections approaching, the time to act is now. Without reinforced defenses against increasingly sophisticated influence campaigns, America’s democratic foundations could face a dangerous erosion of public trust and institutional legitimacy.



© 2026 Disinformation Commission LLC. All rights reserved.