
U.S. Military Seeks AI Systems for Overseas Propaganda Operations

U.S. Special Operations Command is planning to harness artificial intelligence for international propaganda campaigns, according to documents reviewed by The Intercept. The command aims to develop machine learning systems that can “influence foreign target audiences” and “suppress dissenting arguments” without constant human oversight.

The documents, which outline SOCOM’s technology procurement plans for the next five to seven years, reveal the military’s interest in “agentic AI” and multi-LLM agent systems as a way to greatly expand the scale of its influence operations.

“The information environment moves too fast for military [members] to adequately engage and influence an audience on the internet,” states the SOCOM document. “Having a program built to support our objectives can enable us to control narratives and influence audiences in real time.”

SOCOM spokesperson Dan Lessard confirmed to The Intercept that the command is pursuing “cutting-edge, AI-enabled capabilities” but emphasized they would operate under the Pentagon’s Responsible AI framework with human oversight. He stated that these operations “do not target the American public and are designed to support national security objectives.”

While U.S. law and Pentagon policy generally prohibit military propaganda campaigns from targeting domestic audiences, the borderless nature of the internet makes such distinctions increasingly difficult to enforce.

Tools like ChatGPT and Google’s Gemini could revolutionize propaganda efforts by generating persuasive content in various tones without the time or expense of human writers. The SOCOM document specifically requests “automated systems to scrape the information environment, analyze the situation and respond with messages that are in line with MISO [Military Information Support Operations] objectives.”

The military also wants systems that can “access profiles, networks, and systems of individuals or groups that are attempting to counter or discredit our messages” and create “more targeted message[s] to influence that specific individual or group.”

The document further outlines plans to use AI to simulate how propaganda will be received, creating “comprehensive models of entire societies to enable MISO planners to use these models to experiment or test various multiple scenarios.”

This push comes amid growing concerns about similar technologies being deployed by foreign powers. A recent New York Times report highlighted China’s GoLaxy software, which can allegedly craft responses that reinforce Chinese government views while countering opposing arguments. Yet the capabilities the SOCOM document describes closely match those attributed to the Chinese system.

The military has previously engaged in questionable information operations. In 2024, Reuters revealed that the Defense Department had run a clandestine anti-vax social media campaign to undercut public confidence in Chinese Covid vaccines, describing them as “fake” even though the World Health Organization had assessed them as “safe and effective.”

William Marcellino, a RAND Corporation behavioral scientist, told The Intercept such systems are being built out of necessity, arguing that “regimes like those from China and Russia are engaged in AI-enabled, at-scale malign influence efforts.” He believes countering those campaigns likely requires AI-scale responses.

Critics warn against this approach. Heidy Khlaaf, chief scientist at the AI Now Institute and former OpenAI safety engineer, cautioned that “framing the use of generative and agentic AI as merely a mitigation to adversaries’ use is a misrepresentation,” noting that offensive and defensive uses are “two sides of the same coin.”

The effectiveness of AI-powered propaganda remains questionable. Emerson Brooking, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab and a former Pentagon cybersecurity adviser, pointed to previously failed U.S. online influence campaigns, including a network of Pentagon-operated social media accounts revealed in 2022 that spread anti-Russia and anti-Iran content but gained little traction.

“We know that these efforts have not worked very well and can be deeply embarrassing or counterproductive when revealed to the American public,” Brooking said. “AI tends to make these campaigns stupider, not more effective.”


16 Comments

  1. The military’s interest in using AI to ‘suppress dissenting arguments’ is highly concerning. Even with the responsible AI framework, the potential for these technologies to be misused and to undermine free speech is extremely alarming. Robust public oversight and debate is essential.

    • Linda C. Martin

      I agree, this is a very worrying development. The military’s plans to leverage AI to ‘control narratives’ and ‘influence audiences’ in real-time without human oversight sets off major alarm bells. Transparency and accountability must be the top priorities.

  2. This is concerning. Leveraging AI to ‘control narratives’ and ‘suppress dissent’ raises major ethical issues around freedom of speech and democratic discourse. I hope any such programs would have robust safeguards and oversight to prevent abuse.

    • Agreed. AI-powered propaganda is a dangerous path that could undermine the open exchange of ideas. Responsible use of these technologies is crucial.

  3. Amelia Williams

    This is a complex issue with significant implications for free expression and the information ecosystem. While the military’s goals may have some merit, the use of AI-powered propaganda and narrative control raises serious red flags that require careful consideration.

    • I agree, the stakes are very high here. The military’s plans need to be scrutinized thoroughly to ensure they don’t cross ethical lines or undermine democratic values. Responsible use of these technologies is essential.

  4. Oliver Thompson

    The military’s interest in using AI to ‘influence foreign target audiences’ and ‘control narratives’ is deeply concerning. Even with oversight, the potential for abuse and the erosion of free speech and open discourse is alarming. This is an issue that requires robust public debate and scrutiny.

    • Well said. The use of AI-powered propaganda, even for ostensibly ‘responsible’ purposes, is a slippery slope that must be approached with great caution and transparency. The public deserves to know how these technologies are being developed and deployed.

  5. While I understand the military’s desire to counter ‘dissenting narratives’, using AI to ‘control’ information and ‘influence audiences’ is a dangerous path. The potential for abuse and the erosion of free speech is extremely worrying. Rigorous safeguards and public oversight are critical.

    • Absolutely. The military’s plans raise serious red flags around democratic principles and the free flow of information. Any use of AI-powered propaganda, no matter the stated intent, should be viewed with the utmost skepticism and scrutiny.

  6. Lucas Williams

    The military’s interest in ‘agentic AI’ and ‘multi-large language model agent systems’ for overseas influence operations is quite troubling. Manipulating information environments and audiences in real-time without human oversight seems ripe for misuse.

    • Michael Martin

      You’re right, this has major implications for truth and transparency. The responsible AI framework they mention needs to be airtight to prevent these capabilities from being abused.

  7. William K. Moore

    While I understand the military’s desire to ‘control narratives’ and ‘influence audiences’, using AI-powered propaganda is a concerning development. We must be vigilant that these technologies do not erode democratic principles and open discourse.

    • Absolutely. Unfettered AI control over information and narratives is a slippery slope. Rigorous oversight and transparency are critical to ensuring these capabilities are not misused.

  8. The idea of the military using AI to ‘suppress dissenting arguments’ is deeply troubling. Even with human oversight, the potential for abuse and the erosion of free speech is alarming. I hope there are robust safeguards in place.

    • You raise a very valid point. Suppressing dissenting views, even with AI, is a dangerous precedent that goes against democratic principles. Transparency and accountability will be key.
