Pentagon Official Reveals AI Dispute with Anthropic Tied to Trump’s Space Weapons Program
A senior Pentagon official has disclosed that the recent clash with AI company Anthropic over its technology’s use in autonomous weapons stemmed from disagreements about the future role of artificial intelligence in President Donald Trump’s “Golden Dome” missile defense initiative.
Defense Undersecretary Emil Michael, who serves as the Pentagon’s chief technology officer, explained that Anthropic’s ethical restrictions on its Claude chatbot became problematic as the military pursues greater autonomy for weapons systems to keep pace with competitors like China.
“I need a reliable, steady partner that gives me something, that’ll work with me on autonomous, because someday it’ll be real and we’re starting to see earlier versions of that,” Michael said during a recent appearance on the “All-In” podcast. “I need someone who’s not going to wig out in the middle.”
The dispute escalated last week when the Pentagon formally designated San Francisco-based Anthropic as a supply chain risk, effectively cutting off the company’s defense work. The move employed regulations typically used to prevent foreign adversaries from compromising national security systems.
Anthropic has announced plans to sue over the designation, which impacts its partnerships with other military contractors. Additionally, President Trump has ordered federal agencies to immediately cease using Claude, though the Pentagon received a six-month grace period to phase out the technology from classified military systems, including those supporting operations related to Iran.
At the center of the disagreement are Anthropic’s attempts to place limitations on how its technology could be deployed. The company maintained it only sought to restrict Claude from two specific applications: mass surveillance of American citizens and fully autonomous weapons systems.
Michael, a former Uber executive, detailed months of negotiations with Anthropic CEO Dario Amodei during the podcast conversation with Silicon Valley venture capitalists. The podcast is co-hosted by Jason Calacanis, David Friedberg, and Chamath Palihapitiya. Notably absent was David Sacks, a former PayPal executive who now serves as Trump’s AI czar and has been openly critical of Anthropic, particularly for hiring former Biden administration officials after Trump’s return to the White House.
The tensions became public when Michael criticized Amodei on social media last week, claiming he “has a God-complex” and “wants nothing more than to try to personally control” the military. However, during the podcast, Michael framed the dispute within the broader context of the military’s evolving approach to AI implementation.
Michael emphasized that the Pentagon is developing protocols for varying levels of autonomy in warfare based on risk assessment. He specifically referenced the Golden Dome initiative, describing a hypothetical scenario where the U.S. would have only 90 seconds to respond to a Chinese hypersonic missile.
“A human anti-missile operator may not be able to discriminate with their own eyes what they’re going after,” Michael explained, adding that an autonomous counterattack would pose minimal risk “because it’s in space and you’re just trying to hit something that’s trying to get you.”
In response to Michael’s comments, Anthropic referenced an earlier statement from Amodei saying, “Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.”
Michael, who assumed oversight of the military’s “AI portfolio” in August, said he began scrutinizing Anthropic’s contracts, some of which originated during the Biden administration. He questioned terms of use that he considered overly restrictive.
“I need to have the terms of service be rational relative to our mission set,” he stated. The negotiations dragged on for three months, with Michael providing various scenarios that might require AI support. “They’re like, ‘OK, we’ll give you an exception for that.’ Well, how about this drone swarm? ‘We’ll give an exception for that.’ And I was like, exceptions doesn’t work. I can’t predict for the next 20 years what are all the things we might use AI for.”
The Pentagon subsequently began requiring AI companies to allow “all lawful use” of their technology. While competitors including Google, OpenAI, and Elon Musk’s xAI have agreed to these terms, Anthropic resisted, arguing that current AI systems “are simply not reliable enough to power fully autonomous weapons.”
Anthropic has disputed aspects of Michael’s characterization of their negotiations and maintained that its requested protections were narrowly defined and not based on existing Claude applications. The dispute now appears headed for resolution in court.