Pentagon Cuts Ties with Anthropic Over AI Military Use Dispute
A major confrontation over artificial intelligence in military applications has erupted as Defense Secretary Pete Hegseth terminated the Pentagon’s work with AI company Anthropic, citing national security concerns. The Trump administration took the unprecedented step of designating the San Francisco-based firm as a supply chain risk, a measure typically reserved for foreign entities with ties to adversarial nations.
The dispute centers on Anthropic CEO Dario Amodei’s refusal to allow the company’s Claude AI system to be used for mass surveillance or autonomous armed drones. President Donald Trump publicly criticized the company on social media, declaring, “We don’t need it, we don’t want it, and will not do business with them again!”
Anthropic has vowed to challenge the designation in court, calling it legally unsound and noting it has “never before publicly been applied to an American company.” The company says it has yet to receive formal notification of the action.
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” Anthropic stated. “We will challenge any supply chain risk designation in court.”
The Pentagon’s move terminates Anthropic’s contract, worth up to $200 million, and bars defense contractors from doing business with the company for military purposes. Trump has ordered most government agencies to stop using Anthropic’s technology immediately, though he gave the Pentagon a six-month transition period.
This high-stakes dispute comes at a pivotal moment for the rapidly evolving AI industry and could significantly impact how military applications of AI are governed. It also reveals deep divisions among leading AI companies about how to balance commercial opportunities with ethical concerns.
Within hours of the Pentagon’s action against Anthropic, rival OpenAI announced a new partnership with the Defense Department to provide AI for classified military networks. OpenAI CEO Sam Altman told employees in an internal memo that the company had secured the same use restrictions that had been sticking points in Anthropic’s negotiations.
“We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines,” Altman wrote, suggesting OpenAI could “de-escalate things” while maintaining safety protections.
The turn of events marks a significant win for OpenAI, which recently secured a $110 billion investment valuing the company at $730 billion. It also likely deepens the rivalry between Altman and Amodei, who left OpenAI in 2021 partly over AI safety concerns before founding Anthropic.
For Anthropic, the designation creates uncertainty about its broader business prospects. The company projects $14 billion in revenue this year, with more than 500 customers paying at least $1 million annually for its Claude AI system. While Anthropic insists the Pentagon’s designation only affects defense-related work, customers may hesitate to use its technology for fear of political repercussions.
In a CBS News interview scheduled to air Sunday, Amodei framed the dispute as a matter of principle: “Disagreeing with the government is the most American thing in the world. And we are patriots. In everything we have done here, we have stood up for the values of this country.”
The dispute could also create opportunities for other AI developers. Elon Musk’s xAI, despite criticism of its Grok chatbot over safety and reliability issues, is expected to gain access to classified military networks. Google, with its Gemini technology, could also compete for Pentagon contracts, though it faces internal pressure from employees concerned about military applications.
Industry analysts suggest this confrontation may ultimately shape the ethical boundaries for AI development in defense applications while testing whether companies can maintain principled positions in the face of government pressure and commercial opportunities. It also raises questions about how the U.S. will balance AI innovation with safety concerns as it competes globally in this transformative technology.
12 Comments
This clash raises important questions about the appropriate role of AI in military and national security applications. I’m curious to hear more from both sides about their perspectives and proposed solutions.
Absolutely, this is a complex issue that deserves careful consideration. I hope both the Pentagon and Anthropic can find a path forward that upholds democratic principles and human rights.
The Trump administration’s heavy-handed tactics against Anthropic are concerning. Designating a US company as a supply chain risk sets a troubling precedent. I hope Anthropic succeeds in its legal challenge.
Agreed, this seems like an overreaction by the Pentagon. Anthropic’s stance on not enabling certain military AI use is reasonable and should be respected.
The Pentagon’s decision to cut ties with Anthropic is concerning. Banning a US company over its ethical AI policies sets a dangerous precedent. I hope this can be resolved through constructive dialogue rather than escalation.
Agreed, this situation warrants close scrutiny. I’m curious to learn more about Anthropic’s proposed alternatives and how the Pentagon can address its needs while respecting ethical boundaries around AI use.
The Pentagon’s heavy-handed response raises questions about their commitment to responsible AI governance. Anthropic seems to be taking a principled stand – I’m curious to hear more about their reasoning and proposed alternatives.
Anthropic’s stance on not enabling mass surveillance or autonomous weapons is admirable. I hope they can find a productive path forward with the Pentagon that upholds democratic values.
This dispute highlights the complex tradeoffs involved in balancing national security needs and ethical AI development. I’ll be following this story closely to see how it unfolds.
Agreed, it’s a nuanced issue without easy answers. I hope both sides can find a mutually agreeable solution that preserves Anthropic’s principles while still meeting the Pentagon’s requirements.
This is a concerning situation. It’s critical to have clear ethical boundaries around AI use, even for national security. I hope both sides can find a reasonable compromise that respects human rights and democratic values.
Agreed. Ethical AI development is essential, especially for sensitive military applications. Striking the right balance is challenging but important.