Microsoft, Former Military Leaders Challenge Pentagon’s Blacklisting of AI Firm Anthropic
Microsoft and a coalition of retired high-ranking military officials have joined forces to support AI company Anthropic in its legal battle against the Trump administration. The dispute centers on Defense Secretary Pete Hegseth’s recent designation of Anthropic as a national security risk, effectively barring the company from military contracts.
In a filing submitted Tuesday to a San Francisco federal court, Microsoft challenged the Pentagon’s decision, arguing that using a supply chain risk designation to address what amounts to a contract dispute could have severe economic consequences that don’t serve the public interest. The tech giant, which is a major government contractor itself, criticized the action as forcing contractors to comply with “vague and ill-defined directions that have never before been publicly wielded against a U.S. company.”
Separately, 22 former military leaders, including past secretaries of the Air Force, Army, and Navy, filed their own brief supporting Anthropic. The group, which includes former CIA director and retired Air Force General Michael Hayden and retired Coast Guard Admiral Thad Allen, characterized Hegseth’s actions as a misuse of government authority for “retribution against a private company that has displeased the leadership.”
The conflict originated in Anthropic’s refusal to permit unrestricted military use of its AI model Claude. The company maintained two ethical red lines: it declined to allow its technology to be used for domestic mass surveillance or to initiate warfare without human control. These principles became a sticking point in negotiations when the Pentagon insisted on allowing “all lawful” uses of the AI.
“Microsoft also believes that American AI should not be used to conduct domestic mass surveillance or start a war without human control,” the company stated in its filing. “This position is consistent with the law and broadly supported by American society, as the government acknowledges.”
Following the dispute, President Trump publicly ordered all federal agencies to cease using Claude, and the Pentagon issued its formal designation of Anthropic as a security risk. Until this controversy, Anthropic had been the only major AI developer approved for use in classified military networks. Military officials have indicated they’re now looking to shift that work to competitors including Google, OpenAI, and Elon Musk’s xAI.
The retired military leaders warned in their filing that the “sudden uncertainty” created by targeting technology already embedded in military platforms could disrupt planning and endanger soldiers during ongoing operations. Their concerns come as U.S. forces are engaged in a new conflict with Iran, though their filing doesn’t explicitly mention this war.
U.S. Central Command’s current leader, Admiral Brad Cooper, confirmed in a Wednesday video about U.S. strikes on Iran that the military is using “advanced AI tools” to “sift through vast amounts of data in seconds,” though he didn’t specify which tools. Cooper emphasized that “humans will always make final decisions on what to shoot and what not to shoot and when to shoot.”
In addition to Microsoft and the former military leaders, Anthropic has received support from other quarters, including AI developers at Google and OpenAI, as well as organizations like the Cato Institute and the Electronic Frontier Foundation.
The case is being heard by U.S. District Judge Rita Lin, who was nominated to the bench by President Biden in 2022. Lin has scheduled a hearing for March 24. Anthropic has also filed a separate, narrower case in the federal appeals court in Washington, D.C.
Microsoft’s filing requests that the judge temporarily lift the designation to allow for more “reasoned discussion” between Anthropic and the Trump administration. The Pentagon has declined to comment on the matter, citing its policy of not remarking on ongoing litigation.