Defense Secretary Pete Hegseth delivered a stark ultimatum to artificial intelligence company Anthropic this week: either open its AI technology for unrestricted military use by Friday or risk losing its government contract.
The Defense Department warned it could designate Anthropic, creator of the AI chatbot Claude, as a supply chain risk. Officials also suggested they might invoke the Cold War-era Defense Production Act to give the military broader authority to use Anthropic’s products without the company’s approval.
This aggressive approach has sparked concerns among experts who note that using the Defense Production Act in this manner would be unprecedented and could face legal challenges. The confrontation highlights the growing tensions over AI’s role in national security and military applications.
The Defense Production Act, signed by President Harry S. Truman in 1950 during the Korean War, grants the federal government extensive powers to direct private companies to prioritize national defense needs. While initially created for wartime supply concerns, the law has evolved to address various emergencies, including terrorist attacks and natural disasters.
“It’s one of the government’s most powerful and adaptable industrial policy tools,” explained Joel Dodge, an attorney and director of industrial policy and economic security at the Vanderbilt Policy Accelerator. The act allows the president to require companies to prioritize government contracts deemed necessary for national defense and can authorize incentives to increase production of critical goods.
Anthropic stands as the last major AI developer refusing to supply its technology to a new U.S. military internal network. CEO Dario Amodei has consistently voiced ethical concerns about unrestricted government use of AI, particularly regarding autonomous armed drones and mass surveillance systems that could track dissent.
Pentagon officials maintain they have no interest in using AI for mass surveillance or in developing fully autonomous weapons without human oversight. However, invoking the Defense Production Act could, in theory, force Anthropic to adapt its AI models to military needs without their built-in safety limits, or to strip ethical restrictions from contract language.
“The DPA has never been used to compel a company to produce a product that it’s deemed unsafe, or to dictate its terms of service,” Dodge noted, underscoring the unprecedented nature of this potential application.
Both President Trump, during his first term, and President Biden invoked the Defense Production Act during the COVID-19 pandemic to boost production of medical supplies. Biden also invoked it during the 2022 nationwide baby formula shortage, and in 2023 issued an executive order on AI requiring companies to share safety test results with the government, an order Trump repealed at the start of his second term.
The law has also been used in other critical situations, including during California’s energy crisis under Presidents Clinton and Bush, and for hurricane recovery efforts in Puerto Rico in 2017. Currently set to expire on September 30 this year, the DPA requires periodic reauthorization by Congress.
For Anthropic, the path forward remains uncertain. Companies can typically contest government demands under the DPA if the requested product differs from what they normally produce or if terms are deemed unreasonable. Charlie Bullock, senior research fellow at the Institute for Law & AI, suggests litigation may be inevitable if neither side backs down.
Some observers have pointed out a contradiction in the Pentagon’s approach—threatening to label Anthropic a supply chain risk while simultaneously claiming its products are essential to national defense. By Thursday, defense officials appeared to be backing away from the DPA option, with Chief Pentagon spokesperson Sean Parnell stating on social media that if Anthropic didn’t agree to cooperate by Friday afternoon, “we will terminate our partnership with Anthropic and deem them a supply chain risk.”
“We will not let ANY company dictate the terms regarding how we make operational decisions,” Parnell added.
The outcome of this standoff could have far-reaching implications for the relationship between technology companies and the government. Dodge warns that if Anthropic yields to such pressure, it might open "a Pandora's box of what the government could do to assert power and control over private companies," potentially setting a precedent that extends well beyond artificial intelligence.
20 Comments
This is a complex and high-stakes issue that touches on fundamental questions about the role of government, corporate autonomy, and the responsible development of emerging technologies. The Pentagon’s ultimatum to Anthropic raises valid concerns that warrant careful consideration.
I’m curious to see how this situation unfolds and whether it leads to a more collaborative approach between the public and private sectors on AI governance.
The confrontation between the Pentagon and Anthropic highlights the delicate balance between national security and technological innovation. While the government has a duty to protect the country, the use of the Defense Production Act in this context could set a troubling precedent.
I hope that this case leads to a constructive dialogue on how to develop AI responsibly and ethically, with appropriate safeguards and oversight in place.
The Pentagon’s ultimatum to Anthropic is a bold and concerning move that raises serious questions about the government’s approach to emerging technologies. While national security is crucial, the use of coercive measures like the Defense Production Act requires careful consideration of the broader implications.
This case will be a significant test of how the government and private sector can collaborate on AI development while upholding fundamental rights and freedoms.
The Pentagon’s ultimatum to Anthropic is a high-stakes game of chess between government interests and private innovation. While the military’s needs are understandable, the potential ramifications for the AI industry are significant.
This case will likely have far-reaching implications for how the government approaches emerging technologies in the future.
Invoking the Defense Production Act to compel Anthropic’s cooperation raises valid concerns about corporate rights and the boundaries of government power. This confrontation underscores the delicate balance between national security and individual/corporate freedoms.
I’m curious to see how the legal challenges play out and whether this sets a precedent for future AI-related disputes.
The growing role of AI in national defense is a complex issue. While the military has a duty to protect the country, unrestrained access to AI systems could set concerning precedents. This case highlights the need for clear guidelines and oversight.
Careful consideration of the long-term implications is crucial as the government navigates this emerging landscape.
Leveraging the Defense Production Act in this manner is an aggressive and unprecedented move. I’m concerned about the potential erosion of corporate autonomy and the precedent it could set for other AI companies.
Robust public discourse and legal scrutiny will be crucial to ensuring a balanced outcome that protects both national security and technological innovation.
This confrontation between the Pentagon and Anthropic underscores the need for clear, collaborative frameworks to govern the use of AI in national defense. Invoking the Defense Production Act unilaterally could have detrimental effects on the AI industry and public trust.
I hope that policymakers and industry leaders can come together to find a solution that balances security needs with the principles of innovation and civil liberties.
This is a fascinating development in the ongoing tension between national security needs and AI innovation. The Pentagon’s ultimatum to Anthropic raises important questions about the appropriate use of emerging technologies and balancing military priorities with civil liberties.
It will be interesting to see how Anthropic responds and whether the Defense Production Act can be legally invoked in this novel scenario.
The Pentagon’s ultimatum to Anthropic highlights the complex and delicate relationship between the government and the private AI sector. While national security is paramount, the use of coercive measures like the Defense Production Act raises serious ethical questions.
This case will be an important test of how the government navigates the rapidly evolving landscape of AI technology and its applications.