DOD says Anthropic’s ‘red lines’ make it an ‘unacceptable risk to national security’

The U.S. Department of Defense said on Tuesday evening that Anthropic poses an “unacceptable risk to national security,” marking the department’s first rebuttal to the AI lab’s lawsuits challenging Defense Secretary Pete Hegseth’s decision last month to label the company a supply-chain risk. In its complaints, Anthropic had asked the court to temporarily block the DOD from enforcing that label.
The crux of the DOD’s argument, made in a 40-page filing in a California federal court, is the concern that Anthropic might “attempt to disable its technology or preemptively alter the behavior of its model” before or during “warfighting operations” if the company “feels that its corporate ‘red lines’ are being crossed.”
Anthropic last summer signed a $200 million contract with the Pentagon to deploy its technology within classified systems. In later negotiations over the terms of the contract, Anthropic said it did not want its AI systems to be used for mass surveillance of Americans, and that the technology wasn’t ready for use in targeting or firing decisions involving lethal weapons. The Pentagon countered that a private company shouldn’t dictate how the military uses technology.
In response, an Anthropic spokesperson pointed to CEO Dario Amodei’s late February statement: “Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.”
Chris Mattei, a lawyer specializing in First Amendment issues and a former Justice Department attorney, told TechCrunch there has been no investigation to support the DOD’s concerns of Anthropic potentially disabling or altering its AI models during warfighting operations. Without that evidence, the department’s argument fails to adequately explain how Anthropic’s negotiating position rendered it an “adversary,” Mattei argued.
“The government is relying completely on conjectural, speculative imaginings to justify a very, very serious legal step they’ve taken against Anthropic,” Mattei said. He added the department failed to “articulate a credible or even comprehensible rationale for why Anthropic’s refusal to agree to an ‘all lawful use’ provision rendered it a supply chain risk as opposed to a vendor that DOD simply didn’t want to do business with.”
Many organizations have spoken out against the DOD’s treatment of Anthropic, arguing that the department could have just ended its contract. Several tech companies and employees — including from OpenAI, Google, and Microsoft — as well as legal rights groups have filed amicus briefs in support of Anthropic.
In its lawsuits, Anthropic accused the DOD of infringing on its First Amendment rights and of punishing the company on ideological grounds.
“In many ways, the government’s nonsensical arguments are themselves the best evidence that the administration’s conduct was plainly a retaliatory punishment for Anthropic’s refusal to agree to the government’s terms, which, contrary to the government’s brief, is a form of protected expression,” Mattei told TechCrunch.
A hearing on Anthropic’s request for a preliminary injunction is set for next Tuesday.
An Anthropic spokesperson told TechCrunch that its decision to seek judicial review does not change its “longstanding commitment to harnessing AI to protect our national security,” but that it’s a “necessary step” to protect its business, customers, and partners.
This article has been updated to include information from Chris Mattei, a constitutional rights lawyer, and comments from Anthropic.