What does Anthropic's designation as a supply chain risk mean for national security?
The fight between Anthropic and the Pentagon has reached a new level after the company refused to allow unrestricted use of Claude
The Pentagon has escalated its dispute with Anthropic into territory usually reserved for systemic threats. Defense Secretary Pete Hegseth said on X that he ordered the company to be designated a “Supply-Chain Risk to National Security” under a presidential directive requiring the federal government to stop using Anthropic technology. In practice, this designation bars contractors, suppliers, and partners working with the Department of Defense from maintaining business ties with the company, on pain of exclusion from military contracts.

Anthropic, for its part, maintains that the dispute stemmed not from a technical failure or a foreign connection, but from usage limitations. The company says the disagreement stalled over two exceptions it requested for Claude, covering mass domestic surveillance and fully autonomous weapons. Anthropic also anticipated a legal battle, stating that it will challenge any “supply-chain risk” designation in court and calling the move an unprecedented action with consequences for the ecosystem of suppliers who depend on clear rules to sell to the government.
What is the reason for the fight between the Pentagon and Anthropic?
The Department of Defense sought an “any lawful use” framework for Claude, while Anthropic tried to preserve two specific safeguards. The standoff escalated into threats and, ultimately, Anthropic's designation as a “supply-chain risk.”
In his public explanation, Dario Amodei argued that allowing mass domestic surveillance would be incompatible with democratic values, and that frontier AI systems are not yet reliable enough to operate fully autonomous weapons without endangering civilians and combatants alike.
Amodei added an element that rarely appears in traditional trade disputes. He said the Defense Department had deployed two pressure tactics, the designation itself and the possibility of invoking the Defense Production Act to force the removal of safeguards, a combination he called internally contradictory. The move also sets a delicate precedent for other AI companies doing business with Washington. Anthropic has described it as dangerous, suggesting the conflict will end not with a statement but with litigation and new compliance clauses that could redefine what “lawful use” means when the client is the state.