It’s official: The Pentagon has called Anthropic a supply chain risk


The Department of Defense (DoD) has formally notified Anthropic that the company and its products have been designated a supply chain risk, Bloomberg reported, citing a senior Pentagon official.

The designation comes weeks after a dispute between the AI lab and the Department of Defense. Anthropic CEO Dario Amodei refused to allow the military to use the company's AI systems for mass surveillance of Americans or to operate fully autonomous weapons without a human involved in targeting or firing decisions. The department said its use of AI should not be constrained by a private contractor.

Supply chain risk designations are typically reserved for foreign threats. The label requires any company or agency working with the Pentagon to certify that it does not use Anthropic's models.

The Pentagon's designation threatens to disrupt both the company and the department's own operations. Anthropic was the only leading AI lab with systems approved for classified use. The US military currently relies on Claude in its campaign against Iran, where US forces use AI tools to rapidly process data for their operations. Claude is one of the key tools embedded in Palantir's Maven Smart System, which military operators in the Middle East depend on, according to Bloomberg.

Many critics say designating Anthropic a supply chain risk over this dispute is an unprecedented move by the department. Dean Ball, Trump's former White House AI adviser, described the label as a "death rattle" of the American republic, saying the US government had abandoned strategic clarity and respect in favor of "thuggish" tribalism that treats domestic innovators worse than foreign adversaries.

Hundreds of employees from OpenAI and Google urged the Department of Defense to withdraw the designation and called on Congress to reverse what they described as an improper use of power against an American technology company. They also urged their own leaders to stand together with Anthropic in continuing to reject demands from the Department of Defense to use their AI models for domestic mass surveillance and to "kill people autonomously without human supervision."

TechCrunch has reached out to Anthropic for comment.


Amid the conflict, OpenAI struck its own agreement with the department allowing the military to use its AI systems for "all lawful purposes." Some of the company's employees expressed concern that the deal's vague wording could permit exactly the kinds of uses Anthropic was trying to avoid.

Amodei described the Department of Defense's actions as "retaliatory and punitive." He reportedly said that his refusal to praise President Trump or donate to him contributed to the dispute with the Pentagon. OpenAI's president, Greg Brockman, has been a staunch Trump supporter and recently donated $25 million to the MAGA Inc. super PAC.
