Anthropic vs. the Pentagon: What’s really at stake?


The last two weeks have been marked by a clash between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth over the military’s use of artificial intelligence.

Anthropic refuses to allow its AI models to be used for mass surveillance of Americans or for fully autonomous weapons that launch strikes without human intervention. Meanwhile, Secretary Hegseth said the Department of Defense should not be constrained by vendor rules, arguing that any “lawful use” of the technology should be allowed.

On Thursday, Amodei said publicly that Anthropic isn’t backing down, despite threats that his company may be labeled a supply chain risk as a result. But with the news cycle moving so quickly, it’s worth stepping back to consider exactly what’s at stake in this battle.

At its core, this battle is over who controls powerful AI systems: the companies that build them, or the government that wants to deploy them.

What is Anthropic worried about?

As we said above, Anthropic does not want its AI models to be used for mass surveillance of Americans or for autonomous weapons with no humans in the loop to make targeting and shooting decisions. Traditional defense contractors typically have little say in how their products are used, but Anthropic has argued from its inception that AI technology poses unique risks and thus requires unique safeguards. From the company’s perspective, the question is how to maintain those safeguards when the technology is used by the military.

The US military already relies on highly automated systems, some of which are lethal. Historically, the decision to use lethal force has been left to humans, but there are few legal restrictions on the military use of autonomous weapons. The Department of Defense does not categorically prohibit fully autonomous weapons systems. Under 2023 Department of Defense guidance, AI systems can select and engage targets without human intervention, as long as they meet certain criteria and pass review by senior defense officials.

This is exactly what makes Anthropic nervous. Military technology is secret by nature, so if the US military is taking steps to automate lethal decision-making, we may not know about it until it is operational. And if Anthropic’s models were used that way, it could be deemed a “lawful use.”


Anthropic’s position is not that such uses should be permanently off the table. The problem is that its models are not yet capable enough to support them safely. Imagine an autonomous system that misses a target, escalates a conflict without human permission, or makes a deadly split-second decision that no one can undo. Put a less capable AI in charge of weapons and you get a machine that is very fast, very confident, and bad at making high-risk calls.

AI also has the potential to expand lawful surveillance of American citizens to an alarming degree. Under current US law, surveillance of US citizens is already possible through the collection of text messages, emails, and other communications. AI changes the equation by enabling automated pattern detection at scale, entity resolution across datasets, predictive risk scoring, and ongoing behavioral analysis.

What does the Pentagon want?

The Pentagon’s argument is that it should be able to deploy Anthropic technology for any lawful use it deems necessary, rather than being constrained by Anthropic’s internal policies on things like autonomous weapons or surveillance.

More specifically, Secretary Hegseth said that the DoD should not be constrained by vendor rules and should be permitted any “lawful use” of the technology.

Sean Parnell, a Pentagon spokesman, said at a press conference Thursday that the department has no interest in conducting mass domestic surveillance or deploying autonomous weapons.

“This is what we are asking: that the Pentagon be allowed to use Anthropic’s models for all lawful purposes,” Parnell said. “This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially endangering our warfighters. We will not allow any company to dictate the terms of how operational decisions are made.”

He added that Anthropic has until 5:01 p.m. ET on Friday to make a decision. “Otherwise, we will terminate our partnership with Anthropic and consider it a risk to the DoD’s supply chain,” he said.

While the Department of Defense’s official position is simply that it should not be limited by a vendor’s own use policies, Secretary Hegseth’s concerns about Anthropic have at times seemed tied to culture-war grievances. In a January speech at SpaceX and xAI offices, Hegseth criticized “woke AI” in remarks seen by some as a preview of his feud with Anthropic.

“The War Department’s AI will not be woke,” Hegseth said. “We build weapons and systems ready for war, not chatbots for the Ivy League faculty lounge.”

So what now?

The Pentagon has threatened to either declare Anthropic a “supply chain risk” — effectively blacklisting Anthropic from doing business with the government — or invoke the Defense Production Act (DPA) to force the company to tailor its models to the military’s needs. Hegseth gave Anthropic until 5:01 p.m. ET on Friday to respond. But as the deadline approaches, it’s anyone’s guess whether the Pentagon will follow through on its threat.

This is not a fight that either side can easily walk away from. A supply chain risk designation could mean “lights out” for Anthropic, says Sachin Seth, a venture capitalist at Trousdale Ventures who focuses on defense technology.

However, he said, dropping Anthropic from the Department of Defense could itself become a national security issue.

“[The department] will have to wait six to 12 months for OpenAI or xAI to catch up,” Seth told TechCrunch. “This leaves a window of up to a year where they may not be working from the best model, but from the second or third best model.”

xAI is poised to become classified-ready and could replace Anthropic, and owner Elon Musk has suggested the company would have no problem giving the Department of Defense full control of its technology. Recent reports suggest, however, that OpenAI may hold to the same red lines as Anthropic.
