OpenAI and Google employees rush to defend Anthropic in DoD lawsuit


More than 30 employees at OpenAI and Google DeepMind submitted a statement Monday supporting Anthropic’s lawsuit against the U.S. Department of Defense after the federal agency classified the artificial intelligence company as a supply chain risk, according to court filings.

“The government’s designation of Anthropic as a supply chain risk was an inappropriate and arbitrary use of force that has serious ramifications for our industry,” said the memo, whose signatories included Google DeepMind chief scientist Jeff Dean.

Late last week, the Pentagon designated Anthropic a supply chain risk — a label typically reserved for foreign adversaries — after the AI company refused to allow the Department of Defense to use its technology to mass surveil Americans or autonomously launch weapons. The Department of Defense said it should be able to use AI for any “legitimate” purpose and not be constrained by a private contractor.

A friend-of-the-court brief supporting Anthropic appeared on the docket just hours after the maker of Claude filed two lawsuits against the Department of Defense and other federal agencies. Wired was first to report the news.

In the filing, Google and OpenAI employees point out that if the Pentagon was “no longer satisfied with the terms agreed upon in its contract with Anthropic,” the agency could have “simply canceled the contract and purchased the services of another leading AI company.”

Indeed, the Department of Defense signed a deal with OpenAI shortly after Anthropic was designated a supply chain risk — a move protested by many of the ChatGPT maker’s employees.

“If allowed to proceed, this effort to sanction a leading U.S. AI company will undoubtedly have consequences for U.S. industrial and scientific competitiveness in AI and beyond,” the brief said. “This will dampen open deliberation in our field about the risks and benefits of current AI systems.”


The filing also asserts that Anthropic’s stated red lines are legitimate concerns that warrant strong guardrails. It argues that, absent laws governing the use of AI, the contractual and technical restrictions that developers impose on their systems are a crucial safeguard against catastrophic misuse.

Several of the statement’s signatories have also signed open letters over the past two weeks urging the Department of Defense to withdraw the label and calling on their companies’ leaders to support Anthropic and reject the unilateral use of their AI systems.
