The Justice Department Says Anthropic Can’t Be Trusted in Combat Systems


The Trump administration argued in a court filing Tuesday that it did not violate Anthropic’s First Amendment rights by designating the AI developer a supply chain risk, and predicted that the company’s lawsuit against the government will fail.

“The First Amendment is not a license to unilaterally impose contract terms on the government, and Anthropic has said nothing to support such a radical conclusion,” the US Department of Justice lawyers wrote.

The response was filed in federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon’s decision to impose the designation, which can block a flagged company from receiving defense contracts over concerns about potential security vulnerabilities. Anthropic says the Trump administration exceeded its authority in imposing the designation and blocking the use of the company’s technology within the department. If the designation stands, Anthropic could lose billions of dollars in expected revenue this year.

Anthropic wants to resume business as usual until the lawsuit is resolved. Rita Lin, the judge overseeing the San Francisco case, has scheduled a hearing for next Tuesday to decide whether to grant Anthropic’s request.

Justice Department lawyers, writing on behalf of the Defense Department and other agencies in Tuesday’s filing, called Anthropic’s concerns about the potential loss of its business “legally insufficient to constitute irreparable harm” and urged Lin to deny the company a reprieve.

The lawyers also wrote that the Trump administration was motivated to act by “concerns about Anthropic’s potential future conduct if it retains access” to government technology systems. “No one has sought to restrict Anthropic’s expressive activity,” they wrote.

The government argues that Anthropic’s effort to limit how the Pentagon uses its AI technology led Defense Secretary Pete Hegseth to “reasonably” determine that “Anthropic employees may maliciously sabotage or provide unwanted functionality, or subvert the design, integrity, or operation of a national security system.”

The Department of Defense and Anthropic are fighting over potential limitations on the company’s Claude AI models. Anthropic maintains that its models should not be used to facilitate mass surveillance of Americans, and that they are not yet reliable enough to operate fully autonomous weapons.

Multiple legal experts previously told WIRED that Anthropic has a strong case that the supply chain action amounts to unlawful retaliation. But courts often favor the government’s national security arguments, and Pentagon officials have described Anthropic as a rogue contractor whose technology cannot be trusted.

“In particular, DoW became concerned that allowing Anthropic continued access to DoW’s technical and operational combat infrastructure would create unacceptable risks in DoW supply chains,” Tuesday’s filing states. “AI systems are highly vulnerable to tampering, and Anthropic could attempt to disrupt its technology or proactively change its model behavior either before or during ongoing combat operations, if Anthropic feels, in its discretion, that its company’s ‘red lines’ have been crossed.”

The Department of Defense and other federal agencies are working to replace Anthropic’s AI tools with products from competing technology companies over the next few months. One of the military’s most important uses of Claude is through Palantir’s data analysis software, people familiar with the matter told WIRED.

In its own filing, Anthropic’s lawyers argued that the Pentagon “cannot simply flip the switch” when Claude is currently the only AI model authorized for use in the department’s “classified systems and ongoing high-intensity combat operations.” The department is deploying AI systems from Google, OpenAI, and xAI as alternatives.

A number of companies and groups, including artificial intelligence researchers, Microsoft, a labor union for federal employees, and former military leaders, have filed court briefs in support of Anthropic. None have filed in support of the government.

Anthropic has until Friday to submit a counter-response to the government’s arguments.
