On Monday, Anthropic filed a lawsuit against the Department of Defense over its classification as a supply chain risk. Hours later, nearly 40 employees from OpenAI and Google, including Jeff Dean, Google’s chief scientist and head of Gemini, filed an amicus brief in support of Anthropic’s lawsuit, outlining their concerns about the Trump administration’s decision and the risks and implications of the technology.
This news comes after a turbulent few weeks for Anthropic. The Trump administration designated the company a supply chain risk, a label typically reserved for foreign companies the government deems a potential national security threat, after Anthropic stood firm on its two red lines for military use of its technology: domestic mass surveillance and fully autonomous weapons (that is, AI systems able to kill without human involvement). Negotiations collapsed, public insults followed, and other AI companies stepped in to sign contracts allowing “any lawful use” of their technology.
Not only does the supply chain risk designation bar Anthropic from working on military contracts, it also blacklists other companies if they use Anthropic products in their work for the Pentagon, forcing them to uproot Claude if they want to keep their lucrative contracts. But because Claude was the first model approved for classified intelligence work, Anthropic’s tools are already deeply integrated into the Pentagon’s operations, so much so that just hours after Defense Secretary Pete Hegseth announced the designation, the US military reportedly used Claude in the strike that killed Iranian leader Ayatollah Ali Khamenei.
The amicus brief makes two central points: that Anthropic’s supply chain risk designation “is inappropriate retaliation that harms the public interest” and that the concerns behind Anthropic’s red lines are “real and require a response.” It also argues that the two red lines Anthropic identified are well founded, stating that “mass domestic surveillance powered by artificial intelligence poses profound risks to democratic governance – even in responsible hands,” and that “fully autonomous lethal weapons systems represent risks that must also be addressed.”
The group behind the amicus brief described themselves as “engineers, researchers, scientists, and other professionals working at America’s frontier artificial intelligence laboratories.”
“We build, train, and study large-scale AI systems that serve a wide range of users and deployments, including areas relevant to national security, law enforcement, and military operations,” the group wrote. “We provide this brief not as spokespeople for any one company, but in our individual capacities as professionals with first-hand knowledge of what these systems can and cannot do, and what is at stake when their deployment goes beyond the legal and ethical frameworks designed to govern them.”
On the domestic mass surveillance front, the group said that while there is data on American citizens everywhere in the form of surveillance cameras, geolocation data, social media posts, financial transactions, and more, “what does not yet exist is an AI layer that turns this sprawling, fragmented data landscape into a unified, real-time surveillance apparatus.” These data streams are currently isolated, but if AI is used to connect them, it could combine “facial recognition data with location history, transaction histories, social graphs, and behavioral patterns across hundreds of millions of people simultaneously,” they wrote.
When it comes to lethal autonomous weapons specifically, the group said they can be unreliable in novel or ambiguous circumstances that don’t match the environments they were trained in, meaning they “cannot be trusted to identify targets with complete accuracy, nor are they able to make the fine contextual trade-offs between achieving a target and accounting for side effects that a human can.” Additionally, the group wrote, the tendency of lethal autonomous weapons systems to hallucinate makes it important for humans to remain part of the decision-making process “before launching a lethal munition at a human target,” especially since a system’s chain of logic is often unavailable to operators and opaque even to its developers.
“We are diverse in our politics and philosophies, but united in the conviction that today’s frontier AI systems pose risks when deployed to enable domestic mass surveillance or to operate autonomous lethal weapons systems without human oversight, and that these risks require guardrails, whether through technical safeguards or usage restrictions,” the group behind the amicus brief wrote.