President Donald Trump on Friday called on US federal agencies to stop using Anthropic's Claude artificial intelligence after the company refused to grant the Department of Defense permission to use it for mass domestic surveillance or for fully autonomous weapons systems.
In a post on Truth Social, the platform he owns, the president ordered the federal government to "immediately stop" using Anthropic's tools, with a six-month phase-out for agencies like the Department of Defense. He also denounced Anthropic as a "radical leftist, woke company." The post marks the latest step in a standoff between Anthropic and the federal government that escalated significantly this week.
Claude is widely used in the Pentagon, including in classified systems, but the Trump administration has sought to use the technology for "any lawful purpose." Anthropic has insisted that under its current contract the technology not be used for mass surveillance of Americans or for autonomous offensive weapons systems that lack human intervention.
Earlier this week, Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that he would use rarely invoked powers to either force Anthropic to let the Pentagon use Claude for any lawful purpose or designate the company as a supply chain risk, jeopardizing its use by government or defense contractors. Hegseth gave Anthropic a Friday deadline to comply.
Amodei said in a statement that the company, which was founded with a stated focus on AI safety, "cannot in good conscience comply with (the Pentagon's) request" to eliminate contract provisions stipulating that Claude cannot be used in fully autonomous weapons systems or for domestic surveillance.
Read more: Amazon’s Ring cameras delve deeper into police and government surveillance
Amodei raised concerns that the law had not kept pace with the possibility of mass surveillance of Americans. The government can already buy information like Americans’ private browsing histories and individual movement logs without a warrant, but artificial intelligence increases the risks. “Powerful AI makes it possible to aggregate this scattered and individually innocuous data into a comprehensive picture of anyone’s life — automatically and at scale,” he wrote.
It is common in contract law for parties to seek clarification of terms, Michael Pastor, dean of technology law programs at New York Law School, said in an email. "Anthropic is right to press hard on what 'legitimate purposes' means," he said. "If the Pentagon is unwilling to clarify whether it will use Anthropic's technology for mass domestic surveillance, that raises flags that Anthropic appears justified in waving."
Anthropic’s Claude is said to be the most widely used AI system by the US military. Alternatives could include tools from OpenAI, Google, or Elon Musk’s xAI.
In an internal memo reported by the Wall Street Journal on Friday, OpenAI CEO Sam Altman reportedly told employees that the company has the same red lines as Anthropic: no mass domestic surveillance and no autonomous offensive weapons. Altman said he believes these guardrails can be managed through technical requirements, such as requiring models to be deployed in the cloud. (Disclosure: Ziff Davis, the parent company of CNET, in 2025 filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis's copyrights in training and operating its AI systems.)
Google and OpenAI employees circulated a petition calling on their companies to stand with Anthropic in refusing to allow AI models to be used for domestic mass surveillance or fully autonomous lethal weapons systems. The Pentagon, the petition said, "is trying to divide each company out of fear that the other will surrender. This strategy will only succeed if none of us knows the position of the others."
As in consumer technology, AI systems have been widely adopted in government and military settings. These tools have grown significantly in capability in just the past few years, and the pace of change has not slowed. Regulation and oversight of AI have not kept pace. By making surveillance easier and cheaper, AI has amplified the potential harms of corporate or government monitoring.
The dispute could have major ramifications for how much leverage governments and technology companies hold over each other when they disagree about the appropriate use of technology, Pastor said. "One might feel that surrendering here opens a Pandora's box of uses for which Claude can be deployed," he said.