
US President Donald Trump announced on Friday that he has instructed every federal agency to “immediately stop” using Anthropic’s AI tools. The move comes after weeks of clashes between Anthropic and senior officials over military applications of artificial intelligence.
“The left-wing nerds at Anthropic made a disastrous mistake while trying to strengthen the War Department,” Trump said in a post on Truth Social.
Trump said there will be a “six-month phase-out period” for agencies using Anthropic, which could allow time for further negotiations between the government and the AI startup.
The Pentagon and Anthropic did not immediately respond to requests for comment.
The Department of Defense sought to change the terms of a deal struck with Anthropic and other companies last July, removing restrictions on how AI can be deployed and instead allowing “all lawful use” of the technology. Anthropic objected to the change, arguing that it could allow artificial intelligence to be used to fully control lethal autonomous weapons or to conduct mass surveillance of American citizens.
The Pentagon does not currently use artificial intelligence in these ways and has said it has no plans to do so. However, senior Trump administration officials have objected to the idea of a civilian technology company dictating how the military uses such important technology.
Anthropic was the first major AI lab to work with the US military, through a $200 million deal it signed with the Pentagon last year. It created several custom models, known as Claude Gov, which have fewer restrictions than its standard models. Google, OpenAI, and xAI signed similar deals around the same time, but Anthropic is the only AI company currently working with classified systems.
Anthropic’s models are available through platforms provided by Palantir and through Amazon’s cloud platform for classified military work. Claude Gov is currently used largely for mundane tasks, such as writing reports and summarizing documents, but it is also used for intelligence analysis and military planning, according to a source familiar with the situation who spoke to WIRED on condition of anonymity because they are not authorized to discuss the matter publicly.
In recent years, Silicon Valley has gone from largely avoiding defense work to increasingly embracing it, with some companies eventually becoming full-fledged military contractors. The battle between Anthropic and the Pentagon is now testing the limits of that transformation. This week, several hundred workers at OpenAI and Google signed an open letter supporting Anthropic and criticizing their own companies’ decisions to remove restrictions on the military use of AI.
In a memo sent to OpenAI employees today, CEO Sam Altman said the company agreed with Anthropic and also considers mass surveillance and fully autonomous weapons a “red line.” Altman added that the company will try to reach a deal with the Pentagon that would allow it to continue working with the military, The Wall Street Journal reported.
The public dispute between the Pentagon and Anthropic began after Axios reported that American military commanders had used Claude to help plan the operation to capture Venezuelan President Nicolás Maduro. After the operation, a Palantir employee conveyed Anthropic’s concerns to US military commanders about how its models were being used. Anthropic denied raising concerns or interfering with the Pentagon’s use of its technology.
The dispute between Anthropic and the Department of Defense has escalated in recent days, with officials publicly exchanging barbs with the AI company on social media.
Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei earlier this week and gave the company until Friday to commit to changing the terms of its contract to allow “all lawful use” of its models. Hegseth praised Anthropic’s products during the meeting and said the Defense Department wanted to continue working with Anthropic, according to a source familiar with the interaction who was not authorized to discuss it publicly.