
The contract dispute between the US Department of Defense and artificial intelligence developer Anthropic, which broke down at the end of February, revealed in stark terms how laws and regulations have failed to keep up with AI capabilities.
The Pentagon wanted to be able to use Anthropic's Claude AI "for all lawful purposes," while Anthropic wanted to prevent the military from using it for mass domestic surveillance or fully autonomous weapons systems. After Anthropic refused to meet the government's demands, President Donald Trump and Defense Secretary Pete Hegseth said they would declare the company a "supply chain risk," prohibiting the use of its products in defense contract work. The Pentagon did so, and Anthropic filed a lawsuit Monday in federal court challenging the designation, calling it an "unprecedented and unlawful" attack on the company's right to free expression.
Pentagon officials said the issue is moot because current law does not permit such surveillance and the department has no plans to use the tool for autonomous weapons systems. But the laws and regulations aren't actually that clear, according to privacy and technology experts, and a contract dispute between a private company and a federal agency is not the place to settle the question.
"This week has exposed a real governance vacuum, and it should serve as a wake-up call for Congress," said Hamza Chaudhry, AI and national security lead at the Future of Life Institute.
Read more: Congress is not stepping up its efforts to regulate AI. Where does that leave us now?
The immediate result of the contract dispute was that the Pentagon made a deal with OpenAI instead. That deal was less explicit about restrictions on using the company's products for mass surveillance or autonomous weapons, but OpenAI leaders said this week they had taken steps to strengthen those guardrails. CEO Sam Altman said in a post on X that the Pentagon confirmed the technology will not be used by the department's intelligence agencies.
(Disclosure: Ziff Davis, the parent company of CNET, in 2025 filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis’s copyrights in training and operating its AI systems.)
OpenAI research scientist Noam Brown posted on X that he believes the world "should not rely on trust in AI labs or intelligence agencies" to ensure things like safety. "I know that legislation can be slow at times, but I fear the slippery slope as we become accustomed to circumventing the democratic process to make important policy decisions," he wrote.
The question now is whether and how Congress will address these issues.
The big risk in using AI for domestic surveillance is not necessarily that Claude or ChatGPT will spy on Americans directly. It's that these tools will be used to turn data the government already has, or can buy from private data brokers without needing a warrant, into information that might otherwise require a warrant.
Personal data is already being collected from you, possibly from the device you are using to read this. It includes information about your browsing history, your location data, and who you talk to or associate with. Private companies, such as app developers, can collect that data even if you don’t realize it and sell it to other companies or intelligence agencies. But until recently, it has been difficult for governments to process all of this in a way that makes monitoring easy. Artificial intelligence has changed that.
Anthropic CEO Dario Amodei specifically cited this risk in a February 26 statement detailing the company's reasons for holding to its red lines. "Powerful AI makes it possible to aggregate this scattered and individually innocuous data into a comprehensive picture of anyone's life – automatically and at scale."
The other primary disagreement is that Anthropic wanted to prevent the Pentagon from giving Claude full control of weapons systems without "a human in the loop." Using an AI tool to help select targets – which is reported to have happened with Claude during the US war in Iran – is not off-limits for Anthropic or any of the major AI companies, because a human is involved in verification and decision-making. What the company objected to was AI models making those decisions without human supervision. Current frontier models "simply are not reliable enough to operate fully autonomous weapons," Amodei wrote.
Greg Nojeim, senior counsel and director of the Security and Surveillance Project at the Center for Democracy and Technology, said it's clear that AI experts don't think the models are ready for these kinds of uses, if they ever will be.
“It is striking that the Pentagon rejects this advice and insists on the ability to use this artificial intelligence tool to kill people without human intervention,” he said.
The Department of Defense has argued that it cannot actually use fully autonomous weapons, but Chaudhry told me the commonly cited guidance (PDF) on the issue doesn't truly stop it from doing so. The Department of Defense and Anthropic did not respond to CNET's requests for comment on this story.
Regardless, experts say, the question of the use of such weapons is not one that should be settled by unelected federal bureaucrats, military commanders or private companies. Elected officials need to take it up.
The question of how AI should be regulated, and who should do it, is not new. The Trump administration has called for a light touch in telling AI companies what to do, despite evidence of harms ranging from chatbots encouraging suicide to AI's erosion of personal privacy. States have tried to rein in AI developers to address these issues but face opposition from a federal government determined to decide for itself how to engage with the technology.
If AI is used by the military and federal spy services, the question of who should regulate it is clear: Congress.
“Unelected leaders in private sector companies cannot be relied upon to use a private contract to fill a gap that democratically elected legislators have not legislatively filled,” Chaudhry said. “What we need are legal redlines — clear, permanent, democratically enacted rules about what AI can and cannot be used for in national security contexts, as AI transforms national security.”
Nojeim said AI surveillance "is not the type of behavior that the military should self-permit." Congress will consider reauthorizing part of the Foreign Intelligence Surveillance Act next month, and could use that opportunity to determine whether intelligence agencies need warrants when using purchased data.
“Ideally, Congress would step in and limit the government’s ability to purchase data about Americans and bypass court authorization requirements,” he said. “Ideally, Congress would set the rules around how the Department of Defense can protect Americans from AI-powered surveillance and set rules around the use of autonomous weapons that can kill without a human in the loop.”
Congress has a host of other AI-related regulatory issues to consider, but the debate over using AI for surveillance and autonomous weapons is interesting and could spur faster action.
The Pentagon’s retaliation against Anthropic — its formal designation this week of the company as a supply chain risk — could have a chilling effect on other companies concerned with how the government uses its technology.
"This sets a precedent where the government can retaliate against a company that imposed safety limits on the use of its technology because it knows more about the risks and reliability of its technology than the government does," Nojeim said. "This precedent will make us all less safe."
Anthropic said Thursday that it had received a letter from the Department of Defense designating it a supply chain risk, and that the letter's language was narrower than the broad threats administration officials made the previous week. "With respect to our customers, this clearly applies only to the use of Claude by customers as a direct part of contracts with the War Department, and not all use of Claude by customers with such contracts," Amodei said in a statement, using Hegseth's preferred name for the department.
Despite the conflict and its designation as a supply chain risk, the US military has continued to use Anthropic tools, including in extensive ways during the current war in Iran. Amodei said Anthropic will continue to provide its AI models to the military and national security groups “at a nominal cost and with ongoing support from our engineers” as long as it is allowed to do so.
"Anthropic and the War Department have more in common than our differences," Amodei said.