Anthropic denies it could sabotage its AI tools during wartime


Anthropic cannot manipulate its Claude generative artificial intelligence model once the U.S. military has it in operation, a company executive wrote in a court filing on Friday. The statement came in response to Trump administration accusations that the company could interfere with its AI tools during wartime.

“Anthropic never had the ability to cause Claude to stop working, change its functionality, close access to it, or impact or jeopardize military operations,” wrote Thiago Ramasamy, head of public sector at Anthropic. “Anthropic does not have the access required to disable the technology or change model behavior prior to or during ongoing operations.”

The Pentagon has been sparring with the leading AI lab for months over how its technology should be used for national security, and what the limits on that use should be. This month, Defense Secretary Pete Hegseth designated the company a supply chain risk, a classification that will prevent the Department of Defense from using the company’s software, including through contractors, in the coming months. Other federal agencies are also dropping Claude.

Anthropic has filed two lawsuits challenging the constitutionality of the ban and seeking an emergency order to block it. In the meantime, customers have already begun canceling deals. A hearing in one of the cases is scheduled for March 24 in federal district court in San Francisco, and the judge could decide shortly afterward whether to temporarily halt the ban.

In a filing earlier this week, government lawyers wrote that the Department of Defense “is not required to tolerate the risk of critical military systems being compromised at pivotal moments for national defense and active military operations.”

The Pentagon has been using Claude to analyze data, write memos, and help develop battle plans, WIRED has reported. The government argues that Anthropic could disrupt active military operations by cutting off access to Claude or pushing out malicious updates if the company does not approve of certain uses.

Ramasamy rejected this possibility. “Anthropic does not maintain any backdoor or remote kill switch,” he wrote. “Anthropic personnel, for example, cannot log into DoW systems to modify or disable models during operations; the technology simply does not work that way.”

He went on to say that Anthropic could only deliver updates with the approval of the government and its cloud provider, in this case Amazon Web Services, though he did not name it specifically. Ramasamy added that Anthropic does not have access to prompts or other data that military users enter into Claude.

Anthropic executives assert in court filings that the company does not want veto power over military tactical decisions. Sarah Heck, head of policy, wrote in a court filing on Friday that Anthropic was willing to guarantee as much in a contract it proposed on March 4. “For the avoidance of doubt, Anthropic understands that this authorization does not grant or confer any right to control or veto lawful operational decision-making of the War Department,” the proposal said, according to the filing, referring to an alternative name for the Pentagon.

Heck said the company was also willing to accept language that would address its concerns about Claude being used to help carry out lethal strikes without human oversight. But the negotiations ultimately collapsed.

For now, the Department of Defense said in court filings that it is “taking additional measures to mitigate supply chain risks” posed by the company by “working with third-party cloud providers to ensure Anthropic leadership cannot make unilateral changes” to Claude systems currently in place.
