
When Anthropic last year became the first major artificial intelligence company licensed by the US government for classified use – including military applications – the news made little noise. But this week a second development hit like a cannonball: the Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI company objects to participating in some lethal operations. The so-called Department of War has classified Anthropic as a “supply chain risk,” a scarlet letter typically reserved for companies that do business with countries under scrutiny by federal agencies, such as China; it means the Pentagon will not do business with companies that use Anthropic’s AI in their defense work. In a statement to WIRED, the Pentagon’s chief spokesman, Sean Parnell, confirmed that Anthropic was in the hot seat. “Our nation demands that our partners be ready to help our warfighters win any battle,” he said. “At the end of the day, this is about our forces and the safety of the American people.” The message is aimed at other companies as well: OpenAI, xAI, and Google, which currently hold unclassified contracts with the Department of Defense and are jumping through the hoops required to obtain their own high-level clearances.
There’s a lot to unpack here. For one thing, there’s the question of whether Anthropic is being punished for complaining that its AI model, Claude, was used as part of the raid to oust Venezuelan President Nicolás Maduro (so it has been reported; the company denies this). There’s also the fact that Anthropic publicly supports regulation of AI, an unusual position in the industry and one at odds with the administration’s policies. But there is a bigger and more troubling question: will the government’s demands for military use make AI itself less safe?
Researchers and executives believe that artificial intelligence is the most powerful technology ever invented. Almost all current AI companies were founded on the premise that it is possible to achieve artificial general intelligence, or even superintelligence, in a way that prevents harm at scale. Elon Musk, founder of xAI, has long been a proponent of reining in AI. He co-founded OpenAI because he feared the technology was too dangerous to be left in the hands of profit-seeking companies.
Anthropic has carved out a space as the most safety-conscious of them all. The company’s mission is to integrate guardrails deep into its models so that bad actors cannot exploit AI’s darkest potential. Isaac Asimov said it first and best in his Laws of Robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm. Even when AI becomes smarter than any human on Earth – a possibility AI leaders fervently believe in – those guardrails must remain.
So it seems paradoxical that leading AI labs are striving to get their products into cutting-edge military and intelligence operations. As the first major lab on a classified contract, Anthropic provides the government with “a custom set of Claude Gov models designed exclusively for US national security customers.” However, Anthropic says it did so without violating its own safety standards, including a ban on using Claude to produce or design weapons. Anthropic CEO Dario Amodei has specifically said he doesn’t want Claude involved in autonomous weapons or AI-powered government surveillance. But that may not fly with the current administration. Department of Defense CTO Emil Michael (a former Uber executive) told reporters this week that the government will not tolerate an AI company limiting how the military uses AI in its weapons. “If there’s a swarm of drones coming at a military base, what are your options for shooting them down? If human reaction time isn’t fast enough… how are you going to do it?” he asked rhetorically. So much for the First Law of Robotics.
There is a good argument that effective national security requires the best technology from the most innovative companies. While some technology companies backed away from working with the Pentagon even a few years ago, in 2026 they are generally flag-waving would-be military contractors. I have yet to hear an AI executive talk about their models being linked to lethal force, with the exception of Palantir CEO Alex Karp, who is not ashamed to say, with evident pride: “Our product is sometimes used to kill people.”