OpenAI is sharing more details about its agreement with the Pentagon


By CEO Sam Altman’s own admission, OpenAI’s deal with the Department of Defense was “definitely rushed,” and “the optics don’t look good.”

After negotiations between Anthropic and the Pentagon broke down, President Donald Trump on Friday directed federal agencies to stop using Anthropic technology after a six-month transition period, and Secretary of Defense Pete Hegseth said he was classifying the AI company as a supply chain risk.

OpenAI then quickly announced its own deal to deploy models in classified environments. With Anthropic saying it draws red lines around the use of its technology in fully autonomous weapons or mass domestic surveillance, and Altman saying OpenAI has the same red lines, some obvious questions arose: Has OpenAI been honest about its safeguards? And why was it able to reach an agreement when Anthropic could not?

So, while OpenAI executives defended the agreement on social media, the company also published a blog post explaining its approach.

The post cited three areas in which it said OpenAI’s models cannot be used — mass domestic surveillance, autonomous weapons systems, and “high-risk automated decisions (e.g., social credit-style systems).”

The company said that unlike other AI companies that have “lowered or eliminated their safety barriers and relied primarily on usage policies as their primary safeguards in national security deployments,” OpenAI’s agreement protects its red lines “with a broader, multi-layered approach.”

“We retain full discretion over our security stack, deploy via the cloud, have authorized OpenAI personnel on hand, and have strong contractual protections,” the post said. “This is all in addition to the strong protections found in US law.”


“We don’t know why Anthropic couldn’t reach this deal, and we hope they and other labs will consider it,” the company added.

After the post was published, Mike Masnick of Techdirt claimed the deal “fully permits domestic surveillance,” because it stipulates that data collection need only comply with Executive Order 12333 (along with a number of other laws). Masnick described this as “how the NSA conceals its domestic surveillance by wiretapping communications *outside the United States* even if they contain information from/about US persons.”

In a post on LinkedIn, Katrina Mulligan, OpenAI’s head of national security partnerships, said that much of the discussion of the contract language assumes that “the only thing standing between Americans and the use of AI for mass domestic surveillance and autonomous weapons is a single use-policy provision in a single contract with the War Department.”

“That’s not how any of this works,” Mulligan said, adding, “The deployment architecture is more important than the contract language (…) By limiting our deployment to the cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware.”

Altman also fielded questions about the deal on X, where he admitted it was rushed and had led to significant backlash against OpenAI (to the extent that Anthropic’s Claude overtook OpenAI’s ChatGPT in the Apple App Store on Saturday). So why do it?

“We really wanted to de-escalate things, and we thought the deal offered was a good one,” Altman said. “If we are right and this leads to a détente between the (War Department) and industry, we will look like geniuses, a company that took a lot of pain to do things to help industry. If not, we will continue to be described as (…) hasty and incautious.”
