Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124

In just over a week, negotiations over the Pentagon’s use of Anthropic’s Claude technology fell through, and the Trump administration designated Anthropic a supply chain risk. The AI company said it would fight the designation in court.
Meanwhile, OpenAI quickly announced a deal of its own, sparking backlash: users uninstalled ChatGPT, pushing Anthropic’s Claude to the top of the App Store charts, and at least one OpenAI executive resigned over concerns that the deal was announced without proper guardrails.
In the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what this means for other startups seeking to work with the federal government, especially the Pentagon. As Kirsten asked: “Are we going to see a bit of a change in course?”
This is an unusual situation in many ways, partly because OpenAI and Anthropic make products “that no one can shut up about,” Sean noted. More importantly, this is a dispute about “how their technology is or is not used to kill people,” so it naturally draws more scrutiny.
However, Kirsten believes that this situation should “give any startup pause.”
Read a preview of our conversation, edited for length and clarity, below.
Kirsten: I wonder if other startups are starting to look at what’s happened with the federal government, specifically the Pentagon and Anthropic, that debate and wrestling, and pause over whether they want to go after federal dollars. Will we see a slight change in tune?
Sean: I wonder about that too. I think not, to some extent, in the near term, just because when you really think about all the different companies, whether they’re startups or more established Fortune 500 companies, that work with the government, and in particular with the Department of Defense or the Pentagon, that work flies under the radar.
General Motors makes defense vehicles for the military and has done so for a very long time and has worked on all-electric versions of those vehicles and self-driving versions. There are things like this that happen all the time and never touch the zeitgeist. I think the problem that OpenAI and Anthropic have faced over the past week is that these companies make products that are used by a large number of people — and also, more importantly, that no one can shut up about.
So there’s such a spotlight on them, which naturally highlights their involvement to a level that I think most other companies contracting with the federal government — and in particular, any of the combat elements of the federal government — don’t necessarily have to deal with.
The only caveat I’d add is that a lot of the controversy in this dispute between Anthropic, OpenAI, and the Pentagon is about how their technologies are or aren’t being used to kill people, or in parts of missions that kill people. So not only do people care about these companies and know their brands, there’s an additional element here that feels more abstract when you think of GM as a defense contractor.
I don’t think we’ll see, like Applied Intuition or any of these other companies that have been framing themselves as dual-use, pull back much, just because I don’t see the spotlight on them and there’s not some kind of shared understanding of what that impact might be.
Anthony: This story is very unique and specific to these companies and personalities in many ways. I mean, there have been a lot of really interesting think pieces about the role of technology in government, and the role of artificial intelligence in government. I think these are all good and worthwhile questions to ask and explore.
However, I also think this is a very strange lens through which to examine some of these things, because Anthropic and OpenAI aren’t really that different in a lot of ways, or in the positions they take. It’s not like one company says, “Hey, I don’t want to work with the government,” and the other says, “Yes, I do.” Or one says, “You can do whatever you want,” and the other says, “No, I want to have restrictions.” They both, at least publicly, say: “We want to impose limits on how our AI is used.” Anthropic just seems to be digging in its heels on the point that you can’t change terms like that.
On top of that, there also seems to be a personal layer: Anthropic’s CEO and Emil Michael, whom many TechCrunch readers may remember from his Uber days and who is now the chief technology officer at the Department of Defense, reportedly don’t really like each other.
Sean: Yes, there is a very big personal-feud element here that we shouldn’t ignore.
Kirsten: Yes, a little. There is, but the stakes are a bit bigger than that. Backing up a bit, what we’re talking about here is the Pentagon and Anthropic getting into a conflict that Anthropic seems to have lost, although I have to say its technology is still widely used by the military and considered crucial. OpenAI has kind of stepped in, and that’s evolving and will likely change by the time this episode comes out.
The backlash against OpenAI has been interesting; I think ChatGPT uninstalls are up 295% since OpenAI closed the deal with the Department of Defense.
To me, all of this is noise around what’s really critical and serious: the Pentagon was seeking to change the terms of an existing contract. That’s really important, and it should give any startup pause, because the political machinery at work right now, especially at the Department of Defense, looks different. This is not normal. Contracts take a long time to get approved at the government level, and the fact that the Pentagon sought to change those terms is a problem.