
The Pentagon’s ultimatum deadline is looming over Anthropic: allow the US military unlimited access to its technology, including for mass surveillance and fully autonomous lethal weapons, or potentially be labeled a “supply chain risk” and lose hundreds of billions of dollars in contracts. Amid mounting public statements and threats, tech workers across the industry are looking at their companies’ government and military contracts and wondering what kind of future they are helping to build.
While the Department of Defense has spent weeks negotiating with Anthropic over removing its guardrails, including allowing the US military to use Anthropic’s AI to kill targets without human oversight, xAI has reportedly already agreed to such terms, while OpenAI is reportedly trying to adopt the same red lines as Anthropic. The situation has left employees at some companies with defense contracts feeling betrayed. “When I joined the technology industry, I thought technology was about making people’s lives easier,” one Amazon Web Services employee told The Verge. “But now it seems it’s all about making it easier to monitor, deport, and kill people.”
In conversations with The Verge, current and former employees at OpenAI, xAI, Amazon, Microsoft, and Google expressed similar sentiments about the changing ethical landscape of their companies. Organized groups representing 700,000 technology workers at Amazon, Google, Microsoft, and others signed a letter asking the companies to reject the Pentagon’s demands. But many saw little chance that their employers, whether directly involved in this conflict or not, would question or push back against the government.
“From their perspective, they want to keep making money without having to talk about it,” said one Microsoft software engineer.
So far, Anthropic has held its ground. “The Pentagon’s threats do not change our position: We cannot in good conscience comply with their request,” Anthropic CEO Dario Amodei said in a statement on Thursday. But Amodei did not rule out lethal autonomous weapons at some point in the future; rather, he said the technology is not reliable enough “today.” Amodei offered to partner with the Department of Defense on “research and development to improve the reliability of these systems, but they did not accept this offer,” he wrote in the statement.
In the past few years, however, big tech companies have relaxed their rules or changed their mission statements to expand into lucrative government and military contracts. In 2024, OpenAI removed the ban on “military and warfare” use cases from its terms of service; it then signed a deal with weapons maker Anduril and later a contract with the Department of Defense. And just this week, Anthropic changed its often-touted responsible scaling policy, abandoning a long-standing safety pledge in order to remain competitive in the AI race. Major technology companies such as Amazon, Google, and Microsoft have also allowed defense and intelligence agencies to use their AI products, with some agreeing to work with ICE despite growing protest from the public and staff alike.
In past years, tech workers’ resistance to partnerships and deals they view as harmful to society has sometimes led to significant change. In 2018, for example, thousands of Google employees successfully lobbied the company to end Project Maven, a partnership with the Pentagon, and Microsoft workers presented leadership with a petition against the company’s ICE contract signed by about 500 employees, though Microsoft still works with the agency. In 2020, following the killing of George Floyd, tech companies issued public statements and financial commitments supporting the Black Lives Matter movement. But in recent months, the industry has witnessed a very different reality: a culture of fear and silence, especially around cooperation with the Trump administration and ICE, tech workers recently told The Verge.
The companies are following in the footsteps of longtime military technology and surveillance contractors, which have grown more brazen. That includes Palantir, co-founded by Peter Thiel, whose CEO Alex Karp recently told shareholders, “Palantir is here to disrupt and make the institutions we partner with the very best in the world and, when it’s necessary, to scare enemies and on occasion kill them. We hope you’re on board.” (Protect Democracy, a nonprofit organization, recently put out an open letter calling for congressional oversight of the Department of Defense’s demands for unrestricted use of artificial intelligence.)
OpenAI, Google, Microsoft, xAI and Amazon did not immediately respond to requests for comment.
“Everyone is already working on killer robots at this point,” a former xAI employee told The Verge, adding that he believes everyone will follow in the footsteps of Palantir, Anduril, and xAI, since the government’s attitude is that if a company doesn’t acquiesce, it is “against the country’s interest, to some extent.” He said there is “a big push to work with the military, and the trend is that it’s cool to do it… you’re a patriot if you do it.”
One Google employee described the situation as a “disgusting display of control from Hegseth.” He added: “Over and over again, artificial intelligence presents us with choices about who we want to be and what kind of society and future we want. And they are coming at us fast, under the least thoughtful and least principled leadership we can imagine. I can only thank Anthropic for insisting on the decent path and using its indispensable leverage to chart a course towards a humane world and a humane future.”
The AWS employee told The Verge that “boundaries have certainly been eroded in terms of which customers Big Tech companies want to court” and that there has been “a deliberate whitewashing of the effects of lucrative new deals.” She recalled recently receiving an email from an AWS executive touting a $580 million-plus contract with the US Air Force, among other partnerships, as evidence of Amazon’s AI successes, without acknowledging the broader scope or harms involved.
“If the government is going to pursue technologies like this, it needs to build them itself and be accountable for those decisions,” she said.
The erosion may have extended to internal culture as well, normalizing the idea that companies must always be watching. The AWS employee said she and her colleagues are being tracked about how much they use AI in their jobs, how often they work from the office, and more. “I can see myself and my co-workers becoming less sensitive to the surveillance we are exposed to at work, and I worry that this means we are obeying, conforming, and giving up too much up front,” she said.
The general sentiment within the AI industry over the past few weeks has “reopened the door to further discussion…about the values and future of the technology,” an OpenAI employee said. The standoff between the Pentagon and Anthropic, recent ICE headlines, and rapid advances in artificial intelligence were among the key factors that opened up those discussions internally, the employee said.
However, people who are immigrants or in more vulnerable situations are more afraid to speak up, the OpenAI employee said.
Anthropic appears to be in a position to say no and stay afloat, the former xAI employee said. Its focus on enterprise rather than consumer AI business may make it more sustainable even without government contracts, providing it some leverage. “I was surprised to see them stand on some form of principle,” one Microsoft software engineer said of Anthropic, speaking generally. “I don’t know how long this will last.”
Whether Anthropic will hold out seems to be the question on everyone’s mind. The Pentagon has reportedly already asked two major defense contractors, Boeing and Lockheed Martin, for information about their reliance on Anthropic’s Claude, as it moves to classify Anthropic as a “supply chain risk,” a designation typically reserved for threats to national security and rarely assigned to a US company. It is also reportedly considering invoking the Defense Production Act to try to force Anthropic to comply with its request.
If Anthropic backs down, the Microsoft employee said, there is little chance that it, or any other AI company, will step away from killer robots and surveillance. “Once you get into the Department of Defense or whatever we call it now… I think it might be difficult for them to actually have the oversight that they claim. It would be more profitable for them to give themselves permission to do the thing that makes the most money.”
As for Microsoft itself, he said he does not expect the company to adhere to any kind of ethical principles. The company has worked extensively with the Israel Defense Forces, including on mass surveillance of Palestinians and dissidents, despite employee protests. (Microsoft said it ended some parts of the partnership last year.)
Another Microsoft employee told The Verge that although “Microsoft is committed to responsible AI,” it is “currently trying to play both sides for profit rather than making a meaningful commitment to responsible AI.”
But this is nothing new, said one employee of the AI startup. In her view, the boundaries were often “fuzzy, especially in AI,” regarding the types of things companies were willing to allow with technological power. “A lot of it has been happening under the surface for as long as AI has been around.”
“We need solidarity across technology and a cohesive, worker-led vision for AI now more than ever,” the AWS employee stressed.
“The safeguards Anthropic is trying to maintain are no mass surveillance of Americans and no fully autonomous weapons, which just means they want a human in the loop if a machine is going to kill someone,” she added. “Even if this technology were perfect, which it is not, I think most Americans would not want machines killing people without human supervision in an America that has become an AI-powered mass surveillance state.”