At the beginning of 2024, Anthropic, Google, Meta, and OpenAI were united against the military use of their AI tools. But over the next 12 months, something changed.
In January, OpenAI quietly deleted its ban on using AI for "military and warfare" purposes, and soon after was reported to be working on "a number of projects" with the Pentagon. In November, the same week Donald Trump was re-elected president of the United States, Meta announced that the US government and select allies would be able to use Llama for defense applications. Days later, Anthropic announced that it too would allow its models to be used by the military, and that it had entered a partnership with the defense contractor Palantir. At the end of the year, OpenAI announced its own partnership with the defense startup Anduril. Finally, in February 2025, Google revised its AI principles to allow the development and use of weapons and technologies that could harm people. Over the course of a single year, concerns about the existential risks of AGI all but disappeared, and military use of AI became normalized.
Part of the change has to do with the enormous costs of building these models. Research on general-purpose technologies (GPTs) has often highlighted the importance of the defense sector in overcoming adoption problems. "GPTs develop faster when there is a large, demanding, revenue-generating application sector," the economist David J. Teece wrote in 2018, "as with the US Department of Defense's purchases of transistors and early microprocessors." The soft budget constraints and long time horizons of defense contracting, combined with often unclear metrics of success, make the military a highly desirable customer for new technologies. Given that AI startups in particular need to secure large and patient investment, the shift toward military funding may have been inevitable. But this explains neither the speed of the transformation nor the fact that all the leading US AI labs have moved in the same direction.
The past few years have dramatically changed the landscape of capitalist competition – from one guided by neoliberal free market ideals to one saturated with geopolitical concerns. To understand the shift from neoliberalism to geopolitics, one must examine the relationships between states and their Big Tech companies. Such state-capitalist relations were central to earlier formations of imperialism – Lenin famously described imperialism in his day as a fusion of monopoly capital and great powers – and remained influential throughout the twentieth century. In recent decades, this has taken the form of a broad consensus among the technological and political elite about the role of digital technology in innovation, growth, and state power.
But in recent years, this harmony of elite interests has collapsed. A series of overlapping processes, gathering particular momentum in the 2000s, has dismantled the old order, leaving behind the fragments of potential new arrangements in both the United States and China.
Until about the mid-2000s, the United States was dominated by what might be called the Silicon Valley consensus. Here there was broad agreement between the political elite and the technology elite about the role of technology in the world, about what was needed to allow that technology to flourish, about the purported American values it embodied, and about the requirements for capital accumulation in the technology sector. For both the technological elite and the political establishment, globalized communications, capital, data, and technology served their interests.
At its core, the Silicon Valley consensus was a shared belief among tech and political elites in the power of technology to create a US-led world of borderless trade and data. While the tech sector may (initially) have had more utopian motives than the country's hard-nosed geopolitical realists, both could see their projects realized through the same means.
In practice, this meant a free hand for the technology sector, with regulation either conspicuously absent or remarkably light-touch. Deregulation was of course a key element of the broader neoliberal period, but it applied especially to technology companies, with their ability to evade existing regulatory categories and "disrupt" existing rules. The absence of any significant federal privacy law, or of settled rules on the status of gig-economy workers, is indicative of this widespread willingness to let digital companies do as they please. Under President Bill Clinton, the Framework for Global Electronic Commerce established policies that, according to the international studies professor Henry Farrell, succeeded in "discouraging policymakers from seeking to tax or regulate" the digital economy, turning instead to voluntary, industry-led regulation. The basic belief here, one that persists to this day, was that any regulation would simply get in the way of the innovation and expansion of American technology and power.