Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124

When will we take critics of AI seriously?
This is the main subtext of Elon Musk’s attempt to shut down OpenAI’s for-profit AI business. His lawyers say the organization was founded as a charity focused on the safety of artificial intelligence, and has lost its way in pursuit of profit. To prove this, they cite old emails and statements from the organization’s founders about the need for a counterweight to Google DeepMind.
Today, they called their only expert witness: Stuart Russell, a computer science professor at the University of California, Berkeley who has studied artificial intelligence for decades. His job was to give jurors a primer on artificial intelligence and to establish that the technology is dangerous enough to worry about.
Russell co-signed an open letter in March 2023 calling for a six-month moratorium on AI research. In a nod to the inconsistencies here, Musk signed the same letter, even as he was launching xAI, his own for-profit AI lab.
Russell told jurors and Judge Yvonne Gonzalez Rogers that there are a variety of risks associated with developing artificial intelligence, ranging from cybersecurity threats to misalignment and the winner-take-all nature of developing artificial general intelligence (AGI). Ultimately, he said, there is a tension between the pursuit of AGI and safety.
Russell’s larger concerns about the existential threats of unfettered AI were not aired in open court: objections from OpenAI’s lawyers led the judge to limit his testimony. But Russell has long been a critic of the arms race dynamic created by frontier labs around the world vying to reach AGI first, and has called on governments to regulate the field more tightly.
OpenAI’s lawyers spent their cross-examination establishing that Russell had not directly evaluated the organization’s corporate structure or its specific safety policies.
But this reporter (like the judge and jury) is left to weigh how much stock to put in the tension between corporate ambition and concerns about AI safety. Nearly all of OpenAI’s founders have warned sharply about the risks of AI while also touting its benefits, trying to build it as quickly as possible, all while making plans for the for-profit AI enterprises they would control.
From the outside, an obvious through line is the realization within OpenAI, soon after its founding, that the organization would simply need to spend far more on compute to succeed, and that the money could only come from for-profit investors. The founding team’s fear that AGI would fall into the hands of a single organization set off a search for capital that ultimately tore the team apart, creating the arms race we know today and bringing us to this lawsuit.
The same dynamic is already playing out at the national level: Sen. Bernie Sanders’s push for legislation imposing a moratorium on data center construction cites the AI concerns voiced by Musk, Sam Altman, Geoffrey Hinton and others. Hodan Omaar, who works at the think tank the Center for Data Innovation, took issue with Sanders citing their fears rather than their hopes, telling TechCrunch that it is “unclear why the public should dismiss everything tech billionaires say except when their words can be enlisted to fill in the gaps in a risky argument.”
Now, both sides of the case are asking the court to do precisely that: to take parts of Altman’s and Musk’s statements seriously while leaving out the parts that are less useful to their legal arguments.