OpenAI and Google Workers File a Friend-of-the-Court Brief Supporting Anthropic in Its Legal Battle Against the US Government


More than 30 employees from OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief on Monday in support of Anthropic in its legal battle against the US government.

“If allowed to proceed, this effort to penalize a leading U.S. AI company will undoubtedly have consequences for U.S. industrial and scientific competitiveness in AI and beyond,” the staff wrote.

The brief was filed just hours after Anthropic filed a lawsuit against the Department of Defense and other federal agencies over the Pentagon’s decision to designate the company as a “supply chain risk.” The sanction, which severely limits Anthropic’s ability to work with military contractors, took effect after Anthropic’s negotiations with the Pentagon collapsed. The AI startup is seeking a temporary restraining order so it can continue working with military partners while the lawsuit proceeds; the amicus brief supports precisely that motion.

Signatories of the brief include Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak, among others. Amicus briefs are legal filings submitted by parties who are not directly involved in a court case but who have relevant expertise in its subject matter. The employees signed in their personal capacities and do not represent the views of their companies, according to the brief.

OpenAI and Google did not immediately respond to WIRED’s request for comment.

The Pentagon’s decision to blacklist Anthropic “creates unpredictability in its industry, undermining American innovation and competitiveness” and “chills professional debate about the benefits and risks of frontier AI systems,” the friend-of-the-court brief says. The brief notes that the Pentagon could simply have dropped its Anthropic contract if it no longer wanted to abide by its terms.

The brief also argues that the red lines Anthropic says it asked for, including not using its AI for mass domestic surveillance or to develop lethal autonomous weapons, reflect legitimate concerns that require adequate guardrails. “In the absence of common law, the contractual and technological requirements that AI developers impose on the use of their systems represent a vital safeguard against catastrophic misuse,” the brief says.

Several other AI leaders have also publicly questioned the Pentagon’s decision to classify Anthropic as a supply chain risk. OpenAI CEO Sam Altman said in a post on social media that “imposing an SCR (supply chain risk) rating on Anthropic would be very bad for our industry and our country,” adding, “This is a very bad decision by the Department of Defense, and I hope they will retract it.” As Anthropic’s relationship with the Pentagon soured, OpenAI quickly signed its own contract with the US military, a move some criticized as opportunistic.
