Anthropic has come out against a proposed Illinois law backed by OpenAI, which would protect AI labs from liability if their systems are used to cause large-scale harm, such as mass casualties or more than $1 billion in property damage.
The fight over the state bill, SB 3444, draws new battle lines between Anthropic and OpenAI over how AI technologies should be regulated. While AI policy experts say the legislation has little chance of becoming law, it nonetheless exposed political divisions between two of the nation’s leading AI labs that could become increasingly important as rival companies ramp up lobbying activity across the country.
Behind the scenes, Anthropic has been lobbying state Sen. Bill Cunningham, sponsor of SB 3444, and other Illinois lawmakers to either make major changes to the bill or stop it as is, according to people familiar with the matter. In an email to WIRED, a spokesperson for Anthropic confirmed the company’s opposition to SB 3444, and said it had promising conversations with Cunningham about using the bill as a starting point for future AI legislation.
“We oppose this bill,” Cesar Fernandez, head of U.S. state and local government relations at Anthropic, said in a statement. “Good transparency legislation should ensure public safety and accountability for companies developing this powerful technology, not provide a get-out-of-jail-free card against any liability.” He added: “We know that Senator Cunningham cares deeply about AI safety and we look forward to working with him on changes that will instead link transparency with true accountability to mitigate the most serious harms that frontier AI systems can cause.”
Cunningham’s representatives did not respond to a request for comment. A spokesperson for Illinois Governor J.B. Pritzker sent the following statement: “While the Governor’s Office will monitor and review many of the AI bills moving through the General Assembly, Governor Pritzker does not believe big tech companies should be given a complete shield that shirks the responsibilities they should have to protect the public interest.”
The crux of the dispute between OpenAI and Anthropic over SB 3444 comes down to who should be held liable in the event of an AI-powered disaster — a potential nightmare scenario that U.S. lawmakers have only recently begun to confront. If SB 3444 passes, an AI laboratory would not be liable if a bad actor used its AI model, for example, to create a bioweapon that killed hundreds of people, as long as the laboratory drafted its own safety framework and posted it on its website.
OpenAI argues that SB 3444 reduces the risk of serious harm from frontier AI systems while “still allowing this technology to get into the hands of people and businesses — small and large — in Illinois.”
The maker of ChatGPT says it has worked with states like New York and California to create a “coordinated” approach to regulating AI. “In the absence of federal action, we will continue to work with states — including Illinois — to work toward a consistent safety framework,” Liz Bourgeois, an OpenAI spokeswoman, said in a statement. “We hope these state laws will form a national framework that helps ensure the United States continues to lead.”
On the other hand, Anthropic argues that companies developing frontier AI models should be at least partially held responsible if their technology is used to cause widespread societal harm.
Some experts say the bill would dismantle existing regulations meant to deter companies from behaving badly. “Liability already exists under common law and provides a strong incentive for AI companies to take reasonable steps to prevent foreseeable risks from their AI systems,” says Thomas Woodside, co-founder and senior policy analyst at the Secure AI Project, a nonprofit that has helped develop and defend AI safety laws in California and New York. “SB 3444 would take the extreme step of virtually eliminating liability for serious damages. But it’s a bad idea to weaken liability, which in most states is the most important form of legal accountability already in place for AI companies.”