OpenAI supports a bill that would limit liability for mass deaths or financial disasters caused by artificial intelligence


OpenAI is lending its support to an Illinois state bill that would protect AI labs from liability in cases where artificial intelligence models are used to cause serious societal harm, such as death or serious injury to 100 or more people, or at least $1 billion in property damage.

This effort appears to mark a shift in OpenAI's legislative strategy. So far, OpenAI has largely played defense, opposing bills that could have made AI labs liable for harms caused by their technology. Several AI policy experts told WIRED that SB 3444, which could set a new standard for the industry, is a more extreme measure than bills OpenAI has supported in the past.

The bill, SB 3444, would protect frontier AI developers from liability for "substantial harm" caused by their frontier models, as long as they do not intentionally or negligently cause such an incident and they publish safety, security, and transparency reports on their websites. The bill defines a frontier model as any AI model trained using more than $100 million in computational costs, a threshold that likely applies to America's largest AI labs, such as OpenAI, Google, xAI, Anthropic, and Meta.

"We support approaches like this because they focus on what is most important: reducing the risk of serious harm from the most advanced AI systems while allowing this technology to get into the hands of people and businesses — small and large — in Illinois," Jamie Radice, an OpenAI spokesperson, said in an email statement. "It also helps avoid a patchwork of state-specific rules and move toward clearer and more consistent national standards."

Under its definition of serious harm, the bill lists some common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in behavior of its own accord that, if committed by a human, would constitute a criminal offense and lead to those extreme outcomes, that would also count as serious harm. Under SB 3444, if an AI model commits any of these actions, the AI lab behind the model may not be held liable, as long as the harm was not intentional or negligent and the lab publishes its reports.

Federal and state legislatures in the United States have not yet passed any laws that specifically spell out whether developers of AI models, such as OpenAI, can be held liable for these types of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise new safety and cybersecurity challenges, such as those involving Anthropic's Claude, these questions seem increasingly pressing.

In her testimony in support of SB 3444, OpenAI Global Affairs team member Caitlin Niedermeier also argued in favor of a federal framework for regulating AI. Niedermeier sent a message consistent with the Trump administration's crackdown on state AI safety laws, claiming that it is important to avoid "a patchwork of inconsistent state requirements that can create friction without measurably improving safety." This also aligns with the broader view in Silicon Valley in recent years, which has generally argued that it is paramount that AI legislation not hinder America's place in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeier argued that such laws can be effective if they "promote a path toward coordination with federal regulations."

"At OpenAI, we believe the north star in frontier regulation should be the safe deployment of the most advanced models in a way that also maintains U.S. leadership in innovation," Niedermeier said.

Scott Weiser, policy director at the Secure AI Project, tells WIRED he thinks the bill has little chance of passing, given Illinois' reputation for aggressively regulating technology. "We surveyed people in Illinois and asked them whether they thought AI companies should be exempt from liability, and 90% of people were against that," Weiser says. "There is no reason why existing AI companies should face reduced liability."
