New AI regulation gives Californians a rare inside look at the development of powerful AI models


By Harry Johnson, CalMatters

"A
A new California law requires tech companies to disclose how they manage catastrophic risks from artificial intelligence systems. The Dreamforce conference hosted by Salesforce in San Francisco on September 18, 2024. Photo by Florence Middleton for CalMatters

This story was originally published by CalMatters. Sign up for their newsletters.

Tech companies that build large, sophisticated AI models will soon have to share more information about how the models can affect society and give their employees ways to warn the rest of us if things go wrong.

As of Jan. 1, a law signed by Gov. Gavin Newsom extends whistleblower protections to employees at companies like Google and OpenAI whose work includes assessing the risk of critical safety incidents. It also requires major AI model developers to publish frameworks on their websites describing how the company responds to critical safety incidents and assesses and manages catastrophic risk. Fines for violating the law can reach $1 million per violation. Under the law, companies must report critical safety incidents to the state within 15 days, or within 24 hours if they believe the risk poses an immediate threat of death or injury.

The law began as Senate Bill 53, authored by state Sen. Scott Wiener, a Democrat from San Francisco, to address the catastrophic risk posed by advanced AI models, sometimes called frontier models. The law defines catastrophic risk as a case where the technology could kill more than 50 people through a cyberattack or injure people with a chemical, biological, radiological or nuclear weapon, or a case where the use of AI results in theft or damage of more than $1 billion. It examines those risks in the context of an operator losing control of an AI system, for example because the AI has tricked them or taken independent action, situations that are largely considered hypothetical.

The law increases the information AI makers must share with the public, including a transparency report that must describe a model’s intended uses, any limitations or conditions on its use, how the company assesses and addresses catastrophic risk, and whether those efforts have been reviewed by an independent third party.

The law will bring much-needed disclosure to the AI industry, said Rishi Bommasani, part of a Stanford University group that tracks transparency around AI. Only three of the 13 companies his group recently studied regularly produce incident reports, and the transparency scores his group issues to such companies have declined on average over the past year, according to a newly released report.

Bommasani was also the lead author of a report commissioned by Newsom that heavily influenced SB 53 and calls transparency key to public trust in AI. He believes the effectiveness of SB 53 depends largely on the government agencies tasked with implementing it and the resources allocated to them to do so.

“You can write any law in theory, but the practical impact of it is very much shaped by how you apply it, how you enforce it and how the company is engaged with it,” he said.

The law was influential even before it took effect. New York Gov. Kathy Hochul credited it as the basis for the AI Transparency and Safety Act, which she signed on Dec. 19. The similarity will grow, City & State New York reported, as that law will be “substantially rewritten next year largely to align with California’s language.”

Limitations and Enforcement

The new law is insufficient no matter how well it is enforced, critics say. Its definition of catastrophic risk does not include issues such as the impact of AI systems on the environment, their ability to spread misinformation, or their potential to perpetuate historical systems of oppression such as sexism or racism. The law also does not apply to AI systems used by governments to profile people or assign ratings that could lead to denial of government services or accusations of fraud, and it only targets companies making $500 million in annual revenue.

Its transparency measures also fall short of full public visibility. In addition to publishing transparency reports, AI developers must send incident reports to the state Office of Emergency Services (OES) when things go wrong. Members of the public can also contact the agency to report catastrophic incidents.

But the contents of incident reports submitted to OES by companies or their employees cannot be made available to the public through records requests; instead, they will be shared with members of the California Legislature and Newsom. Even then, they can be redacted to hide information that companies characterize as trade secrets, a common way companies prevent information about their AI models from being shared.

Bommasani hopes that additional transparency will come from Assembly Bill 2013, a measure that became law in 2024 and also takes effect Jan. 1. It requires companies to disclose additional details about the data they use to train AI models.

Some elements of SB 53 don’t take effect until next year. Beginning in 2027, the Office of Emergency Services will produce a report on the critical safety incidents the agency receives from the public and from major frontier model developers. That report may shed more light on the extent to which AI can orchestrate attacks on infrastructure or operate without human guidance, but it will be anonymized, so the public will not know which AI models pose those threats.

This article was originally published on CalMatters and is republished under a Creative Commons Attribution-NonCommercial-NoDerivatives license.
