
California Expert Group issues recommendations to regulate AI


Summary

A group of academic luminaries formed by Governor Gavin Newsom last year won praise for its recommendations on AI policy.

A group of artificial intelligence luminaries, convened by Governor Gavin Newsom, issued what is expected to be an influential set of recommendations on Tuesday, pushing state lawmakers to bring more transparency to the way AI models are made and how they work.

The proposed steps would advance both innovation and public trust, the academic experts write in their report, balancing the state's support for experimentation with safeguards protecting against AI harms.

Newsom formed the group last fall as he vetoed a prominent AI regulation bill, arguing the measure would limit innovation.

Formally known as the Joint California Policy Working Group on AI Frontier Models, the group suggested that legislators:

  • Encourage companies that make advanced AI models to disclose risks and vulnerabilities to developers who build their own versions of those models.
  • Evaluate advanced AI models using independent third parties.
  • Consider introducing whistleblower protections.
  • Assess the possible need for a system that would notify the government when private companies develop AI with dangerous capabilities.

Scott Wiener, the Democratic senator whose bill Newsom vetoed, praised the report and said it could influence a scaled-back version of his measure, known as Senate Bill 53.

“The recommendations in this report strike a careful balance between the need for safeguards and the need to support innovation,” he wrote in a statement shared with CalMatters. “My office is considering which recommendations can be included in SB 53, and I invite all relevant stakeholders to engage with us in this process.”

The draft report does not argue for or against any specific legislation currently under consideration, but it could strongly influence the roughly 30 bills to regulate artificial intelligence now before the Legislature. These measures include about half a dozen bills addressing how AI raises the cost of goods, and others seeking to soften how the technology affects the environment, public health, and energy use. Another bill would require businesses to disclose when AI is used to make important decisions about people’s lives. Business groups lobbied heavily against such provisions in the last session.

The draft report pointed to AI rules already on the books in places such as Brazil, China, and the European Union. It states that California’s rules could play a uniquely powerful role because of the state’s position as home to many large AI companies and research institutions.

“Without proper safeguards … powerful AI can cause serious and, in some cases, potentially irreversible damage,” the draft report says. “Just as California’s technology leads innovation, its governance can also set an example with worldwide impact.”

Members of the public have until April 8 to comment and share feedback before the recommendations are expected to be finalized this summer.

State Senator Scott Wiener speaks at a press conference in San Francisco on January 21, 2022. Photo by Karl Mondon, Bay Area News Group

The authors of the report include Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace; Jennifer Tour Chayes, dean of the UC Berkeley College of Computing, Data Science, and Society; and Fei-Fei Li, a former AI scientist at Google Cloud and creator of the pioneering AI project known as ImageNet. Li is often called the “Godmother of AI,” and her perspective has been sought by members of Congress and the Biden administration.

The group focuses on “frontier models,” the most cutting-edge artificial intelligence, such as OpenAI’s ChatGPT, which debuted at the end of 2022, and R1, a newer model from the Chinese company DeepSeek. California-based companies, including Anthropic, Google, and xAI, also develop advanced general-purpose AI systems.

Frontier models promise improved efficiency, such as helping teachers grade writing assignments, but they also carry risks. They can be used by scammers, enable the spread of misinformation, and perpetuate bias. Hype and fear around frontier models have led members of the public to consider whether AI could play a role in human extinction.

The draft report is one of a number of AI documents produced by California in recent years, including one on the benefits and risks of generative AI at the end of 2023 and another on the impact of generative AI on vulnerable communities at the end of 2024; neither report is mentioned by the working group.

A representative of technology interests praised the report. Megan Stokes, state policy director for the Computer & Communications Industry Association, said the working group took great care in studying existing laws that protect Californians from potential AI harms and in reviewing existing regulatory authorities, helping to ensure that new provisions are not duplicative. Stokes’ group opposes a bill that would require developers to disclose the use of creators’ copyrighted works before training an AI model. Copyright infringement is a current risk recognized by the working group.

Jonathan Mehta Stein, chair of the advocacy group California Initiative for Technology and Democracy, said that while the draft report contains policy recommendations, it mainly calls on California to wait and see, leaving lawmakers with little guidance on the best policies. That conclusion risks reinforcing legislative inertia on addressing known, documented harms, he added. His organization, which sponsored three bills last year to protect voters from AI, wants the working group to add more legislative recommendations to its final report.

“If California wants to lead on AI governance and build a digital democracy that works for everyone, it must act, and act now,” Stein said in a written statement. “If California sits on its hands because the industry is uncomfortable with regulation, it just means the industry will go unregulated.”

“There’s something for people on both sides.”

Koji Flynn-Do, co-founder, Secure AI Project

Given the speed at which the technology is changing, the draft report is right to point out that the window for AI regulation may close soon, said Koji Flynn-Do, co-founder of the Secure AI Project, a group created in December 2024 that previously supported the Wiener AI bill Newsom vetoed. He said it was heartening to see the report focus on safety and security protocols and on testing to mitigate risks, along with a letter from employees of frontier AI companies calling for whistleblower protections.

“Some people will say it goes too far, some people will say it doesn’t go far enough, and I think there’s something for people on both sides,” he said.

The draft report “seems like progress to me,” said Daniel Kokotajlo, who also endorsed the AI safety bill Wiener proposed last year. He signed a letter written by current and former employees of companies building frontier models calling for whistleblower protections and an anonymous reporting process. The righttowarn.ai letter is cited by the working group in the draft report.

“I want to see more specific proposals for what these companies have to do and for what provisions should be adopted, but it is still progress to be talking about these things.”
