
The author of California's SB 1047, the most contentious AI safety bill in the country in 2024, is back with a new AI bill that could shake up Silicon Valley.
California State Senator Scott Wiener introduced a new bill on Friday that would protect employees at leading AI labs, allowing them to speak out if they believe their company's AI systems could pose a "critical risk" to society. The new bill, SB 53, would also create a public cloud computing cluster, called CalCompute, to give researchers and startups the computing resources needed to develop AI that benefits the public.
Wiener's last AI bill, California's SB 1047, sparked a lively debate across the country over how to handle massive AI systems that could cause disasters. SB 1047 aimed to prevent the possibility of very large AI models creating catastrophic events, such as causing loss of life or cyberattacks costing more than $500 million in damages. However, Governor Gavin Newsom ultimately vetoed the bill in September, saying SB 1047 was not the best approach.
But the debate over SB 1047 quickly turned ugly. Some Silicon Valley leaders said SB 1047 would harm America's competitive edge in the global AI race, claiming the bill was inspired by unrealistic fears that AI systems could bring about doomsday scenarios. Meanwhile, Senator Wiener alleged that some venture capitalists engaged in a "propaganda campaign" against his bill, pointing in part to Y Combinator's claim that SB 1047 would send startup founders to jail, a claim that experts argued was misleading.
SB 53 essentially takes the less controversial parts of SB 1047, namely the whistleblower protections and the creation of a computing cluster, and repackages them in a new AI bill.
Notably, Wiener is not backing away from existential AI risk in SB 53. The new bill specifically protects whistleblowers who believe their employers are creating AI systems that pose a "critical risk." The bill defines critical risk as a "foreseeable or material risk that a developer's development, storage, or deployment of a foundation model, as defined, will result in the death of, or serious injury to, more than 100 people, or more than $1 billion in damage to rights in money or property."
SB 53 would prevent developers of frontier AI models, a group likely including OpenAI, Anthropic, and xAI, among others, from retaliating against employees who disclose information to California's Attorney General, federal authorities, or other employees. Under the bill, these developers would be required to report back to whistleblowers on certain internal processes that the whistleblowers find concerning.
As for CalCompute, SB 53 would establish a group to plan the construction of a public cloud computing cluster. The group would consist of University of California representatives, as well as other public- and private-sector researchers. It would make recommendations on how to build CalCompute, how large the cluster should be, and which users and organizations should have access to it.
Of course, it's very early in the legislative process for SB 53. The bill needs to be reviewed and passed by California's legislative bodies before it can reach Governor Newsom's desk. State lawmakers will surely be waiting on Silicon Valley's reaction to SB 53.
However, 2025 may be a tougher year for passing AI safety bills compared to 2024. California passed 18 AI-related bills in 2024, but now it seems the AI doom movement has lost ground.
Vice President J.D. Vance signaled at the AI Action Summit in Paris that America is not interested in AI safety, but rather prioritizes AI innovation. While the CalCompute cluster created by SB 53 could certainly be seen as advancing AI progress, it's unclear how legislative efforts around existential AI risk will fare in 2025.