The author of SB 1047 presents a new AI bill in California
The author of SB 1047, the nation's most contentious AI safety bill of 2024, is back with a new AI bill that could shake up Silicon Valley.
California State Senator Scott Wiener introduced a new bill on Friday that would protect employees at leading AI labs, allowing them to speak out if they believe their company's AI systems could pose a "critical risk" to society. The new bill, SB 53, would also create a public cloud computing cluster, called CalCompute, to give researchers and startups the computing resources needed to develop AI that benefits the public.
Wiener's previous bill, California's SB 1047, sparked a lively debate across the country over how to handle massive AI systems that could cause disasters. SB 1047 aimed to prevent the possibility of very large AI models creating catastrophic events, such as causing loss of life or cyberattacks costing more than $500 million in damages. Governor Gavin Newsom, however, vetoed the bill in September, saying SB 1047 was not the best approach.
But the debate over SB 1047 quickly turned ugly. Some Silicon Valley leaders said SB 1047 would harm America's competitive edge in the global AI race, claiming the bill was inspired by unrealistic fears that AI systems could bring about science-fiction-like doomsday scenarios. Senator Wiener, meanwhile, alleged that some venture capitalists engaged in a "propaganda campaign" against his bill, pointing in part to Y Combinator's claim that SB 1047 would send startup founders to jail, a claim experts said was misleading.
SB 53 essentially takes the least controversial parts of SB 1047, the whistleblower protections and the establishment of a CalCompute cluster, and repackages them into a new AI bill.
Notably, Wiener does not shy away from existential AI risk in SB 53. The new bill specifically protects whistleblowers who believe their employers are creating AI systems that pose a "critical risk." The bill defines critical risk as a "foreseeable or material risk that a developer's development, storage, or deployment of a foundation model will result in the death of, or serious injury to, more than 100 people, or more than $1 billion in damage to rights in money or property."
SB 53 bars developers of frontier AI models (likely including OpenAI, Anthropic, and xAI, among others) from retaliating against employees who disclose concerning information to California's Attorney General, federal authorities, or other employees. Under the bill, these developers would be required to report back to whistleblowers on certain internal processes the whistleblowers find concerning.
As for CalCompute, SB 53 would establish a group to oversee the construction of a public cloud computing cluster. The group would consist of representatives from the University of California, as well as other public and private researchers. It would make recommendations on how to build CalCompute, how large the cluster should be, and which users and organizations should have access to it.
Of course, it's very early in SB 53's legislative process. The bill needs to be reviewed and passed by California's legislative bodies before it reaches Governor Newsom's desk. State lawmakers will surely be watching for Silicon Valley's reaction to SB 53.
However, 2025 may be a harder year for passing AI safety bills than 2024. California passed 18 AI-related bills in 2024, but now it seems the AI doom movement has lost ground.
Vice President J.D. Vance signaled at the Paris AI Summit that America is not interested in AI safety, but rather prioritizes AI innovation. While the CalCompute cluster established by SB 53 could certainly be seen as advancing AI progress, it's unclear how legislative efforts around existential AI risk will fare in 2025.