Group led by Fei-Fei Li suggests AI safety laws should anticipate future risks



In a new report, a California-based policy group co-led by AI pioneer Fei-Fei Li suggests that lawmakers should consider AI risks that “have not yet been observed in the world” when crafting AI regulatory policies.

The 41-page interim report, released Tuesday, comes from the Joint California Policy Working Group on AI Frontier Models, an effort organized by Governor Gavin Newsom after his veto of California’s controversial AI safety bill, SB 1047. While Newsom found that SB 1047 missed the mark, he acknowledged last year the need for a more thorough assessment of AI risks to inform legislators.

In the report, Li, along with co-authors Jennifer Chayes, dean of the UC Berkeley College of Computing, and Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace, argues in favor of laws that would increase transparency into what frontier AI labs are building. Industry stakeholders from across the ideological spectrum reviewed the report before its publication, including staunch AI safety advocates such as Turing Award winner Yoshua Bengio as well as critics of SB 1047 such as Databricks co-founder Ion Stoica.

According to the report, the novel risks posed by AI systems may call for laws that would require AI developers to publicly report their safety tests, data acquisition practices, and security measures. The report also advocates for increased standards around third-party evaluations of these metrics and corporate policies, in addition to expanded whistleblower protections for AI company employees and contractors.

Li et al. write that there is an “inconclusive level of evidence” for AI’s potential to help carry out cyberattacks, create biological weapons, or pose other “extreme” threats. However, they also argue that AI policy should not only address current risks but anticipate future consequences that could occur without sufficient safeguards.

“For example, we do not need to observe a nuclear weapon [exploding] to reliably predict that it could cause extensive harm,” the report states. “If those who speculate about the most extreme risks are right – and we are uncertain if they will be – then the stakes and costs for inaction on frontier AI at this current moment are extremely high.”

The report recommends a two-pronged strategy to increase transparency into AI model development: trust but verify. AI model developers and their employees should be given avenues to report on areas of public concern, the report says, such as internal safety testing, while also being required to submit testing claims for third-party verification.

While the report, whose final version is due in June 2025, endorses no specific legislation, it has been well received by experts on both sides of the AI policymaking debate.

Dean Ball, an AI-focused research fellow at George Mason University who was critical of SB 1047, said in a post on X that the report was a promising step for California’s AI safety regulation. It is also a win for AI safety advocates, according to California State Senator Scott Wiener, who introduced SB 1047 last year. Wiener said in a press release that the report builds on “urgent conversations around AI governance we began in the legislature [in 2024].”

The report appears to align with several components of SB 1047 and Wiener’s follow-up bill, SB 53, such as requiring AI model developers to report the results of safety tests. Taking a broader view, it appears to be a win for AI safety advocates, whose agenda has lost ground over the past year.
