Google’s latest AI report missing key safety details, experts say

On Thursday, weeks after launching its most powerful AI model yet, Gemini 2.5 Pro, Google published a technical report showing the results of its internal safety evaluations. However, the report is light on details, experts say, making it difficult to determine which risks the model might pose.

Technical reports provide useful, and at times unflattering, information that companies do not always widely advertise about their AI. By and large, the AI community sees these reports as good-faith efforts to support independent research and safety evaluations.

Google takes a different approach to safety reporting than some of its AI rivals, publishing technical reports only once it considers a model to have graduated from the "experimental" stage. The company also does not include findings from all of its "dangerous capability" evaluations in these write-ups; it reserves those for a separate audit.

Still, several experts TechCrunch spoke with were disappointed by the sparseness of the Gemini 2.5 Pro report, which they noted does not mention Google's Frontier Safety Framework (FSF). Google introduced the FSF last year in what it described as an effort to identify future AI capabilities that could cause "severe harm."

"This [report] is very sparse, contains minimal information, and came out weeks after the model was already made available to the public," Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, told TechCrunch. "It's impossible to verify whether Google is living up to its public commitments, and thus impossible to assess the safety and security of their models."

Thomas Woodside, co-founder of the Secure AI Project, said that while he was glad Google released a report for Gemini 2.5 Pro, he was not convinced of the company's commitment to delivering timely supplemental safety evaluations. Woodside noted that the last time Google published the results of dangerous capability tests was in June 2024, for a model announced in February of the same year.

Not inspiring much confidence, Google has not yet made available a report for Gemini 2.5 Flash, a smaller, more efficient model the company announced last week. A spokesperson told TechCrunch that a report for Flash is "coming soon."

"I hope this is a promise from Google to start publishing more frequent updates," Woodside told TechCrunch. "Those updates should include the results of evaluations for models that have not yet been publicly deployed, since those models could also pose serious risks."

Google may have been one of the first AI labs to propose standardized reports for its models, but it is not the only one that has been accused of falling short on transparency lately. Meta released a similarly sparse safety evaluation of its new Llama 4 open models, and OpenAI opted not to publish any report for its GPT-4.1 series.

Hanging over Google's head are assurances the tech giant made to regulators that it would maintain a high standard of AI safety testing and reporting. Two years ago, Google told the U.S. government it would publish safety reports for all "significant" public AI models "within scope." The company followed that promise with similar commitments to other countries, pledging to "provide public transparency" around AI products.

Kevin Bankston, a senior adviser on AI governance at the Center for Democracy and Technology, called the trend of sporadic and vague reports a "race to the bottom" on AI safety.

"Combined with reports that competing labs like OpenAI have shaved their safety testing time before release from months to days, this meager documentation for Google's top AI model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market," he said.

Google has said in statements that, although not detailed in its technical reports, it conducts safety testing and "adversarial red teaming" for models ahead of release.
