Experts Criticize Google for Sparse AI Safety Report

Google’s recent technical report on its Gemini 2.5 Pro AI model has drawn criticism from experts for omitting key safety details. Released weeks after the model’s launch, the report makes no mention of Google’s Frontier Safety Framework (FSF), which the company introduced to identify AI capabilities that could cause severe harm. Experts argue that such sparse information makes it difficult to assess the model’s safety and security. Google has also yet to publish a report for its newer Gemini 2.5 Flash model, raising further concerns about transparency and the company’s commitment to safety evaluations. The criticism comes amid a broader industry trend in which companies such as Meta and OpenAI have likewise been faulted for insufficient safety documentation. (via TechCrunch)
