
Google’s latest AI model is missing a key safety report in apparent violation of promises made to the U.S. government and at international summits


Google's latest AI model, Gemini 2.5 Pro, has been released without a major safety report, a step that critics say violates commitments the company made to the U.S. government and at international summits.

Launched in late March, the latest Gemini model was described by the company as its most capable yet, with Google claiming it "leads common benchmarks by meaningful margins."

But Google's failure to publish an accompanying safety research report, also known as a "system card" or "model card," runs counter to prior commitments the company has made.

At a July 2023 meeting convened by then-President Joe Biden at the White House, Google was among a number of leading AI companies that signed a series of commitments, including a pledge to publish reports for all major public model releases more powerful than the then-current state of the art. Gemini 2.5 Pro would almost certainly be considered "in scope" of those White House commitments.

At the time, Google agreed that the reports "should include the safety evaluations conducted (including in areas such as dangerous capabilities, to the extent that these are responsible to publicly disclose), significant limitations in performance that have implications for the domains of appropriate use, discussion of the model's effects on societal risks such as fairness and bias, and the results of adversarial testing conducted to evaluate the model's fitness for deployment."

Following the Group of Seven meeting in Hiroshima, Japan, in October 2023, Google was among the companies that committed to complying with a voluntary code of conduct on the development of advanced AI. The G7 code includes a commitment to "publicly report advanced AI systems' capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increase accountability."

Then, at an international summit on AI safety held in Seoul, South Korea, in May 2024, the company made similar promises, committing to publicly disclose model capabilities, limitations, and appropriate and inappropriate use cases, and to provide transparency around its risk assessments and outcomes.

In an emailed statement, a spokesperson for Google DeepMind, the Google division responsible for developing the Gemini models, told Fortune that the latest Gemini had undergone pre-release testing, including internal development evaluations and assurance evaluations conducted before the model was issued. The model was released on April 2; no model card has been published since.

Tech companies backtrack on promises, and experts worry

Google is not the only company facing scrutiny over its commitment to AI safety. Earlier this year, OpenAI also failed to release a timely model card for its Deep Research model, instead publishing a system card weeks after the project was initially released. Meta's recent safety report for Llama 4 was criticized for being short on length and detail.

"These failures by some of the major labs to keep up with their safety reporting alongside their actual model releases are especially disappointing considering that the same companies voluntarily committed to the U.S. and the world to produce these reports, first in the commitments made to the Biden administration in 2023, and then through similar pledges in 2024 to comply with the AI code of conduct adopted by the G7," an advisor on AI governance at the Center for Democracy and Technology told Fortune.

Information on government safety assessments is also absent

Google's spokesperson also declined to address questions about whether Gemini 2.5 Pro has been submitted for external evaluation by the U.K. AI Security Institute or the U.S. AI Safety Institute.

Google has previously provided earlier generations of its Gemini models to the U.K. AI Safety Institute for evaluation.

At the Seoul Safety Summit, Google signed on to the "Frontier AI Safety Commitments," which included a pledge to publicly report on the implementation of safety evaluations, except insofar as doing so "would increase risk or divulge sensitive commercial information to a degree disproportionate to the societal benefit." The commitments also said that more detailed information which could not be shared publicly should still be shared with the governments of the countries in which the companies are based, which in Google's case would be the U.S.

The companies also committed to "explain how, if at all, external actors, such as governments, civil society, academics, and the public are involved in the process of assessing the risks" of their AI models. By failing to answer direct questions about whether Gemini 2.5 Pro has been submitted to either U.S. or U.K. government evaluators, Google may be in breach of this commitment as well.

Prioritizing deployment over transparency

The missing safety report has stoked fears that Google may be prioritizing speedy deployment over transparency.

"If this was a car or a plane, we wouldn't say: let's just bring this to market as quickly as possible and we will look into the safety aspects later," Sandra Wachter, a professor and senior researcher at the Oxford Internet Institute, told Fortune. "Whereas with generative AI, there is an attitude of putting this out there, and worrying, investigating, and fixing the issues with it later."

Recent political shifts, combined with intensifying competition among the major tech companies, may be leading some firms to backslide on earlier safety commitments as they race to deploy AI models.

"The pressure point for these companies of being faster, being quicker, being the first, being the best, being dominant, is more prevalent than it used to be," Wachter said, adding that safety standards were slipping across the industry. Those slipping standards could be driven by a growing anxiety among tech companies and some governments that AI safety procedures stand in the way of innovation.

In the U.S., the Trump administration has signaled that it plans to take a much lighter-touch approach to AI than the Biden administration did. The new government has already rescinded a Biden-era executive order on AI and has been courting tech leaders. At the recent AI summit in Paris, U.S. Vice President JD Vance said that "pro-growth AI policies" should be prioritized over safety, and that AI was "an opportunity that the Trump administration will not squander."

At that summit, both the U.K. and the U.S. declined to sign an international agreement on AI that pledged an "open," "inclusive," and "ethical" approach to the technology's development.

"If we can't count on these companies to fulfill even their most basic safety and transparency commitments when releasing new models, commitments that they themselves voluntarily made, then they are clearly releasing models too quickly in their competitive rush to dominate the field," the Center for Democracy and Technology's AI governance advisor said.

He added: "Especially as AI developers continue to stumble on these commitments, it will be incumbent upon lawmakers to develop and enforce clear transparency requirements that the companies can't shirk."

This story was originally featured on Fortune.com

