G7 leaders advocate for global AI standards

Hiroshima AI Process will facilitate discussions on generative AI


Meeting in Hiroshima this week, the leaders of the Group of Seven (G7) nations emphasised the need to develop and adopt international technical standards to ensure AI trustworthiness.

The leaders of the USA, Japan, Germany, UK, France, Italy and Canada heard varying perspectives on AI regulation, and agreed to establish a ministerial forum known as the Hiroshima AI process.

The forum will facilitate discussions on generative AI and related issues, and is set to commence deliberations by the end of this year.

The G7 leaders stressed that regulations should align with their collective democratic values.

"As the pace of technological evolution accelerates, we affirm the importance to address common governance challenges and to identify potential gaps and fragmentation in global technology governance," the leaders said in a statement [pdf].

"In areas such as AI, immersive technologies such as the metaverses and quantum information science and technology and other emerging technologies, the governance of the digital economy should continue to be updated in line with our shared democratic values.

"These include fairness, accountability, transparency, safety, protection from online harassment, hate and abuse and respect for privacy and human rights, fundamental freedoms and the protection of personal data."

The G7 urged international organisations such as the Organisation for Economic Co-operation and Development (OECD) to conduct thorough analyses of the implications of AI for policy development.

This agreement among G7 countries comes in the wake of an initiative by the EU - also represented at this year's G7 meeting - to introduce legislation aimed at regulating AI.

The legislation could become the world's first comprehensive AI law, and could serve as a blueprint for AI regulation globally.

Recent concerns about the widespread and sometimes unregulated use of AI follow the introduction of OpenAI's ChatGPT last year.

The concerns were further amplified when prominent industry figures and AI experts called for a six-month halt in the development of advanced AI systems, citing potential risks to society.

Last month, Google CEO Sundar Pichai acknowledged that Google does not fully understand its own generative AI, and called for greater regulation.

This week Sam Altman, the CEO of OpenAI, testified before a Senate panel in the USA, where he suggested that the country should contemplate implementing "a combination of licensing and testing requirements" for the development and release of AI models that surpass a certain threshold of capabilities.

"I think if this technology goes wrong, it can go quite wrong," Altman said.

"And we want to be vocal about that. We want to work with the government to prevent that from happening."

Altman proposed the establishment of a government-controlled agency responsible for licensing large language AI models. He also emphasised the importance of setting safety regulations and standards that should be met by these models.

Diverse regulation

At present, different nations have diverse approaches to AI governance.

China, in particular, has implemented a relatively restrictive policy regarding AI. The country seeks to align generative AI-powered services with its core socialist values.

Western nations, including EU member states and the United States, are scheduled to convene at the Trade and Technology Council in Sweden on 30th-31st May.

The goal is to foster collaboration, share insights and collectively address the challenges and opportunities presented by AI advancements.