Effective regulation of AI is crucial to prevent harmful outcomes: Sundar Pichai

Google CEO says the downsides of AI give him restless nights

In an interview with CBS, Pichai acknowledged that Google does not fully understand its own generative AI, and called for greater regulation.

Google CEO Sundar Pichai has expressed concern about the adverse implications of generative AI technology and is advocating for fresh regulations to govern its use.

In an interview with CBS's "60 Minutes", Pichai admitted that the downsides of AI give him restless nights. He emphasised the importance of avoiding what he called "race conditions", in which AI developers at rival companies rush to launch their products first.

"It can be very harmful if deployed wrongly and we don't have all the answers there yet - and the technology is moving fast. So does that keep me up at night? Absolutely," he said.

According to Pichai, as AI continues to evolve, governments will need to establish global frameworks to regulate it.

Last month, a letter signed by numerous AI experts, researchers, and supporters, including Elon Musk, called for a halt to the development of "giant" AIs for at least six months. The letter came amid concerns that the technology could quickly advance beyond its creators' ability to understand or control the outcomes.

Google is currently lagging behind its competitors in integrating generative AI into its products. This type of software can produce text, images, music, and even video based on user inputs.

Demonstrations of the technology in ChatGPT and DALL-E, another OpenAI product, have highlighted its potential, prompting businesses from Silicon Valley to China to explore and develop their own generative AI offerings.

Pichai pointed out the risks associated with generative AI, including the creation of deepfake videos, which depict individuals saying - or doing - things they did not say or do.

Such risks demonstrate the importance of implementing regulations, Pichai said.

"It will be possible with AI to create, you know, a video easily. Where it could be Scott [Pelley, the CBS interviewer] saying something, or me saying something, and we never said that. And it could look accurate. But you know, on a societal scale it can cause a lot of harm."

When asked if frameworks similar to those used for regulating nuclear arms could be necessary for AI, Pichai said, "We would need that."

Last month, Europol, the EU's law enforcement agency, warned that rapid advances in generative AI could assist online fraudsters and become a "crucial criminal business model" in the future.

Examples cited by Europol included the use of AI to create convincing deepfakes as well as chatbots designed to trick people into revealing their personal information through phishing scams.

During the interview, the Google CEO also said that the current version of the company's AI technology, accessible to the public via the Bard chatbot, is safe. He said Google is acting responsibly by withholding more advanced versions of Bard until they have undergone sufficient testing.

Pichai also acknowledged that Google did not have a full understanding of how its AI technology generates certain responses.

"There is an aspect of this which we call, all of us in the field call it as a 'black box'. You know, you don't fully understand. And you can't quite tell why it said this, or why it got it wrong."

When CBS journalist Scott Pelley asked why the company had made Bard available to the public despite not fully understanding how it works, Pichai replied: "Let me put it this way. I don't think we fully understand how a human mind works either."

Pichai also said the economic impact of AI would be substantial, because the technology has the potential to affect everything.

He highlighted "knowledge workers" such as writers, accountants, architects, and software engineers as professionals whose jobs could be disrupted by AI.