OpenAI CEO encourages AI regulation in Senate hearing

Sam Altman said the U.S. “might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities,” but offered little detail on how this could be achieved.

Yesterday, Sam Altman testified before members of a Senate subcommittee for the first time and appeared largely convinced of the case for regulating the large language models his company and others are developing, so much so that at times he seemed to be spurring lawmakers to act sooner rather than later.

"I think if this technology goes wrong, it can go quite wrong," Altman said." And we want to be vocal about that. We want to work with the government to prevent that from happening."

Generative AI is rarely out of the headlines, as the tech giants find billions of dollars to throw at its development yet seem to struggle far more to fund and retain the accompanying ethics teams. Earlier this month, President Joe Biden met the CEOs of Google, Microsoft and OpenAI to emphasise that the technology had immense potential for both good and bad, and that those developing it had a responsibility to aim for the former. Google's CEO, Sundar Pichai, warned of the need for regulation last month.

Altman was joined at the hearing by Christina Montgomery, IBM's chief privacy and trust officer, and the scientist Gary Marcus, who has warned consistently of the potential for an AI free-for-all to create some very dark outcomes.

Altman acknowledged the changes to the job market that his technology is likely to bring about, making some jobs redundant but creating new ones that we might not even have imagined yet, and recommended that government get involved in that process, although what exactly he thought government could or should do to mitigate these changes was unclear.

He did, however, suggest creating a government-controlled agency that would license large language models against certain safety regulations and standards, with all of this done before the public gets its hands on the technology.

Dr Marcus has expressed concerns about the ability of large language models to disseminate misinformation on an industrial scale, making the damage already done to democracies around the world by the targeting of voters via social media platforms look like a warm-up act for the main event. Altman, however, stated his belief that people will quickly learn to recognise AI-generated untruths.

"When Photoshop came onto the scene a long time ago, for a while people were really quite fooled by photoshopped images and then pretty quickly developed an understanding that images were photoshopped," he said. "This will be like that, but on steroids."

Whilst it was reported that the Senators involved seemed to have a reasonable grasp of the potential of AI, the session should be viewed in light of Congress's poor track record in regulating the tech giants. Initiatives to bolster privacy and safety legislation in the US have typically been killed off by a pincer movement of tech lobbying and congressional partisanship.

Meanwhile, the EU is due to vote on an AI Act later this year, which should become law in 2024. The bill will classify AI systems according to four levels of risk, ranging from minimal to unacceptable. Higher-risk categories will be subject to tough criteria around data quality, accuracy and transparency, while unacceptable risk means an outright ban on use.