Sam Altman: OpenAI could quit EU over AI regulations


'The current draft of the EU AI Act would be over-regulating,' Altman said, having previously argued for more regulation before the US Congress

Sam Altman, co-founder and CEO of OpenAI, the company behind ChatGPT, has warned that his company may leave the EU if it becomes impracticable to meet the bloc's regulatory requirements on AI.

Altman provided detailed insights into the company's stance during a Wednesday visit to London as part of a global tour centred on AI regulation.

According to reports, Altman plans to visit a total of 17 cities on the trip.

"The current draft of the EU AI Act would be over-regulating, but we have heard it's going to get pulled back," he told Reuters.

"They are still talking about it."

The EU is currently drafting what could become the world's first comprehensive set of rules governing AI. As part of this rule-making process, it is pursuing transparency measures specifically for general-purpose AI.

Earlier this month, EU officials introduced an updated version of the draft legislation, incorporating additional regulatory requirements.

These requirements extend to companies like OpenAI, which are at the forefront of developing foundational AI models. By including such provisions, the EU aims to ensure responsible and accountable AI practices within the industry.

The EU's AI Act, as currently proposed, includes provisions that require developers of foundational AI models to identify and address potential risks associated with their products. Additionally, advanced models would need to comply with specific requirements pertaining to their design and information disclosure.

Companies falling under the purview of these regulations would be obligated to register their AI work in a dedicated EU database. Furthermore, companies would be required to disclose whether their AI models have been trained on copyrighted data.

The legislation also includes rules that would require companies to label AI-generated content as such, preventing any misleading presentation. Measures would also be put in place to prevent AI systems from generating illegal content, fostering responsible and compliant practices within the industry.

"Generative foundation models, like GPT, would have to comply with additional transparency requirements, like disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training," the European Parliament noted on May 11.

The draft bill will undergo a thorough deliberation process involving representatives from the European Parliament, Council and the Commission.

It is anticipated that the law will come into force in 2025.

According to Altman, OpenAI will try to comply with the regulations once they are finalised.

"There's so much they could do, like changing the definition of general purpose AI systems," Altman said. "There's a lot of things that could be done."

Altman is said to be unhappy with how the EU has defined "high-risk" systems in the proposed AI legislation.

According to him, both ChatGPT and the company's large language model GPT-4 could be designated high-risk under the current draft. As a result, OpenAI would be obligated to meet specific requirements applicable to such systems.

In recent engagements with US lawmakers, Altman expressed support for regulation that could include new safety requirements and a governing agency responsible for testing products and ensuring compliance.

The European Union's efforts to regulate foundation models have not only caught the attention of OpenAI but also Google.

In a meeting with EU officials on Wednesday, Google's CEO, Sundar Pichai, engaged in discussions surrounding AI policy.

During the meeting, Pichai is said to have highlighted the importance of implementing regulations that strike a balance between ensuring accountability and not stifling innovation.