Big tech group to oversee advanced AI models

The Frontier Model Forum coalition aims to develop 'safe and responsible' models by drawing on the collective expertise of its member companies

Microsoft, Google, OpenAI and Anthropic, four of the most influential players in AI technology, on Wednesday announced the formation of an industry body dedicated to setting best practices for managing the most advanced AI models.

Called the Frontier Model Forum, the initiative aims to ensure the safe and responsible development of frontier AI models.

"Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control," said Brad Smith, the president of Microsoft.

"This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity."

Over the past few months, many companies have introduced powerful AI tools capable of generating original content in various forms, such as images, text and video, by drawing on repositories of existing material.

These advancements have sparked worries regarding potential privacy breaches, copyright violations, and the possibility of AI eventually replacing humans in various job roles.

In a joint statement, the four companies said the Frontier Model Forum will leverage the technical and operational expertise of its member companies for the betterment of the entire AI ecosystem.

They say this will be achieved through means including advancing technical evaluations and benchmarks, as well as creating a public library of solutions to promote industry best practices and standards.

The Forum has set out the following objectives:

- Advancing AI safety research to promote the responsible development of frontier models and minimise potential risks
- Identifying best practices for the responsible development and deployment of frontier models
- Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks
- Supporting efforts to develop applications that help meet society's greatest challenges

Membership in the group is open to organisations involved in the development of frontier models, which are defined as "large-scale machine-learning models" that surpass the capabilities of the most advanced existing models and can perform a diverse range of tasks.

The forum will create an advisory board to provide guidance and direction in shaping its strategy and priorities.

The announcement comes as critics demand measures to regulate AI technology.

The US Federal Trade Commission (FTC) has initiated an investigation into OpenAI, examining whether the company has been involved in any "unfair or deceptive" privacy and data security practices or if it has caused harm to individuals by generating false information about them.

Last week, the White House introduced an interim measure that involves tech companies making voluntary commitments to ensure the "safe, secure, and transparent development and use of AI technology."

Following a meeting with President Joe Biden, Amazon, Anthropic, Google, Meta, Microsoft, Inflection and OpenAI reached an agreement to "prioritise research on societal risks posed by AI systems" and encourage third-party discovery and reporting of issues and vulnerabilities in AI systems.

The announcement faced scepticism from some campaigners who pointed out that the tech industry has a track record of not living up to its promises on self-regulation.

"The elephant in the room here is that the United States continues to push forward with voluntary measures, whereas the European Union will pass the most comprehensive piece of AI legislation that we've seen to date," Brandie Nonnecke, founding director of UC Berkeley's Citris Policy Lab, told Tech Brew.

"[These companies] want to be there in helping to essentially develop the test by which they will be graded," Nonnecke said.

Emily Bender, a computational linguist at the University of Washington, criticised the companies' reassurances, saying their statements appeared to be an effort to evade regulation and "to assert the ability to self-regulate, which I'm very sceptical of."

"We really shouldn't have the government compromising with companies," she said.

"The government should act in the public interest and regulate."