UK and USA sign joint agreement on AI safety

But regulatory approaches still differ

The UK and USA have signed a Memorandum of Understanding to co-develop tests for AI models - the first countries in the world to formalise a co-operative agreement on how to assess the risks of artificial intelligence.

The agreement, signed on Easter Monday by technology secretary Michelle Donelan and US commerce secretary Gina Raimondo, sets out how the countries will work together by pooling technical talent, information and knowledge.

The partners plan to build a "common approach" to AI safety testing, performing "at least" one joint testing exercise on a publicly accessible model.

In the UK the work falls under the newly established AI Safety Institute, while a similar body - albeit not yet operational - will handle it in the USA. The bodies will be able to arrange researcher secondments and work on methods to independently evaluate private AI systems built by tech firms like Google, Oracle and OpenAI.

The partnership between the institutes is modelled on the one between GCHQ and the National Security Agency, which dates back to World War II.

"We have always been clear that ensuring the safe development of AI is a shared global issue," said Donelan. "Only by working together can we address the technology's risks head on and harness its enormous potential to help us all live easier and healthier lives.

"The work of our two nations in driving forward AI safety will strengthen the foundations we laid at Bletchley Park in November, and I have no doubt that our shared expertise will continue to pave the way for countries tapping into AI's enormous benefits safely and responsibly."

However, Donelan also reiterated the UK's decision not to regulate AI for now, saying that the technology is developing too quickly for legislation to keep pace.

This stands in stark contrast to other states and blocs, including the USA itself: President Biden issued an executive order last year specifically targeting AI systems, while the EU's AI Act is widely considered the world's strictest legislation governing the technology.

Although this agreement is the first of its kind, both countries have also "committed to develop similar partnerships with other countries to promote AI safety across the globe."

What does the industry think?

Tech vendors and other industry stakeholders welcomed the announcement, particularly what it signals for future co-operation.

Ekaterina Almasque, general partner at start-up-focused VC firm OpenOcean, called the move "a significant stride forward," noting that AI start-ups face a complex landscape impeding their ability to innovate. "This collaboration provides a framework for addressing these challenges, offering guidance and support to help start-ups develop AI technologies responsibly and securely."

Anita Schjøll Abildgaard, CEO and co-founder of Iris.ai, said the UK and USA are "rising to meet one of the defining technological challenges of our era," but noted that they should also be looking to other stakeholders - "particularly those in Europe." She said failing to integrate with other parties risks a fragmented approach to AI safety.

Many commentators welcomed the collaborative and open approach to handling the risks associated with AI, which is set to be a defining topic of the coming years.