Government trying to formalise AI risks before November summit

Downing Street officials are on a world tour seeking formal agreement on an AI statement

Government officials are in discussions with world leaders as they try to agree a formal statement on the risks of artificial intelligence ahead of next month's AI Safety Summit.

Rishi Sunak's advisers have travelled to China, the EU and the US as they try to draw up a final version of a communique to be shared at the summit on 1st and 2nd November.

They are also promoting the idea of the UK as a leader on AI safety, according to The Guardian, positioning its domestic AI taskforce for a global role.

A draft agenda seen by The Guardian refers to establishing an "AI Safety Institute" to ensure national security agencies can properly evaluate frontier AI models - although the government played down that idea last week.

Matt Clifford, the PM's representative for the AI Safety Summit, tweeted on 3rd October that although the event is focused on international collaboration, it is not about setting up a new international institution dedicated to the risks of cutting-edge AI.

The draft agenda also says the Summit will provide an update on safety guidelines originally published by the White House; examine international cooperation on AI that could threaten human life; and end with "like-minded" countries debating how national security agencies can scrutinise AI.

Clifford said the conference will be "very small [and] very focused," limited to only about 100 attendees.

They will include "cabinet ministers from around the world, CEOs of companies building AI at the frontier, academics, and representatives of international civil society."

Companies taking part are expected to include OpenAI, Google and Microsoft. After the Summit, those companies will publish details of how they are adhering to the AI safety commitments agreed with the Biden administration in July.

Those commitments include testing AI models before releasing them to the public, and ongoing scrutiny of systems once they are in operation.

Oseloka Obiora, CTO at RiverSafe, said:

"Establishing an AI Safety Institute will play a key role in tackling the risk posed by AI regarding the cyber threat, allowing frontier AI models to be scrutinised. This will support businesses as they consider the implementation of AI and help them to ensure robust cybersecurity measures are in place to protect themselves from risk."