Nations near agreement on use of military AI

Russia notable by its absence

China, the USA and more than 60 other countries all signed a declaration noting the risks of military AI and recognising the need for regulation

An international group of military, academic, and government officials has agreed to issue a joint call to action on the responsible development and use of AI in military operations.

The first global Summit on Responsible Artificial Intelligence in the Military Domain, REAIM 2023, was held on 15 and 16 February in the Netherlands.

The summit gave stakeholders a platform to discuss the key opportunities, obstacles and risks linked to the implementation of AI in military contexts.

The attendees included representatives from more than 60 nations, among them foreign ministers and government officials, as well as think tanks, industry leaders and civil society organisations.

The signatories concluded that the growing use of AI necessitates immediate action to create worldwide standards for AI's use in military operations.

They also recognised the importance of addressing issues such as AI's unreliability; the appropriate level of human responsibility in AI decision-making; unexpected consequences of AI use; and the possibility of risk escalation.

The advancement of digital technology has revolutionised how people go about their daily lives, and it's no surprise that these same tools could enhance military capabilities.

The integration of AI technology into weaponry could potentially enable faster and more efficient decision-making in a combat zone. However, these systems carry significant risks for civilians, since the same technology that protects one group could also be used to target it. There is also the concern that these "intelligent" systems may exhibit biases.

The USA used the conference to submit a declaration proposing the inclusion of "human accountability" in the implementation of military AI technology.

Bonnie Jenkins, US Under Secretary of State for Arms Control and International Security, invited all countries to join the US in "implementing international norms, as it pertains to military development and use of AI" and autonomous weapons.

The United States and China both signed the declaration, as did more than 60 other countries.

Israel attended the conference but did not sign, while Ukraine was unable to participate, and Russia was not extended an invitation.

During the summit, Jian Tan, China's representative, said nations should reject the pursuit of absolute military dominance and power through AI and instead collaborate through the United Nations.

The summit also aimed to establish a Global Commission on AI, which would promote awareness of how AI should be applied within the military sector and how the technology can be developed and deployed responsibly.

The Stop Killer Robots campaign, a prominent coalition advocating against autonomous weapons, described the call to action as presenting a "vague and flawed" vision of AI's role in the military.

Similarly, the Australian human rights group Safe Ground criticised the entire summit as a "missed opportunity."

In October last year, the White House introduced a set of guidelines designed to promote the responsible deployment of AI technology by companies and safeguard the public from its most significant hazards.

The European Commission has also proposed an AI Liability Directive, in an effort to help individuals harmed by AI and digital devices such as robots, drones, and smart-home systems.