OpenAI quietly updates policies, allows military AI applications

Concerns are growing about the misuse of AI systems

OpenAI quietly updates policies, allows military AI applications

OpenAI has covertly removed explicit prohibitions on the use of its ChatGPT technology for "military and warfare" applications, signalling a shift in the company's stance.

The change, made in a recent rewrite of the usage policy, came into effect on 10th January, as reported by The Intercept.

Previously, OpenAI's policies explicitly banned activities posing a high risk of physical harm, specifically mentioning "weapons development" and "military and warfare."

The revised policy retains the prohibition on weapons development, but eliminates the ban on military and warfare applications.

The amendment may open the door for lucrative partnerships between OpenAI and defence departments interested in leveraging generative AI for administrative or intelligence operations.

The move aligns with the US Department of Defense's mission, expressed in November 2023, to promote the responsible military use of AI and autonomous systems, adhering to international best practices.

The Department of Defense mission statement broadens the definition of military AI capabilities beyond weapons to cover decision support systems, financial operations, personnel management and intelligence collection.

AI is already integrated into various military applications, from decision-making processes to autonomous military vehicles, and has been employed in conflicts such as the Russian-Ukrainian war.

OpenAI spokesperson Niko Felix justified the change, explaining that the objective was to create easy-to-remember and applicable universal principles, especially as OpenAI's tools are increasingly used globally by everyday users who can build their own systems based on GPT technology.

Felix clarified that while the explicit mention of "military and warfare" has been removed, the principle of not causing harm remains intact.

"A principle like 'Don't harm others' is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples," Felix said.

Felix did not explicitly confirm whether all military uses were considered harmful, merely reiterating that violent applications, including developing weapons, injuring others, or engaging in illicit activities, are still not permitted.

The changes in OpenAI's policy come at a time when concerns are growing about the potential misuse of AI models. Research [pdf] led by Anthropic indicates that current safety measures may not be effective in reversing unwanted behaviours if AI models are trained to behave maliciously.

The Anthropic study demonstrated that backdoors inserted into large language models could make them insert malware into responses or generate negative messages in response to specific prompts.

Traditional safety measures such as supervised fine-tuning, adversarial training or reinforcement learning fine-tuning did not effectively address these behaviours. The researchers concluded that dealing with models trained to be malicious may require additional techniques from related fields, or entirely new approaches.