Autonomous weapons could 'accidentally' start the next world war, warns ex-Google engineer

Laura Nolan, who resigned from Google last year over military drone project, warns over 'killer robots'

A former Google software engineer who resigned from the company last year has warned that autonomous weapons could 'accidentally' start the next world war.

Laura Nolan resigned from the company last year after being assigned to a US military drone project. Over the weekend, Nolan warned that incorporating increasingly sophisticated artificial intelligence into military technology could have dire consequences.

Nolan subsequently joined the Campaign to Stop Killer Robots and has briefed diplomats on the subject. Her latest concern is that AI could end up starting wars or committing major atrocities.

"I am not saying that missile-guided systems or anti-missile defence systems should be banned. They are after all under full human control and someone is ultimately accountable," Nolan told The Guardian.

She continued: "These autonomous weapons, however, are an ethical as well as a technological step change in warfare. Very few people are talking about this, but if we are not careful one or more of these weapons, these killer robots, could accidentally start a flash war, destroy a nuclear power station and cause mass atrocities."

One of the big problems, Nolan added, is that such systems can only really be tested in combat zones, which raises further issues. "How do you train a system that runs solely on software how to detect subtle human behaviour or discern the difference between hunters and insurgents?" she asked.

She added: "How does the killing machine out there on its own flying about distinguish between the 18-year-old combatant and the 18-year-old who is hunting for rabbits?"

Nolan left Google over her work on Project Maven, a project in which Google worked on improving drone video-recognition technology for the US Department of Defense. Google eventually let the contract lapse in March after more than 3,000 Googlers signed a petition condemning the company's involvement in military projects.

At around the same time, Google set up an ethics panel of eight 'digital ethicists' to advise it on AI issues. A month later, however, a different ethics panel was quietly disbanded.