Google CEO Sundar Pichai lays down company policy on AI collaboration

Google's new policy on collaborating with government on AI riddled with 'ifs' and 'buts'.

Google has sought to clarify its company policy on cooperating in the development of AI-powered weaponry - but has filled it with caveats that will probably end up pleasing no one.

The policy was drawn up following vocal staff complaints about the company's cooperation with the Pentagon on a number of projects.

In a blog post, CEO Sundar Pichai said that Google will still work with governments in areas like training and cybersecurity, but work in other areas might be limited.

The company won't, for example, work on surveillance that falls outside "internationally accepted norms" or anything dangerous unless "the benefits substantially outweigh the risks", he claimed.

Furthermore, the contract under which Google helped the Pentagon analyse drone surveillance footage - the deal that ignited the staff revolt in April - won't be renewed next year.

Google employees said that they were not interested in being part of the "business of war".

But Pichai's promises are open to interpretation. Much as the company never pinned down exactly what its now-ditched "don't be evil" motto meant, the tech world, it seems, is satisfied with principles themselves, not with what they mean in context.

And yet Pichai's document has been described as showing "concrete standards". So concrete, in fact, that, as CNBC pointed out, they still leave plenty of room for Google Cloud, for example, to chase after defence contracts.

Google argues that it needs leeway for the unexpected: TensorFlow, for instance, being open source, could be adapted in ways that fall well outside Google's remit. Except that includes the current Pentagon contract, which uses... TensorFlow.

But Pichai claims that the company will continue to work to "limit potentially harmful or abusive applications" of AI - a pledge that could be interpreted in pretty much any way.

Broadly speaking, the rules state that work on anything that could cause harm will be limited to cases where the benefits outweigh the risks, and even then appropriate safety constraints will be applied.

However, it is clear that the company won't work directly on weapons or other technologies that are designed to cause injury to people.

As mentioned above, Google's policy means it won't work on surveillance that falls outside "internationally accepted norms" - but the definition of those norms depends on who's doing the interpreting.

And it won't work on anything that contravenes "widely accepted principles" of human rights - but that comes with the disclaimer that "we will continue our work with governments and the military in other areas".

In addition, there's little to prevent research in one apparently beneficial area from being weaponised, either behind Google's back or with its tacit approval.