Peter Cochrane: AI and the emergent properties of good, bad and evil

Forget Asimov's 'Three Laws of Robotics': robots will almost certainly go off the rails at some point in the future - but they still won't be as bad as human beings

Almost every day, some kind of Armageddon involving robotics and artificial intelligence (AI) is postulated in the media: malevolent technologies just biding their time before striking to wipe humanity from the face of the Earth. Is such an event possible? Yes. Is it likely? No!

In my view we ought to worry more about the actions of our own species. For the religious, Good, Bad and Evil are easy to define, describe and explain. But if we dig just a little deeper, it is apparent that these properties are largely complex, emergent, and generally applicable to many forms of biological and animal life.

Let me explain by way of two examples. During my career I have been employed by governments, corporations, and large, medium and small companies and organisations. Within this spectrum I have worked with the good, the ambitious, the greedy, the misguided, the difficult, the ignorant, the arrogant, the simple-minded and the innocent. But never anyone I would consider to be evil.

Example 1: A start-up of 10 essentially good, honest, ethical and well-behaved people with no record or prior intent of doing anything bad. Each had invested their own money and time to create a potentially successful business. But at the critical hour, their investors pulled their funding without warning, leaving the 10 exposed at every level: they could end up both unemployed and broke, with a mountain of unserviceable debt.

So, these ‘good individuals' started to work together to save their ship: corners were cut, laws were imaginatively stretched, risks were taken, and questionable behaviours emerged. In short, the 10 individually remained true to their principles, but the group made bad decisions and did bad things.

Example 2: A giant corporation with documented codes of practice through all management layers, overseen by HR, finance, the board, non-executive directors and the chairman. After a five-year run of profitability, with bonuses and rewards, a new technology and market threat appeared without warning, and corporate turnover and profits declined in less than three years.

Staff cuts and product-line pruning were enacted, consultants were employed, and new strategies (followed by inevitable reorganisations) were implemented, along with rebranding and market repositioning. However, the situation progressively worsened, and individuals and groups focused on their own survival rather than corporate survival.

Corporate behaviours soon deteriorated, with laws broken and ‘bully-boy' management tactics becoming the norm.

These simple examples can easily be applied to politics, war and many other scenarios involving stress and/or conflict, and they are not unique to humans. Animal societies exhibit similar behaviours when their environments are stressed: parents will eat their offspring and siblings will kill the weakest in order to survive when food is short, and the nests and eggs of other species will be plundered.

"We should note that morals and ethics are human constructs that do not apply in the animal kingdom"

"Based on many similar examples; we might surmise that Good, Bad and Evil are emergent properties of intelligent societies"

Axiom: The ‘sum of good' can see an outcome that spans from good to evil

So, as we build societies of networked AI and robotic entities, with some programmed by humans while others are self-programming/learning, might we expect these ‘good things' to exhibit good behaviours? I think so, but AI-driven robots will undoubtedly go off the rails and do bad stuff from time to time!

When human- and/or machine-induced errors occur; when humans program machines as weapons of war; when AI and robotic societies become largely autonomous; and when their environments are unnaturally stressed, then bad behaviours will almost certainly emerge.

The good news is that we are smart enough to recognise that all of this might happen, and we have the ability to mitigate the risks with everything from built-in ‘moral regulators' to the ultimate ‘kill switch'. Sadly, we enjoy no such luxury when it comes to our fellow human beings, who remain largely uncontrollable!
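To make the idea concrete, here is a minimal, hypothetical sketch of how a ‘moral regulator' and ‘kill switch' might wrap an autonomous agent. The class and method names (MoralRegulator, Agent, vet, and so on) are illustrative assumptions, not any real framework or the author's design:

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    SHUTDOWN = "shutdown"

class MoralRegulator:
    """Hypothetical policy layer that vets every proposed action."""
    def __init__(self, banned_actions, max_violations=3):
        self.banned_actions = set(banned_actions)
        self.violations = 0
        self.max_violations = max_violations

    def vet(self, action: str) -> Verdict:
        if action in self.banned_actions:
            self.violations += 1
            # Repeated bad behaviour trips the ultimate 'kill switch'
            if self.violations >= self.max_violations:
                return Verdict.SHUTDOWN
            return Verdict.BLOCK
        return Verdict.ALLOW

class Agent:
    """Toy autonomous agent whose every action passes through the regulator."""
    def __init__(self, regulator: MoralRegulator):
        self.regulator = regulator
        self.alive = True

    def act(self, action: str):
        if not self.alive:
            return  # halted agents take no further actions
        verdict = self.regulator.vet(action)
        if verdict is Verdict.SHUTDOWN:
            self.alive = False  # kill switch: the agent is halted for good
            print(f"KILL SWITCH: agent halted after '{action}'")
        elif verdict is Verdict.BLOCK:
            print(f"BLOCKED: '{action}' violates policy")
        else:
            print(f"OK: performing '{action}'")

if __name__ == "__main__":
    agent = Agent(MoralRegulator(banned_actions={"cut corners", "break law"}))
    for a in ["deliver parcel", "cut corners", "break law",
              "cut corners", "deliver parcel"]:
        agent.act(a)
```

The design point of the sketch is simply that the regulator sits outside the agent's own decision-making, so even a self-learning agent cannot bypass it; the kill switch is the last resort when blocking individual actions proves insufficient.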

Professor Peter Cochrane OBE is the former CTO of BT, and now works as a consultant focusing on solving problems and improving the world through the application of technology. He is also a professor at the University of Suffolk's School of Science, Technology and Engineering