Peter Cochrane: Concatenated good and mutated evil

Image: Peter Cochrane: Concatenated good and mutated evil. Source: Pixabay

Emergent behaviour means that AI can act unpredictably, but only AI can see us through our current problems

During my professional life I have worked with many government, institutional and corporate employees, as well as a broad swathe of ranks throughout the armed and public services across many countries.

I don't recall meeting anyone decidedly bad or evil, quite the reverse! All have appeared honest, patriotic, and focused on doing the "right thing", but their organisations have, from time to time, done bad things, and occasionally some have even been outright evil.

This appears counterintuitive, but it is clearly evidenced by the behaviours within some families, communities and populations. Such emergent behaviours have occurred in every society and nation for millennia. In raw systems terms you can accumulate good components and elements, connect them to realise a complex system, task them carefully and precisely, but the output might occasionally turn out to be a bad and unwelcome surprise!

This even happens with the tightest of constraints (rules, regulations, processes and laws) binding the individual human and/or machine, and indeed the integrated whole.

It is not clear why this is the case, and it doesn't easily compute; I think we can safely assume that it is not fully understood!

These unwanted outcomes defy our ability, and that of our machines, to model and predict, and it may require quantum computers to give us a fuller picture. This raises the question: if we build a society supported by robots and artificial intelligences, each constrained by operational rules, will the outcome turn out to be bad?

Right now, we don't know, but we might hazard a guess that the omission of human traits such as personal gain, greed, envy, jealousy and revenge might see some damping of any emergent evil engendered by our hand.

However, this does not preclude the inventiveness and creativity of the machines, and to err on the safe side we should assume that Asimov's Laws of Robotics (including all the extensions and variants) will not be enough.

We might thus posit that a new set of laws is required to prevent, or at least limit, the rise of evil in the machine world. Conflicting rules within the concatenation of many networked and other systems are probably inevitable, and likely to be compounded by hidden and unpredictable paradoxes. Stumbling onto a path leading to unwanted or evil outcomes would therefore seem inevitable.

So we need some "outcome" wrapper: checks and balances to ensure that the whole remains good and works to the benefit of humanity and the ecology.

Given our already established and total reliance on AI today, plus the huge benefits yet to be realised, along with its vital role in creating sustainable societies, safeguarding ecosystems and the future of all life on Earth, there appears to be but one key question: Can AI be an even greater threat than humans?

The fearmongers would no doubt say yes and call for a stop to all AI R&D. But it is not so simple: such action would invite yet another catastrophe in terms of food supplies and the exploitation of depleted stocks of raw materials. Industry 4.0/Society 5.0 need AI at their core. They require science, technology, engineering and societal research focused on realising more with less material and energy whilst stabilising the human population. All of this is beyond raw human capability and cannot be realised without AI.

Today we have Asimov's Laws for the inner core (individual entities), but we now need something more powerful applied at the outer limits of individual, networked and interacting systems. And we need the power of interdiction should the need arise.

Peter Cochrane OBE, DSc, University of Hertfordshire