We must ensure AI doesn't overpower us, argues Oxford University professor

Despite the risks, AI is 'ultimately the key portal that we have to pass through to realise the full dimensions of humanity's long-term potential,' argues Professor Nick Bostrom

If experts can successfully develop artificial intelligence (AI) with the same reasoning capability as people, it will be a "fundamental game-changer" - a step in human evolution comparable to the initial emergence of Homo sapiens as a species distinct from the other great apes - but one that also risks humanity losing control of its own future.

That's according to Professor Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, a multidisciplinary research body dedicated to "looking at big-picture questions for human civilization".

A researcher in a variety of disciplines, including physics, computational neuroscience, mathematical logic and philosophy, Professor Bostrom made the remarks while giving a presentation on the socio-economic impacts of machine learning and AI at a recent event hosted by The Royal Society.

Professor Bostrom described himself as "really excited" about the short-term benefits from advances in machine learning, but explained how he'd be disappointed if artificial intelligence didn't get much beyond performing basic tasks - such as document processing.

"It will be tragic if machine intelligence is never developed to its full potential, if we stopped before going all the way. This is ultimately the key portal that we have to pass through to realise the full dimensions of humanity's long term potential," he said.

Citing a survey of leading AI researchers, Professor Bostrom claimed that intelligent machines could be a regular and accepted part of human life as early as 2050.

"It's not a ridiculous prospect to take seriously the possibility that this could happen in the lifetime of many people alive today," he said, although he warned that the scale at which AI is developing could mean that there's not enough time to properly plan for it because "it's really not that long".

Bostrom compared the situation to global warming, a phenomenon that has been understood for some time but that humanity has still failed to act upon seriously.

"The basic mechanics of global warming have been known for 100 years, it's a very simple mechanism and we're still kind of working our way through that as a civilization to take it into account," he said.

Professor Bostrom also argued that the development of true artificial intelligence would mean far more to the world than technological developments that get people excited today - such as iPhones, tablets and wearable devices - because the advent of AI would fundamentally change the course of humanity.

"The transition to machine intelligence is of monumental significance. This isn't just one more cool technology, another source of nifty gadgets or something we'd like to create new jobs from to boost the economy. This is a fundamental game-changer, a technology unlike any other," he said.

"If we want to reach back in history for an analogy, I think we have to go back perhaps to the initial emergence of Homo sapiens from great apes, it's this factor magnitude."

Professor Bostrom claimed that such a phenomenal development would represent "the last invention that we'd need to make".

"If you really think about what it'd mean to have machine intelligence with the same general intelligence learning ability as humans have, the same flexible general purpose, machines that could do science as we can, you soon realise it has applications across all the different domains humans are active in."

Speaking earlier in the day at the Royal Society event, Dr Demis Hassabis, co-founder and CEO of DeepMind, the neuroscience-inspired AI company acquired by Google in January 2014, suggested that AI could provide great benefit to scientists attempting to solve long, complex problems.

"We might have to come to the sobering realisation that even with the smartest set of humans on the planet working on these problems, these systems may be so complex, that it's difficult for individual humans and scientific experts to have the time they need in their lifetimes to even innovate and advance," he said.

"It's my belief we're going to need some assistance and I think AI is the solution to that," Dr Hassabis added.

However, Professor Bostrom believes AI can go one step further by outright replacing researchers and suggested that in the future, a place like the Royal Society could become a "server farm".

"If you think it through, if you could produce all of this intellectual output, that we now rely on the human biological brain to produce, cheaper and more efficiently in a machine that costs less than humans, then really it's a fundamental game changer in the future of humanity," he said, adding it "could fundamentally change the human condition".

Professor Bostrom argued that the social, economic and political ramifications of this must be thought about today.

"We might very well see slow but very incremental progress that doesn't really raise any alarm bells until we're at the point where we're just one step away from something that is radically intelligent," he said, claiming that "the prospect of super intelligence raises unique challenges," especially if AI becomes so intelligent it can plan its own strategies.

"There will be a new kind of dynamics when there's an AI which is able to strategise, able to plan, that is able to conceal strategies for achieving its goals. You're not dealing with a tool, but with an agent and that's a very different kind of system," said Professor Bostrom.

That intelligence, Professor Bostrom argued, could lead to intelligent machines becoming the most powerful beings on Earth for exactly the same reason that humans have become the dominant species on the planet: our intelligence put us in that position, despite our other disadvantages.

"There are scenarios where super-intelligent systems become very powerful, basically for the same reasons that humans are very powerful. It's not because we have stronger muscles or sharper teeth, it's our brains which are slightly cleverer than chimpanzees and the other great apes," Professor Bostrom explained.

"It's those small increases in our ingenuity which have allowed us to develop all these other technologies that now place us in this dominant position on the planet. Such is the future of the gorillas depends more on what we do than what they do," he continued.

"If there were machines that exceeded our intelligence in the same way that our intelligence exceeds other animals, then it's possible that those machines could be very powerful, perhaps able to shape the future to their preferences," he warned.

That, Professor Bostrom argued, leads to "the problem of control": the issue of ensuring that, when human-level artificial intelligence is created, it is developed in such a way that it has "human values".

"Supposing we'd figured out how to produce machines with general intelligence and ensured that those intelligent machines would be safe, they'd need to achieve a particular outcome and be aligned with human values," he said. "That's potentially a very hard problem."

"There are a lot of superficial ideas that people come up with after five minutes of thinking about it that a lot of people convince themselves will solve the problem that turn out not to work," said Professor Bostrom.

"One of the key problems that's occurred in this field is a deeper understanding of the nature of this control problem and a different appreciation of just how fundamentally challenging it is to engineer an outcome which can be relied upon to act in the interests of humans even as it attains higher realms of intelligence," he said, before suggesting that the solution is for professional bodies to examine the implications sooner rather than later.

"We should establish a field of inquiry to perform foundational and technical work on the control problem. There's some parts of control problems we won't be able to solve in advance, you can only do them if you have the particular architecture with all the details of how machine intelligence will be achieved," said Professor Bostrom.

"But the more of the control problem we can work out in advance, the greater the chances that we will get our act together by the time we need it," he concluded.