AI could help solve humanity's biggest issues by taking over from scientists, says DeepMind CEO
Dr Demis Hassabis tells The Royal Society how AI could take over when leading scientists can't finish research in their lifetimes
Artificial intelligence could help humanity discover advances and benefits that might not otherwise be possible, by converting unstructured data into meaningful insights. But this must be done ethically.
That's what Dr Demis Hassabis, co-founder and CEO of DeepMind, the neuroscience-inspired AI company acquired by Google in January 2014, told the audience at a Royal Society event on the future of machine learning.
Dr Hassabis described how some of the most complex "big science" problems facing humanity are ultimately big data challenges.
"Society would like to make much more progress on the issues facing us today; climate, disease, macro-economics, even things like large-scale particle physics. All these systems are actually big information and data challenges at their core," he explained.
Given the complexity of these issues and the limits of a human lifespan, Dr Hassabis suggested that artificial intelligence could help take the reins in solving some of the world's greatest problems.
"We might have to come to the sobering realisation that even with the smartest set of humans on the planet working on these problems, these systems may be so complex, that it's difficult for individual humans and scientific experts to have the time they need in their lifetimes to even innovate and advance," he said, adding: "It's my belief we're going to need some assistance and I think AI is the solution to that."
It's for that reason, Dr Hassabis told the audience, that he and Google DeepMind are committed to trying to understand and build true artificial intelligence.
"The reason I work on this, rather than working on any of these specific scientific problems, is I see AG [artificial general intelligence] as a solution to all of these problems," he said.
Ultimately, Dr Hassabis explained, the goal of DeepMind goes hand in hand with the general goal of Google to "empower people through knowledge".
"Google's mission statement is to organise information and make it accessible and useful. And the way I interpret that is that we're trying to empower people through knowledge, and if you think about AI, it perfectly fits with this mission, because AI is really a process that converts unstructured data or information into useful, actionable knowledge," he said.
However, Dr Hassabis warned that the ethical implications must not be ignored.
"As with all new powerful technologies, which something like AI could be one of the most powerful, we must use it responsibly and ethically," he said.
"We're investigating this and there's a lot of research to be done here. But I would say that the technology itself is neutral, it's down to how we as society use it to determine whether it's beneficial or not," Dr Hassabis concluded.
Geoffrey Hinton, professor of computer science at the University of Toronto and a machine learning and artificial intelligence expert for Google, also spoke at The Royal Society event.
He argued that machine learning will be so important to the future of document processing that computers and search algorithms could eventually be able to "think like people."