Google AI crushes Go grandmaster and beats Facebook as well

DeepMind's AlphaGo uses deep learning neural networks to beat a human opponent

Google's artificial intelligence (AI) division has achieved a landmark victory over a grandmaster of the Chinese board game Go.

The AlphaGo system beat three-time European Go champion Fan Hui 5:0 in a five-game match held at DeepMind's London headquarters in October 2015.

AlphaGo was built by Google DeepMind, the AI division formed through Google's reported $400m acquisition of London-based AI company DeepMind in 2014.

The victory is notable because Go's complexity makes it impractical for computer systems to calculate all the possible moves. Chess, for example, offers around 400 possible positions after each player's first move, while Go offers around 130,000, far too many for an AI to brute-force its way past a human opponent.
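Those figures follow directly from each game's opening branching factor: chess gives each side 20 legal first moves, while Go's 19x19 board offers 361 opening points. A quick back-of-the-envelope check (exact legal-move counts vary slightly by rule set):

```python
# Back-of-the-envelope branching arithmetic behind the quoted figures.
chess_first_moves = 20                     # legal opening moves for White
chess_after_two = chess_first_moves * 20   # Black also has 20 replies
print(chess_after_two)                     # 400

go_points = 19 * 19                        # 361 intersections on the board
go_after_two = go_points * (go_points - 1)
print(go_after_two)                        # 129,960 -- the "130,000" cited
```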

Traditional Go-playing AIs construct search trees to evaluate all the possible positions. To beat a human opponent, however, AlphaGo pairs two deep learning neural networks with an advanced (Monte Carlo) tree search, which in essence plays out many simulated games branching from each candidate move and then propagates the results back up the tree to select the most promising move.
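DeepMind has not published AlphaGo's search code here, but the idea is well illustrated by a minimal Monte Carlo tree search. The sketch below applies it to a deliberately tiny stand-in game (players alternately take one to three stones from a pile; whoever takes the last stone wins), with random playouts standing in where AlphaGo would consult its networks:

```python
import math
import random

# Minimal Monte Carlo tree search sketch -- not AlphaGo's actual code.
# Toy game: players alternately take 1-3 stones from a pile; whoever
# takes the last stone wins.

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

class Node:
    def __init__(self, pile, parent=None):
        self.pile, self.parent = pile, parent
        self.children = {}                 # move -> child Node
        self.visits, self.wins = 0, 0.0    # stats for the player who moved here

def select(node):
    # Descend via UCT while the node is non-terminal and fully expanded.
    while node.pile > 0 and len(node.children) == len(legal_moves(node.pile)):
        node = max(node.children.values(),
                   key=lambda c: c.wins / c.visits
                   + math.sqrt(2 * math.log(node.visits) / c.visits))
    return node

def expand(node):
    untried = [m for m in legal_moves(node.pile) if m not in node.children]
    if not untried:
        return node                        # terminal position: nothing to add
    move = random.choice(untried)
    node.children[move] = Node(node.pile - move, parent=node)
    return node.children[move]

def rollout(pile):
    # Random playout. Returns 1 if the player who just moved INTO this
    # position goes on to win (the playout lasts an even number of plies).
    plies = 0
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        plies += 1
    return 1 if plies % 2 == 0 else 0

def backpropagate(node, result):
    # Push the playout result back to the root, flipping perspective each ply.
    while node is not None:
        node.visits += 1
        node.wins += result
        result = 1 - result
        node = node.parent

def best_move(pile, iterations=5000):
    root = Node(pile)
    for _ in range(iterations):
        leaf = expand(select(root))
        backpropagate(leaf, rollout(leaf.pile))
    return max(root.children, key=lambda m: root.children[m].visits)

print(best_move(10))   # optimal play takes 2, leaving a multiple of 4
```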

The two neural networks work in tandem, DeepMind co-founder and CEO Demis Hassabis explained in a blog post.

"AlphaGo combines an advanced tree search with deep neural networks. These networks take a description of the Go board as an input and process it through 12 different network layers containing millions of neuron-like connections," he said.

"One neural network, the 'policy network', selects the next move to play. The other neural network, the 'value network', predicts the winner of the game."

Hassabis added that the neural networks were trained on 30 million moves from games played by human experts, until AlphaGo could predict the human move 57 percent of the time, beating the previous record of 44 percent.
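That 57 percent is top-1 prediction accuracy: the fraction of positions in which the network's highest-ranked move matches the one the expert actually played. A minimal sketch of the measurement, reusing the policy_network toy above and assuming a hypothetical dataset of (board, expert_move) pairs:

```python
def move_prediction_accuracy(network, dataset):
    # dataset: iterable of (board, expert_move_index) pairs drawn from
    # records of games between human experts (hypothetical here).
    correct, total = 0, 0
    for board, expert_move in dataset:
        correct += int(network(board).argmax() == expert_move)
        total += 1
    return correct / total

# e.g. move_prediction_accuracy(policy_network, held_out_positions)
# AlphaGo reached 0.57 on this measure; the previous record was 0.44.
```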

But to go beyond merely mimicking human players, AlphaGo discovered new strategies for itself by playing thousands of games between its neural networks, learning through a process of trial and error known as reinforcement learning, which allows it to find good moves without any pre-programming.
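AlphaGo's actual self-play training is far more elaborate, but the underlying idea can be sketched with a tabular REINFORCE-style update on the same toy stones game used earlier: the policy plays both sides, then nudges its preferences toward the moves that appeared on the winning side:

```python
import math
import random
from collections import defaultdict

# Tabular REINFORCE-style self-play sketch on the toy stones game --
# an illustration of reinforcement learning, not AlphaGo's training code.
ALPHA = 0.01                        # learning rate (illustrative)
prefs = defaultdict(float)          # (pile, move) -> preference score

def legal(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def sample_move(pile):
    # Sample a move from a softmax over the current preferences.
    moves = legal(pile)
    weights = [math.exp(prefs[(pile, m)]) for m in moves]
    return random.choices(moves, weights=weights)[0]

for episode in range(20000):
    pile, player, history = 10, 0, []
    while pile > 0:                 # the same policy plays both sides
        move = sample_move(pile)
        history.append((player, pile, move))
        pile -= move
        player ^= 1
    winner = player ^ 1             # whoever took the last stone
    for p, s, m in history:
        reward = 1.0 if p == winner else -1.0
        moves = legal(s)
        weights = [math.exp(prefs[(s, mm)]) for mm in moves]
        z = sum(weights)
        for mm, w in zip(moves, weights):
            # Policy-gradient step: raise the taken move's preference on
            # a win (lowering the alternatives), and reverse on a loss.
            grad = (1.0 if mm == m else 0.0) - w / z
            prefs[(s, mm)] += ALPHA * reward * grad

best = max(legal(10), key=lambda m: prefs[(10, m)])
print(best)   # typically 2: leaving a multiple of 4 is the winning move
```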

Such techniques demand serious processing power, which Google supplied through its Cloud Platform. The end result was a human-beating AI. But while AlphaGo was designed to tackle Go, it did not rely on 'hand-crafted' rules, instead using general machine learning techniques.

This means that Google's AI technology could be applied to other games, tasks and industries, for example aiding climate modelling or complex disease analysis.

Facebook face-off

Google's announcement was doubly notable as it came just a few hours after Facebook's Mark Zuckerberg announced progress in his firm's AI research on a system that can play Go using rapid pattern-matching techniques.

"Scientists have been trying to teach computers to win at Go for 20 years. We're getting close, and in the past six months we've built an AI that can make moves in as fast as 0.1 seconds and still be as good as previous systems that took years to build," he wrote in a post on Facebook.

Google's announcement rather undermined Zuckerberg's posturing.

Nevertheless, while Google may have pipped Facebook to the post, the research being conducted by both companies indicates that AI is advancing faster than some may have predicted.

This may worry other technology luminaries who fear that AI will eventually enslave humanity.

But there are still myriad challenges to overcome before we see AIs capable of outsmarting humans on a regular basis, particularly as there are at least five schools of thought on how best to train AIs to learn.

That being said, AIs are already among us and are not too removed from the sentient computers seen in science fiction films.