UPDATED 17:25 EST / JANUARY 27 2016

NEWS

Google’s AI beat a professional Go player, and it’s kind of a big deal

AlphaGo, an artificial intelligence program developed by Google’s DeepMind team, has become the first AI to beat a professional Go player, a milestone for the field of AI research.

The rules of Go are relatively simple, yet designing an AI that can play at the level of top human players has proven extremely difficult, primarily because of the enormous number of possible positions the game allows.

“As simple as the rules are, Go is a game of profound complexity,” Demis Hassabis, CEO and co-founder of DeepMind, said on Google’s blog. “There are [10¹⁷⁰] possible positions—that’s more than the number of atoms in the universe, and more than a googol times larger than chess.”
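The scale Hassabis describes is easy to sanity-check: a 19×19 board has 361 points, each of which is empty, black, or white, so 3³⁶¹ is a loose upper bound on board configurations (the count of strictly legal positions, roughly 2 × 10¹⁷⁰, is somewhat smaller). A back-of-the-envelope check, using commonly cited estimates for the number of atoms in the observable universe (about 10⁸⁰) and for chess positions (about 10⁴⁷):

```python
# Each of the 361 points on a 19x19 Go board is empty, black, or white,
# so 3**361 loosely bounds the number of board configurations.
positions = 3 ** 361
print(len(str(positions)) - 1)     # order of magnitude: 172

atoms = 10 ** 80                   # commonly cited estimate for the observable universe
chess = 10 ** 47                   # rough estimate of the number of chess positions
googol = 10 ** 100
print(positions > atoms)           # True: more than the atoms in the universe
print(positions > chess * googol)  # True: more than a googol times larger than chess
```

Both comparisons in the quote hold even under this crude bound.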

He added, “This complexity is what makes Go hard for computers to play, and therefore an irresistible challenge to artificial intelligence (AI) researchers, who use games as a testing ground to invent smart, flexible algorithms that can tackle problems, sometimes in ways similar to humans.”

AlphaGo played (and mostly won) hundreds of games against other Go-playing AIs before the program was finally tested against Fan Hui, the reigning three-time European Go champion. AlphaGo won all five of the games it played against its human opponent, marking the first time a computer has beaten a professional Go champion.

The next test for the AI will pit it against Lee Sedol, whom Hassabis called “the top Go player in the world over the past decade.”

Why it matters

Teaching a computer to play Go may not seem like a big deal, but it is a major breakthrough for the field of artificial intelligence. Perhaps the most important takeaway is that AlphaGo was not explicitly programmed with Go strategy: it largely taught itself how to play.

“We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 percent of the time (the previous record before AlphaGo was 44 percent),” Hassabis explained.

“But our goal is to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning.”
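AlphaGo’s actual networks and training pipeline are far more elaborate, but the trial-and-error self-play idea Hassabis describes can be sketched on a toy game. In the illustrative example below (not DeepMind’s algorithm), two copies of the same value table play a miniature Nim game against each other, and moves that led to wins are reinforced while losing moves are discouraged. The game, the table, and the learning constants are all assumptions chosen to keep the sketch small.

```python
import random

# Toy self-play reinforcement learning (NOT AlphaGo's actual algorithm):
# a tabular value estimate learns tiny Nim by playing against itself.
# Rules: start with 7 stones, players alternate removing 1 or 2 stones,
# and whoever takes the last stone wins.
STONES = 7
Q = {}  # (stones_left, action) -> estimated value for the player to move

def q(s, a):
    return Q.get((s, a), 0.0)

def choose(s, eps):
    """Epsilon-greedy selection: explore sometimes, else take the best-valued move."""
    actions = [a for a in (1, 2) if a <= s]
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: q(s, a))

random.seed(0)
for _ in range(20000):                  # thousands of self-play games
    s, history = STONES, []
    while s > 0:
        a = choose(s, eps=0.2)
        history.append((s, a))
        s -= a
    # The player who moved last took the final stone and won. Walk the game
    # backwards, crediting the winner's moves (+1) and penalizing the loser's (-1).
    for i, (state, action) in enumerate(reversed(history)):
        outcome = 1.0 if i % 2 == 0 else -1.0
        Q[(state, action)] = q(state, action) + 0.1 * (outcome - q(state, action))

# Under optimal play, leaving a multiple of 3 stones wins, so from 7 stones
# the learned policy should take 1 stone (leaving 6).
print(choose(7, eps=0.0))
```

After training, the greedy policy takes one stone from a pile of seven, the move that leaves the opponent in a losing position; no Nim strategy was ever programmed in, only win/loss feedback from self-play.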

Hassabis noted that all of that trial-and-error learning required an immense amount of computing power, and in AlphaGo’s case, the AI took advantage of Google Cloud Platform.

“While games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real-world problems,” Hassabis concluded. “Because the methods we’ve used are general-purpose, our hope is that one day they could be extended to help us address some of society’s toughest and most pressing problems, from climate modelling to complex disease analysis. We’re excited to see what we can use this technology to tackle next!”

Google published a full report on the methodology behind AlphaGo in the scientific journal Nature.

Photo by chadmiller
