UPDATED 18:16 EDT / OCTOBER 18 2017


Google’s AlphaGo AI is getting smarter without any human input

DeepMind Technologies Inc.’s AlphaGo is already better at the ancient board game Go than any human player, but the Alphabet Inc.-owned lab has continued improving its artificial intelligence champion. In a new research paper published today, DeepMind showed just how far AlphaGo has come in the last two years: It’s now so smart that it’s learning almost entirely without human input.

DeepMind published the new paper in the journal Nature under the title “Mastering the game of Go without human knowledge,” and it describes the new AlphaGo Zero. Unlike previous versions of AlphaGo, which learned to play by analyzing data from real Go games played by human experts, AlphaGo Zero was given only the rules of the game and taught itself to play entirely through self-play, starting from completely random moves.
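To make that idea concrete, here is a minimal sketch of what a self-play training loop of this general kind looks like in Python. Everything in it is an illustrative placeholder (a random policy standing in for the neural network, a stubbed-out “game,” made-up function names), not DeepMind’s actual code:

```python
# Minimal sketch of a self-play reinforcement learning loop.
# All names here are illustrative placeholders, not DeepMind's code or API.
import random

def random_policy(state):
    """Stand-in for the neural network: picks a legal move at random."""
    return random.choice(state["legal_moves"])

def play_self_play_game(policy):
    """Plays one game of the current policy against itself and records
    (state, move, outcome) examples. The 'game' here is a trivial stub."""
    history = []
    state = {"legal_moves": list(range(9)), "to_play": +1}
    for _ in range(5):  # a few moves, just to show the shape of the loop
        move = policy(state)
        history.append((dict(state), move))
        state["to_play"] = -state["to_play"]
    # A real game would score the final position; here the winner is random.
    outcome = random.choice([+1, -1])
    # Label each recorded position with the result from that player's view.
    return [(s, m, outcome * s["to_play"]) for s, m in history]

def update_policy(policy, examples):
    """Placeholder for the gradient update on the network parameters."""
    return policy  # a real trainer would fit the policy to these examples

if __name__ == "__main__":
    policy = random_policy
    replay_buffer = []
    for iteration in range(3):
        # 1. Generate games by playing the current policy against itself.
        for _ in range(10):
            replay_buffer.extend(play_self_play_game(policy))
        # 2. Train the policy on positions sampled from those games.
        policy = update_policy(policy, replay_buffer)
        print(f"iteration {iteration}: {len(replay_buffer)} training examples")
```

The point of the structure is that the system is both the teacher and the student: each round of self-play generates the training data for the next, stronger version of the player.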

AlphaGo Zero also differs from its predecessors in a few other ways. For example, the new version uses only the black and white stones on the board as its input, rather than relying on hand-engineered features. AlphaGo Zero also uses a single neural network rather than two, and it relies on that network to evaluate positions rather than using rapidly generated “rollouts” of possible moves.
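The “single network” idea can be pictured as one model with a shared trunk and two output heads: one that scores candidate moves and one that estimates who is winning. The sketch below, written in PyTorch with made-up layer sizes, is only an illustration of that shape, not the architecture described in the paper:

```python
# Sketch of a two-headed network: raw stone planes in, policy and value out.
# Layer sizes and the 3-plane input encoding are illustrative assumptions.
import torch
import torch.nn as nn

BOARD = 19  # standard Go board size

class DualHeadNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Input: 3 planes - current player's stones, opponent's stones, side to move.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Policy head: one logit per board point, plus one for "pass".
        self.policy_head = nn.Linear(channels * BOARD * BOARD, BOARD * BOARD + 1)
        # Value head: a single score in [-1, 1] estimating who is winning.
        self.value_head = nn.Linear(channels * BOARD * BOARD, 1)

    def forward(self, boards):
        features = self.trunk(boards).flatten(start_dim=1)
        policy_logits = self.policy_head(features)
        value = torch.tanh(self.value_head(features))
        return policy_logits, value

if __name__ == "__main__":
    net = DualHeadNet()
    batch = torch.zeros(1, 3, BOARD, BOARD)  # an empty board
    policy_logits, value = net(batch)
    print(policy_logits.shape, value.shape)  # torch.Size([1, 362]) torch.Size([1, 1])
```

Sharing one network for both jobs means the move-selection and position-evaluation signals come from the same learned representation of the board, instead of from two separately trained systems.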

It may seem strange to see Alphabet pouring so much research into a board game, but the success of AlphaGo Zero demonstrates the possibility of a future where AI can be trained without massive amounts of data. This would be a major benefit in fields where real data is scarce or difficult to gather, such as in law enforcement or medicine.

Indeed, one persistent critique of machine learning is that it requires far more data and computing power than, say, a child needs to learn to identify objects. That’s a sign that deep learning neural networks, the dominant machine learning method behind recent breakthroughs in image and speech recognition, are still far from actually emulating the human brain.

According to DeepMind, AlphaGo Zero represents a major step forward in building AI that can tackle difficult problems, and the same method of reinforcement learning can be applied to a wide range of use cases.

“Over the course of millions of AlphaGo vs. AlphaGo games, the system progressively learned the game of Go from scratch, accumulating thousands of years of human knowledge during a period of just a few days,” DeepMind Chief Executive Demis Hassabis and Research Scientist David Silver said in a blog post. “AlphaGo Zero also discovered new knowledge, developing unconventional strategies and creative new moves that echoed and surpassed the novel techniques it played in the games against Lee Sedol and Ke Jie.”

Although the researchers said it’s still early days, they called AlphaGo Zero a “critical step” toward the eventual goal of building general-purpose learning algorithms. “If similar techniques can be applied to other structured problems, such as protein folding, reducing energy consumption or searching for revolutionary new materials, the resulting breakthroughs have the potential to positively impact society,” they said.

Photo: DeepMind Technologies
