UPDATED 16:08 EDT / JULY 10 2017

EMERGING TECH

Google DeepMind is teaching an AI how to walk

DeepMind Technologies Inc. has built artificial intelligences that can defeat the world’s top Go players or beat dozens of old video games in no time at all. Now, the Alphabet Inc.-owned company is setting its sights on something a bit more complicated: teaching an AI to walk.

The idea behind DeepMind’s project is not necessarily to create a walking AI, but rather to use simulated movement as a test for new machine learning techniques. According to a blog post by the DeepMind research team, the end goal is to produce “flexible and natural behaviours that can be reused and adapted to solve tasks.”

Although games like Go have well-defined goals that are relatively simple for an AI to figure out, the DeepMind team explained that teaching an AI how to perform a physical action like jumping or doing a backflip is a lot more complicated. “The difficulty of accurately describing a complex behaviour is a common problem when teaching motor skills to an artificial system,” the team said.

DeepMind published three research papers today on its methods for teaching movement to an AI, the first of which outlines how it trained an AI to overcome obstacles by giving it a simple goal like “move forward.” Some of the agents it trained were bipedal like humans; others were modeled as simplified four-legged animals.

“Specifically, we trained agents with a variety of simulated bodies to make progress across diverse terrains, which require jumping, turning and crouching,” the DeepMind team said. “The results show our agents develop these complex skills without receiving specific instructions, an approach that can be applied to train our systems for multiple, distinct simulated bodies.”
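That “move forward” objective can be made concrete. The sketch below is illustrative only, not DeepMind’s code: the function names, the control-cost weight, and the toy one-parameter “walker” dynamics are all invented for the example. The key idea it shows is that the agent is scored purely on forward displacement, with at most a small penalty on actuation, and is never told how to move:

```python
import random

def move_forward_reward(x_before: float, x_after: float,
                        actions: list[float],
                        ctrl_cost_weight: float = 0.01) -> float:
    """Score forward displacement, minus a small penalty for large actuations."""
    progress = x_after - x_before
    ctrl_cost = ctrl_cost_weight * sum(a * a for a in actions)
    return progress - ctrl_cost

def rollout(effort: float, steps: int = 50, seed: int = 0) -> float:
    """Run a toy 'walker' whose whole policy is a single effort parameter."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(steps):
        action = effort * (1 + 0.1 * rng.uniform(-1, 1))
        # Invented toy dynamics: moderate effort gains ground, flailing doesn't.
        x_new = x + max(0.0, action - 0.5 * action * action)
        total += move_forward_reward(x, x_new, [action])
        x = x_new
    return total
```

An agent trained against a signal like this is never told what a gait looks like; anything that gains distance scores well, which is why the simulated bodies in the paper discover such varied movement strategies on their own.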

In another paper, DeepMind demonstrates an AI that learned human movements by studying motion capture data. The AI achieved movement that “looks human-like,” and it learned how to perform several different types of motion, such as getting up from a fall or walking up stairs.
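One common way to turn motion-capture data into a learning signal is a tracking-style reward that scores how closely the agent’s pose matches the reference pose at each timestep. The sketch below illustrates that general idea only; it is not necessarily the paper’s exact formulation, and the function name, scale constant, and example joint angles are invented:

```python
import math

def pose_tracking_reward(agent_pose, reference_pose, scale=2.0):
    """Reward in (0, 1]: 1.0 for a perfect match with the mocap reference,
    decaying exponentially as the agent's joint angles drift from it."""
    squared_error = sum((a - r) ** 2 for a, r in zip(agent_pose, reference_pose))
    return math.exp(-scale * squared_error)

# Example: a pose close to the mocap reference outscores a distant one.
reference = [0.0, 0.5, -0.3]                       # joint angles from a clip
near = pose_tracking_reward([0.05, 0.45, -0.3], reference)
far = pose_tracking_reward([0.8, -0.2, 0.4], reference)
```

Maximizing a reward like this over many timesteps pushes the agent to reproduce the recorded trajectory, which is why the resulting motion “looks human-like” rather than merely effective.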

In its final paper, DeepMind showed an AI that could look at two different poses and predict how a body might have transitioned between them. For example, the AI could look at a person standing and a person bent over and predict how they moved from one pose to the other (pictured).

Each AI agent that DeepMind demonstrated could simulate only relatively simple movements, but together they show how far researchers have come in pushing AI beyond the clean and simple world of data and into the messiness of the real world. Some of the techniques DeepMind describes in its papers could eventually be used to build more complicated AI agents without having to tweak them manually.

The DeepMind team said that further work on these methods could “enable coordination of a greater range of behaviours in more complex situations.” In other words, its AI has to learn to walk before it can learn to run.

Photo: DeepMind
