UPDATED 18:06 EST / SEPTEMBER 23 2016

Deep learning researchers train AI killing machines with ‘Doom’

We all worry that the robot apocalypse is basically inevitable. Now researchers at Carnegie Mellon University appear to be speeding it along.

The researchers are teaching artificial intelligences how to become more efficient killing machines. How? With violent video games, of course.

Guillaume Lample and Devendra Singh Chaplot, graduate students at Carnegie Mellon’s School of Computer Science, recently published a paper titled “Playing FPS Games with Deep Reinforcement Learning,” which explains how they trained artificial intelligence to play the classic first-person shooter game, Doom.

Because video games offer a controlled environment, they have become a popular testing ground for artificial intelligence, and nearly every major player in the AI space has turned to gaming in its research. For example, Microsoft Corp. has created an entire open-source AI testing platform built on Minecraft, and Google famously put DeepMind’s machine learning capabilities to work on dozens of classic Atari games.

According to Lample and Chaplot, AI agents trained in these sorts of simulations often have access to information they would not have in a real-world scenario. The researchers explained that they chose Doom as the medium for their experiments because its 3D environments let them test how well an AI copes with “partially observable states,” in which the agent can see only part of the information available in the environment at any given moment.

“Advances in deep reinforcement learning have allowed autonomous agents to perform well on Atari games, often outperforming humans, using only raw pixels to make their decisions,” Lample and Chaplot explained in their paper. “However, most of these games take place in 2D environments that are fully observable to the agent.”
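To get a rough sense of what “using only raw pixels” means in practice, here is a minimal sketch of a deep Q-network that maps a stack of screen frames to a score for each possible action. It is a generic, illustrative architecture in the style of DeepMind’s Atari work, written with PyTorch; the 84-by-84 frames, the four-frame stack and the layer sizes are conventional assumptions, not the specific model described in Lample and Chaplot’s paper.

```python
# Minimal, generic deep Q-network over raw pixel frames (illustrative only;
# not the architecture from Lample and Chaplot's paper). Assumes PyTorch.
import torch
import torch.nn as nn

class PixelQNetwork(nn.Module):
    def __init__(self, num_actions: int, frame_stack: int = 4):
        super().__init__()
        # The convolutional layers read a stack of recent grayscale frames;
        # stacking frames is one common way to cope with the fact that a
        # single screen does not reveal the full game state.
        self.conv = nn.Sequential(
            nn.Conv2d(frame_stack, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),  # one Q-value per possible action
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, frame_stack, 84, 84) pixel tensor scaled to [0, 1]
        return self.head(self.conv(frames))

# The agent simply picks the action with the highest predicted Q-value.
q_net = PixelQNetwork(num_actions=8)
screen = torch.rand(1, 4, 84, 84)          # stand-in for a preprocessed screen
action = q_net(screen).argmax(dim=1).item()
```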

Dealing with messy problems

With Doom, Lample and Chaplot used machine learning to train AI that could react to constantly changing situations in real time, both against computer-controlled opponents and against real human players. In these tests, the AI had to learn how to navigate Doom’s 3D environment while hunting down useful items, eliminating enemies and keeping itself alive. According to the researchers, the AI managed to outperform humans in competitive multiplayer, even on maps it had never played before.
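As a loose illustration of how such an agent is trained by trial and error inside the game, the sketch below shows an interaction loop in which the agent sees only the current screen, picks an action, and receives reward from the game for kills, item pickups and survival. This is not code from the paper: the environment calls assume the open-source ViZDoom toolkit’s documented Python API (DoomGame, get_state, make_action), the scenario file name and the preprocess helper are placeholders, and PixelQNetwork comes from the sketch above.

```python
# Illustrative only: an epsilon-greedy interaction loop in a Doom scenario,
# assuming the open-source ViZDoom toolkit's Python API and the PixelQNetwork
# sketched earlier. Not the training code from Lample and Chaplot's paper.
import random
from collections import deque

import torch
from vizdoom import DoomGame

game = DoomGame()
game.load_config("scenarios/deathmatch.cfg")   # placeholder scenario file
game.init()

num_actions = game.get_available_buttons_size()
q_net = PixelQNetwork(num_actions=num_actions)
replay_buffer = deque(maxlen=100_000)          # (frames, action, reward) transitions
epsilon = 1.0                                  # exploration rate, decayed over time

game.new_episode()
while not game.is_episode_finished():
    # The agent observes only the raw screen, not the underlying game state.
    frames = preprocess(game.get_state().screen_buffer)  # hypothetical helper -> (1, 4, 84, 84) tensor

    if random.random() < epsilon:
        action_id = random.randrange(num_actions)          # explore
    else:
        with torch.no_grad():
            action_id = q_net(frames).argmax(dim=1).item() # exploit learned Q-values

    # One-hot button press; the game returns the reward for this step
    # (kills, item pickups and survival are rewarded, dying is penalized).
    action = [int(i == action_id) for i in range(num_actions)]
    reward = game.make_action(action)

    replay_buffer.append((frames, action_id, reward))
    epsilon = max(0.05, epsilon * 0.9999)
    # ... periodically sample minibatches from replay_buffer to update q_net

game.close()
```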

The goal behind Lample and Chaplot’s experiment is not to make AI better at killing people (we hope), but rather to demonstrate how AI can learn to deal with messy, real-life problems where important information may be missing.

For many years, AI researchers have favored games like chess and Go because they are perfect information games: both players can see everything that is happening and has happened in a match, so nothing is hidden from either side.

Unfortunately, even with the continuing Big Data revolution, real-world problems are rarely so clean-cut. To handle truly complex problems, AI will need to get better at making what amount to judgment calls with incomplete information.

You can watch a video of Lample and Chaplot’s Doom-playing AI below:

Screenshot via Devendra Chaplot | YouTube
