UPDATED 12:39 EDT / FEBRUARY 09 2017

EMERGING TECH

DeepMind’s AIs can fight or cooperate – just like people

As artificial intelligence becomes more commonly used for everything from catching cancer early to describing images to the blind, what happens when two or more agents have overlapping goals? Will they battle for dominance or help one another out?

According to Alphabet Inc.-owned DeepMind Technologies Ltd., the answer is both. DeepMind pitted two self-interested AI agents against one another in a simple 2D gathering game that involved collecting apples. Each agent also had a special ability: it could fire a beam at the other agent to temporarily disable it, though neither agent received any direct reward for doing so.

“Rather naturally, when there are enough apples in the environment, the agents learn to peacefully coexist and collect as many apples as they can,” the DeepMind team explained in a blog post. “However, as the number of apples is reduced, the agents learn that it may be better for them to tag the other agent to give themselves time on their own to collect the scarce apples.”

Interestingly, DeepMind also noted that agents that can employ more complex strategies were more likely to use their beam ability regardless of the number of apples available. In other words, a smarter AI was more likely to be aggressive rather than cooperative.
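The scarcity trade-off the team describes can be sketched with a toy model. Every number here — the apple spawn rate, the beam's disable time, the cost of aiming — is an illustrative assumption, not a parameter from DeepMind's actual experiment:

```python
def collected(apple_rate, steps, capacity=1.0, tag=False, disable=25, aim_cost=5):
    """Apples one agent gathers in a toy two-agent episode.

    Each step, `apple_rate` apples spawn and an agent can pick up at
    most `capacity` of them. While both agents are active they split
    the spawn; while the rival is beam-disabled, one agent takes it
    all. Aiming the beam costs `aim_cost` steps of gathering time.
    All values are illustrative, not DeepMind's settings.
    """
    def per_step(active_rivals):
        share = apple_rate / (1 + active_rivals)
        return min(share, capacity)

    if not tag:
        return per_step(1) * steps
    solo = per_step(0) * disable                      # rival disabled
    shared = per_step(1) * max(steps - disable - aim_cost, 0)
    return solo + shared

# Plentiful apples: tagging only wastes gathering time.
print(collected(3.0, 100))            # 100.0
print(collected(3.0, 100, tag=True))  # 95.0
# Scarce apples: solo time with the spawn outweighs the aiming cost.
print(collected(0.5, 100))            # 25.0
print(collected(0.5, 100, tag=True))  # 30.0
```

In this toy version, the beam pays off only when apples are scarce enough that doubling your share during the rival's downtime beats the steps lost to aiming — which matches the behavior the agents learned.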


This might sound like bad news for the future of humanity, but fortunately for our long-term survival, DeepMind discovered that the more complex AI could also be more cooperative in a different environment. In another experiment, DeepMind tested its agents in a game called Wolfpack. In this game, the agents played as wolves who had to work together to navigate a 2D environment while pursuing their prey.

If the AI wolves were close together when the prey was caught, both received credit for the capture regardless of which agent actually reached it first. This meant the agents were more successful when they worked together to surround and trap the prey. Unlike in the gathering game, more complex AI agents were more likely to cooperate in Wolfpack than less complex ones.
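That shared-credit rule can be expressed as a short reward function. The capture radius and reward value below are hypothetical stand-ins, not DeepMind's actual settings:

```python
from math import dist  # Euclidean distance, Python 3.8+

def wolfpack_rewards(wolves, prey, radius=2.0, reward=1.0):
    """Toy Wolfpack payout at the moment of capture.

    Every wolf within `radius` of the prey's position receives the
    full `reward`, regardless of which wolf actually caught it.
    Positions are (x, y) tuples; the parameters are illustrative.
    """
    return [reward if dist(w, prey) <= radius else 0.0 for w in wolves]

# Two wolves closing in together: both are paid.
print(wolfpack_rewards([(0, 0), (1, 1)], prey=(0, 1)))  # [1.0, 1.0]
# A straggler far from the capture gets nothing.
print(wolfpack_rewards([(0, 0), (9, 9)], prey=(0, 1)))  # [1.0, 0.0]
```

Because a lone wolf far from the capture earns nothing, the payoff structure itself rewards staying close and herding the prey together, which is why greater strategic capacity pushed agents toward cooperation here rather than aggression.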

“So, depending on the situation, having a greater capacity to implement complex strategies may yield either more or less cooperation,” the DeepMind team explained.

DeepMind’s experiments are incredibly simplified compared with the real-world problems that AI agents are tackling every day, but the research team said that their findings show that AI can simulate how new policies could affect cooperation.

“As a consequence, we may be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet – all of which depend on our continued cooperation,” the DeepMind team concluded.

For more in-depth information, you can read DeepMind’s full research paper about the experiments. DeepMind has also published videos of its gathering and Wolfpack games.

