UPDATED 23:22 EDT / MAY 14 2018

EMERGING TECH

To speed up machine learning, Google’s DeepMind simulates the effect of dopamine on the human brain

Artificial intelligence systems based on deep learning algorithms have shown their ability to outperform real people in all manner of tasks, including classifying images and playing classic board games such as chess and Go.

But despite those impressive achievements, deep learning systems still struggle to compete with humans when it comes to the speed at which they can learn new concepts. For example, machine learning algorithms still require hundreds of hours of training to master simple video games such as Breakout and Pong, something the average human can achieve in just a few hours.

Google Inc.’s deep learning subsidiary DeepMind Technologies Ltd. believes that the secret behind humans’ ability to learn new ideas and concepts so quickly might have something to do with dopamine, a neurotransmitter released by neurons in the brain that is believed to play a role in emotions, movement and sensations of pain and pleasure.

In a paper published Monday in the journal Nature Neuroscience, DeepMind’s researchers described the concept of “meta learning,” the process of learning from examples and deriving rules from them over time so that new concepts can be picked up faster. Scientists believe that meta learning is what allows people to acquire fresh knowledge more easily than computers can, but the process itself is not well understood.

To improve that understanding, DeepMind’s researchers modeled the relevant brain physiology using a recurrent neural network, a type of neural network that maintains an internal state of its past actions and can draw on those experiences while it’s learning.

To test their theory about dopamine’s role in human learning, the researchers simulated it as a reward signal that mathematically optimizes the machine learning algorithm over time through trial and error.
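The reward-driven trial-and-error process described above can be illustrated with a minimal sketch: a softmax policy over two choices, updated by a REINFORCE-style rule in which a dopamine-like reward prediction error (actual minus expected reward) scales every update. This is only a toy illustration of the general technique, not DeepMind’s model; every name and parameter here is invented for the example.

```python
import math
import random

random.seed(0)

def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

# Toy two-armed bandit: arm 0 pays off 80% of the time, arm 1 only 20%.
PAYOFF = [0.8, 0.2]

prefs = [0.0, 0.0]   # action preferences (policy parameters)
baseline = 0.0       # running estimate of expected reward
alpha = 0.2          # learning rate

for step in range(2000):
    probs = softmax(prefs)
    action = 0 if random.random() < probs[0] else 1
    reward = 1.0 if random.random() < PAYOFF[action] else 0.0

    # "Dopamine-like" reward prediction error: actual minus expected reward.
    rpe = reward - baseline
    baseline += 0.05 * rpe

    # REINFORCE update: raise the chosen action's preference when the
    # outcome beat expectations, lower it otherwise.
    for a in range(2):
        grad = (1.0 - probs[a]) if a == action else -probs[a]
        prefs[a] += alpha * rpe * grad

print(softmax(prefs)[0])  # probability assigned to the better arm
```

After training, the policy assigns most of its probability to the better-paying arm, purely from trial-and-error feedback.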

DeepMind’s team trained its algorithm on six neuroscientific meta learning experiments and compared its performance with that of animals that had performed the same tests. One of the tests was based on the Harlow Experiment, in which monkeys chose between two random objects, one of which was associated with a food reward. The monkeys in the original experiment quickly learned how to select the “correct” object: they chose randomly at first, then always picked the object associated with the reward thereafter.
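The abstract rule the monkeys acquired — choose randomly on the first trial of each new problem, then stick with whichever object pays off — can be sketched as a small simulation of the task structure. Everything below is a hypothetical illustration, not the experiment’s or DeepMind’s actual code.

```python
import random

random.seed(1)

def harlow_episode(agent, n_trials=6):
    """One Harlow 'problem': two novel objects, one always rewarded.
    Object positions are shuffled every trial; identities stay fixed."""
    objects = [f"obj{random.randrange(10**6)}" for _ in range(2)]
    rewarded = random.choice(objects)
    rewards = []
    agent.reset()
    for _ in range(n_trials):
        random.shuffle(objects)              # left/right swapped at random
        choice = agent.choose(objects)
        r = 1 if choice == rewarded else 0
        agent.observe(choice, r)
        rewards.append(r)
    return rewards

class AbstractRuleAgent:
    """Implements the rule the monkeys eventually acquired: pick randomly
    on the first trial, then stick with any object that earned a reward."""
    def reset(self):
        self.known_good = None
        self.known_bad = None
    def choose(self, objects):
        if self.known_good in objects:
            return self.known_good
        if self.known_bad in objects:
            return next(o for o in objects if o != self.known_bad)
        return random.choice(objects)
    def observe(self, choice, reward):
        if reward:
            self.known_good = choice
        else:
            self.known_bad = choice

agent = AbstractRuleAgent()
episodes = [harlow_episode(agent) for _ in range(200)]
# Once the rule is in place, every trial after the first is correct.
later = [r for ep in episodes for r in ep[1:]]
print(sum(later) / len(later))  # prints 1.0
```

The point of the meta learning result is that the trained network discovers this rule on its own and applies it to objects it has never seen, rather than having the rule hand-coded as it is here.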


DeepMind’s algorithm was easily able to match the performance of monkeys in the so-called “Harlow Experiment.” Image: Christels/Pixabay

Thanks to its artificially created dopamine, DeepMind’s algorithm could complete the tests to the same degree of competency as the original animal subjects by making “reward-associated” choices from images it had never previously seen.

The results reinforce the view that dopamine plays a key role in human learning, the researchers said. In animals, they explained, dopamine is believed to strengthen synaptic links in the prefrontal cortex as a way of reinforcing learned behaviors. However, the neural network’s behavior also indicates that dopamine helps the brain convey and remember information about tasks and rules, the researchers said.

“Neuroscientists have long observed similar patterns of neural activations in the prefrontal cortex, which is quick to adapt and flexible, but have struggled to find an adequate explanation for why that’s the case,” DeepMind’s team said in a blog post. “The idea that the prefrontal cortex isn’t relying on slow synaptic weight changes to learn rule structures, but is using abstract model-based information directly encoded in dopamine, offers a more satisfactory reason for its versatility.”

The researchers said their experiments suggest that AI can benefit from neuroscience, just as neuroscience already benefits from AI.

“Leveraging insights from AI which can be applied to explain findings in neuroscience and psychology highlights the value each field can offer the other,” they wrote. “Going forward, we anticipate that much benefit can be gained in the reverse direction, by taking guidance from specific organization of brain circuits in designing new models for learning in reinforcement learning agents.”

Image: 95C/Pixabay
