UPDATED 16:30 EDT / MAY 05 2016

NEWS

As AI moves to the chip, mobile devices are about to get much smarter

The branch of artificial intelligence called deep learning has given us new wonders such as self-driving cars and instant language translation on our phones. Now it’s about to inject smarts into every other object imaginable.

That’s because makers of silicon processors from giants such as Intel Corp. and Qualcomm Technologies Inc. as well as a raft of smaller companies are starting to embed deep learning software into their chips, particularly for mobile vision applications. In fairly short order, that’s likely to lead to much smarter phones, drones, robots, cameras, wearables and more.

“Consumers will be genuinely amazed at the capabilities of these devices,” said Cormac Brick, vice president of machine learning for Movidius Ltd., a maker of vision processor chips in San Mateo, Calif.

Movidius was one of several chip companies that presented their designs at this week’s Embedded Vision Summit in Santa Clara, Calif. Movidius, which in January announced a deal to supply Google Inc. with a chip (pictured above) for a yet-to-be-announced mobile device from the search giant, also debuted on April 28 what it calls the first deep learning module on a USB stick. Drawing 1 watt of power, it’s intended to bring neural network capabilities to mobile devices such as drones, cameras and robots.

Mobile chip leader Qualcomm announced a software developer kit for its Snapdragon Neural Processing Engine, allowing smartphones, drones and other devices to better track objects and recognize sounds. Intel, ARM Holdings, CEVA Inc. and Cadence Design Systems Inc. also promoted their chips’ utility for deep learning. And Google, one of the leaders in deep learning, this week announced that its open-source TensorFlow deep learning software will support lower-power eight-bit processors that are crucial for mobile applications.
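
To see what eight-bit support means in practice, here is a minimal, hypothetical sketch in plain Python and NumPy (not TensorFlow’s actual quantization tooling) of how 32-bit floating-point weights can be packed into unsigned 8-bit codes plus a scale and offset, which is what lets low-power integer hardware handle the math:

    # Illustrative only: map float32 weights to uint8 codes plus scale/offset,
    # shrinking storage roughly 4x at the cost of a small rounding error.
    import numpy as np

    def quantize_uint8(weights):
        lo, hi = float(weights.min()), float(weights.max())
        scale = (hi - lo) / 255.0 if hi > lo else 1.0
        codes = np.round((weights - lo) / scale).astype(np.uint8)  # 8-bit codes
        return codes, scale, lo

    def dequantize(codes, scale, lo):
        return codes.astype(np.float32) * scale + lo  # approximate float weights

    weights = np.random.randn(256, 256).astype(np.float32)
    codes, scale, lo = quantize_uint8(weights)
    print("max rounding error:", np.abs(dequantize(codes, scale, lo) - weights).max())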

Deep learning roughly emulates some of the activity of neurons in the brain, allowing computers to learn to recognize patterns in masses of data. “We have so much data that we need computers to do the understanding, not people,” explained Jeff Dean, Google’s senior fellow in charge of its Google Brain project. Deep learning’s ability to analyze and, even more important, learn from all that data is responsible for recent advances in speech and object recognition and natural language processing.
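
At its simplest, each artificial “neuron” is a weighted sum of its inputs passed through a nonlinearity, and stacking layers of them and tuning the weights against data is what lets a network learn to recognize patterns. A toy illustration in NumPy, with made-up layer sizes:

    import numpy as np

    def layer(x, W, b):
        # one layer of artificial neurons: weighted sum of inputs,
        # then a nonlinearity (ReLU) so the network can model non-linear patterns
        return np.maximum(0.0, x @ W + b)

    x = np.random.randn(1, 784)             # e.g. a flattened 28x28 image
    W1, b1 = np.random.randn(784, 128) * 0.01, np.zeros(128)
    W2, b2 = np.random.randn(128, 10) * 0.01, np.zeros(10)
    scores = layer(x, W1, b1) @ W2 + b2     # 10 class scores; training adjusts W and b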

Jeff Dean, head of the Google Brain project, speaks at the Embedded Vision Summit May 2 in Santa Clara, Calif. (Photo: Robert Hof)

Even as deep learning and other AI technologies are increasingly being offered as cloud services by the likes of Google, IBM and Microsoft Corp., the unique needs of mobile devices are pushing AI down to the chip.

In particular, deep learning is getting applied intensively to various computer vision problems that generally depend on local processing of data to be useful in real time. It turns out that for many vision applications, deep learning is proving superior to most other computer vision technologies developed over the last few decades. That’s prompting a wholesale move to the new algorithms.

“We’re in a land rush phase,” said Jeff Bier, founder and president of the embedded-chip consultant Berkeley Design Technology Inc. and founder of the Embedded Vision Alliance. “Chip makers are optimizing the silicon to run these algorithms at the lowest power.”

The low-power needs of mobile devices are one key reason deep learning algorithms are getting embedded in chips. Moving data back and forth between devices and the cloud takes a lot of power, and requires larger, more powerful and more costly parallel-processing chips such as Nvidia’s graphics processing units (GPUs), said Brick. “Intelligence in the mobile and embedded devices is key for ubiquitous use of vision,” said Raj Talluri, Qualcomm’s senior vice president of engineering for the Internet of Things and mobile computing.

What’s more, some applications, such as self-driving cars (especially collision avoidance and braking), can’t tolerate the inherent lag of connecting to the cloud, especially on top of the time it takes to move data between processors and memory, a continuing issue. “Latency is probably the next big problem we have to solve,” said Talluri. “Latency makes the difference between useful and useless.”
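
A rough back-of-the-envelope illustration of the stakes, using assumed figures rather than numbers cited at the summit:

    # How far a car travels while waiting on a cloud round trip versus an on-chip result.
    speed_m_per_s = 30.0          # ~108 km/h, assumed highway speed
    cloud_round_trip_s = 0.100    # assumed 100 ms network plus server latency
    on_chip_inference_s = 0.005   # assumed 5 ms for local inference

    print("distance covered waiting on the cloud:", speed_m_per_s * cloud_round_trip_s, "m")
    print("distance covered with on-chip inference:", speed_m_per_s * on_chip_inference_s, "m")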

Finally, security and privacy concerns sometimes weigh toward processing data on the device rather than storing it in the cloud.

As deep learning becomes more ubiquitous, both in the cloud and on the chip, it could allow many more companies to develop new applications, especially ones that make computers act a little more human.

Some of the new applications envisioned include drones that can fly straight and understand the best places to land without needing Global Positioning System capabilities, smart cameras that recognize what people are doing in a scene (such as an elderly person who has fallen) and robots that know rooms well enough to recognize what’s clutter and straighten it out.

Image from Movidius

