UPDATED 20:00 EST / JULY 27 2018

CLOUD

Google is moving compute intelligence to the edge with new offerings

By their very nature, the multitude of devices that make up the internet of things function better at the edge of cloud computing, with analytics and knowledge generation pushed away from the central data center. This allows much quicker response times and communications, a vital feature in a field always pressing to lower latencies across the board.

At the Google Cloud Next event this week, Google LLC introduced two new products specifically for edge compute. The first is Cloud IoT Edge, a software stack that can run on gateway devices, cameras, or any connected device that has compute capabilities. The second product is Edge TPU, a high-performance chip that can run machine-learning inference on the edge device itself.

“With the combination of Cloud IoT Edge as a software stack and with our Edge TPU, we think we have an integrated machine learning solution on Google Cloud Platform,” said Indranil Chakraborty (pictured), product lead, IoT, at Google Cloud.

Chakraborty spoke with John Furrier (@furrier) and Jeff Frick (@JeffFrick), co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the Google Cloud Next event in San Francisco. In addition to discussing Google’s commitment to supporting IoT, they spoke about IoT connectivity challenges. (* Disclosure below.)

Improving productivity, even when a device isn’t connected 24×7

LG CNS was looking to improve factory productivity. It built a machine-learning model to detect defects on its assembly line using Google's Cloud Machine Learning Engine. The company enlisted one engineer and gave him a couple of weeks to train the model in the cloud. Now, with Cloud IoT Edge and the Edge TPU, the company can run that trained model locally on the camera itself, enabling real-time defect analysis on a rapidly moving assembly line.

One of the continuing challenges of IoT arises when sensors are located, for example, on windmill farms or in oil wells, where connectivity may be limited and unreliable. As long as there is enough connectivity to download an updated model, firmware, or software, it's possible to run local compute and local machine-learning inference on the edge device itself, Chakraborty explained.

“So you can train in the cloud, push down the updates to the edge device, and you can run local compute and intelligence on the device itself,” he concluded.
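The train-in-the-cloud, infer-at-the-edge loop Chakraborty describes can be sketched in plain Python. This is purely illustrative: the names `train_in_cloud` and `EdgeDevice`, and the trivial threshold "model," are hypothetical stand-ins, not part of Cloud IoT Edge or any Google API.

```python
# Illustrative sketch of the cloud-train / edge-infer workflow.
# train_in_cloud and EdgeDevice are hypothetical names for
# illustration only; they are not Google Cloud APIs.

def train_in_cloud(samples):
    """'Train' a trivial defect detector in the cloud: learn a
    brightness threshold from labeled (value, is_defect) pairs."""
    defect_values = [v for v, is_defect in samples if is_defect]
    ok_values = [v for v, is_defect in samples if not is_defect]
    # Place the threshold halfway between the two groups' means.
    return (sum(defect_values) / len(defect_values)
            + sum(ok_values) / len(ok_values)) / 2

class EdgeDevice:
    """A device that runs inference locally, even while offline."""
    def __init__(self):
        self.model_threshold = None

    def receive_update(self, threshold):
        # Called when connectivity allows a model push from the cloud.
        self.model_threshold = threshold

    def infer(self, value):
        # Local inference: no round trip to the central data center.
        if self.model_threshold is None:
            raise RuntimeError("no model deployed yet")
        return value > self.model_threshold

# Cloud side: train on labeled sensor readings.
threshold = train_in_cloud([(0.9, True), (0.8, True),
                            (0.2, False), (0.1, False)])

# Edge side: deploy the model once, then infer locally.
camera = EdgeDevice()
camera.receive_update(threshold)
print(camera.infer(0.95))  # defect flagged locally, no cloud round trip
```

In a real deployment the "model" would be a trained neural network pushed to the device and executed by the Edge TPU, but the control flow, which is train centrally, push updates when connectivity allows, and infer locally, is the same.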

Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the Google Cloud Next event. (* Disclosure: Google Cloud sponsored this segment of theCUBE. Neither Google Cloud nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
