UPDATED 14:47 EDT / DECEMBER 13 2017

INFRA

AI could fly to the IoT edge on time with FPGAs

Lugging all data from “internet of things” connected devices back to the cloud for processing may work in theory or in testing, but not so well once a finished product goes live. For a product to claim artificial intelligence, it must show its stuff with on-the-spot, instant inferences; there’s no time for round trips back to the data center. That means edge hardware has to chip in on compute power.

“We need that compute in the data center, but we have to start pushing it out into the edge,” said Bill Jenkins (pictured), product line manager of AI for field-programmable gate arrays, or FPGAs, at Intel. A new class of smarter edge hardware is now needed to crunch that data where it is generated. Sprucing up devices with flexible, programmable hardware like FPGAs can help them be all they can be, he added.

“We want to make those smarter so that we can do more compute to offload the amount of data that needs to be sent back to the data center as much as possible,” Jenkins said.

He spoke with Jeff Frick (@JeffFrick), host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the Supercomputing event in Denver, Colorado. (* Disclosure below.)

FP (future proof) GAs

Much training of AI and machine learning models on big data takes place in the cloud or data centers — and that’s fine. “But now people are building products around it,” Jenkins said. That means that time-to-inference must be super short. In the case of autonomous vehicles, for instance, “where someone’s crossing the road, I’m not waiting two seconds to figure out it’s a person,” he added.
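To make that concrete (the figures below are illustrative assumptions, not numbers from the interview), a rough latency-budget sketch in Python shows why a round trip to the data center blows past the kind of reaction time Jenkins describes, while local inference on edge hardware fits inside it.

```python
# Back-of-the-envelope latency budget for a pedestrian-detection inference.
# All numbers are illustrative assumptions, not measurements from the interview.

LATENCY_BUDGET_MS = 100.0          # assumed reaction budget for "is that a person?"

# Cloud path: camera -> network -> data-center inference -> network -> vehicle
network_round_trip_ms = 80.0       # assumed WAN/cellular round trip
datacenter_inference_ms = 30.0     # assumed model runtime on a server accelerator
cloud_total_ms = network_round_trip_ms + datacenter_inference_ms

# Edge path: camera -> local accelerator (for example, an FPGA) -> vehicle
edge_inference_ms = 15.0           # assumed model runtime on edge hardware

for name, total_ms in [("cloud round trip", cloud_total_ms),
                       ("edge inference", edge_inference_ms)]:
    verdict = "within" if total_ms <= LATENCY_BUDGET_MS else "over"
    print(f"{name}: {total_ms:.0f} ms ({verdict} the {LATENCY_BUDGET_MS:.0f} ms budget)")
```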

It also changes data scientists’ and developers’ outlooks on hardware at the edge. “They realize that they don’t want to compensate for limitations in hardware; they want to work around them,” Jenkins stated.

FPGAs are one route around those limitations. For instance, once a network is trained, people often go back to retrain and may find accuracy pleasing but performance wanting. “So then they start lowering the precision,” Jenkins said. Not ideal. FPGAs’ flexibility lets users adjust a network’s internals without giving up as much precision, he added.
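As a minimal sketch of the precision-lowering step Jenkins alludes to (the layer shape and weights below are made-up stand-ins, not anything from Intel’s toolchain), this NumPy snippet quantizes a trained layer’s float32 weights down to int8 and measures the error that introduces.

```python
import numpy as np

# Minimal post-training quantization sketch: lower a trained layer's weights
# from float32 to int8 and measure what that costs. The weights here are
# random stand-ins for a real trained layer.
rng = np.random.default_rng(0)
weights_fp32 = rng.normal(0.0, 0.05, size=(256, 128)).astype(np.float32)

# Symmetric linear quantization: map [-max|w|, +max|w|] onto the int8 range.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

# Dequantize to see what the lower-precision hardware would actually compute with.
weights_dequant = weights_int8.astype(np.float32) * scale

quant_error = np.abs(weights_fp32 - weights_dequant)
print(f"scale: {scale:.6f}")
print(f"mean abs quantization error: {quant_error.mean():.6f}")
print(f"max abs quantization error:  {quant_error.max():.6f}")
```

Fixed-function accelerators are typically locked to a handful of such data types, whereas a reprogrammable fabric like an FPGA can implement arbitrary fixed-point widths, which is the flexibility Jenkins points to for trimming precision without giving up as much accuracy.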

And if FPGA users decide to go a different way later on, they can reprogram the chips. “So it gives you that future-proofing, that capability to sustain different topologies, different architectures, different precisions to kind of keep people going with the same piece of hardware without having to say, ‘Spin up a new ASIC [application-specific integrated circuit],'” Jenkins concluded.

Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the Supercomputing 2017 conference. (* Disclosure: TheCUBE is a paid media partner for the Supercomputing 2017 conference. Neither Intel, the event sponsor, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
