

Within 10 years, 99 percent of the data from “internet of things” systems will live and die at the edge of the network, according to Wikibon.com analyst David Floyer. This means that edge device hardware will have to become a lot more hospitable to data analytics for artificial intelligence applications like autonomous vehicles and digital personal assistants. Some say graphics processing units are the answer, but field programmable gate arrays, or FPGAs, may be the more powerful and versatile chips.
“It’s very difficult to actually create customized chips for specific markets,” said Scott Masepohl (pictured), director of the chief technology officer’s office, Programmable Solutions Group, at Intel.
Tailoring chips for a special purpose is quite time-consuming; by the time the task is complete, the problem in that market may already have shifted, according to Masepohl. FPGAs offer a route around this issue because they are fully programmable.
“You can actually create the solution on the fly, and if the solution’s not correct, you can go and you can actually change that,” he said.
Masepohl spoke with John Furrier (@furrier), host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, and guest host Keith Townsend (@CTOAdvisor), principal at The CTO Advisor, during the recent AWS re:Invent conference in Las Vegas, Nevada.
Aside from being customizable, FPGAs perform better than GPUs in AI use cases, Masepohl explained. “They have a lot of memory on the inside of the device, and you can actually do the compute and the memory right next to where it needs to be,” he said.
That proximity is important for keeping latency low during inference. “And there’s just a phenomenal amount of bandwidth inside of an FPGA today. There’s over 60 terabytes a second of bandwidth in our mid-range Stratix 10 [FPGA and system-on-chip] device. And when you couple that together with the unique math capabilities, you can really build exactly what you want,” he said.
FPGAs can give internet of things edge devices the data capacity they need to make analytics inferences on the spot, according to Masepohl. “You can kind of put them toward the edge so that they can actually process the data so that you don’t have to dump the full stream of data that gets generated down off to some other processing vehicle,” he said. “So you can actually do a ton of the processing and then send limited packets off of that.”
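The pattern Masepohl describes, processing data locally and forwarding only limited packets, can be sketched roughly as follows. All names, window sizes and thresholds here are hypothetical illustrations of the general idea, not any Intel API or product feature:

```python
# Hypothetical sketch of edge-side filtering: rather than streaming every raw
# sensor reading upstream, the device analyzes data locally and forwards only
# compact summary packets.

def summarize_window(readings, threshold=50.0):
    """Reduce a window of raw readings to one small summary packet."""
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "anomalies": len(anomalies),
    }

def process_stream(stream, window_size=4):
    """Yield one summary packet per window instead of the full raw stream."""
    window = []
    for reading in stream:
        window.append(reading)
        if len(window) == window_size:
            yield summarize_window(window)
            window = []

raw = [10.0, 12.0, 55.0, 11.0, 9.0, 8.0, 60.0, 7.0]
packets = list(process_stream(raw))
# Eight raw readings are reduced to two summary packets.
```

The point is only the shape of the workflow: the heavy per-reading work happens next to where the data is generated, and only a small fraction of it ever leaves the device.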
Intel has introduced tools to simplify software development on FPGAs, Masepohl concluded.
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of AWS re:Invent. (* Disclosure: Intel Corp. sponsored this segment of theCUBE. Neither Intel nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)