UPDATED 11:32 EDT / MAY 08 2017

Easy-button for analytics needed before self-driving cars’ data tsunami hits, says Intel

The average person generates almost 1.5 gigabytes of data per day, already glutting data centers. What miracle is going to make this, plus the roughly 4,000 gigabytes each self-driving car is expected to produce daily, manageable (let alone profitable)?

“Everyone is not going to be a data scientist, and everyone’s not going to be able to afford one on their payroll,” said Sandra Rivera (pictured), vice president and general manager of the Network Platforms Group at Intel Corp.

Nonetheless, as data lakes degrade into swamps and potential insights are lost, it is crucial for companies to get on the ball with big data, said Rivera, who spoke with Stu Miniman (@stu) and Rebecca Knight (@knightrm), co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during Red Hat Summit in Boston, Massachusetts. (* Disclosure below.)

While very large companies might snare the few available data scientists with generous salaries, the majority will have to settle for plug-and-play data platforms and software as the next best thing.

Intel is investing in such data solutions by building standardized sets of software interfaces and APIs and through its contributions in open source and open standards, according to Rivera.

Intel goes to bed with chips, wakes up with containers

The goal of these initiatives is not only to abstract away complexity, but also to let users put their heads together through parallel experimentation within a community.

Intel plans to build these tools with hardware, as one might expect of a chip manufacturer, but also, to a large degree, through software.

“I think that we learned that trying to dig down and get every ounce of optimization from the hardware by hard-coding to a lot of those interfaces is not the fastest way to bring a broad community of developers on board,” Rivera said.

Intel envisions a DevOps model in which a processor, a field-programmable gate array or new machine learning technology from recent acquisition Nervana Systems handles the heavy lifting down the stack, Rivera added.

But developers working on big data algorithms need not tinker with that hardware; they will work up the stack with help from containers (a virtualization method for deploying and running distributed applications), courtesy of Intel’s OpenShift collaboration with Red Hat, she said. These tools will also be applicable in the network, which is comparatively new terrain for DevOps.
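To make the container layer concrete, here is a minimal sketch of how a team might deploy a containerized analytics worker to an OpenShift or Kubernetes cluster using the official Kubernetes Python client. The image name, labels, namespace and replica count are hypothetical placeholders for illustration, not anything Intel or Red Hat specified.

```python
# Minimal sketch: deploying a containerized analytics service to an
# OpenShift/Kubernetes cluster via the official Kubernetes Python client.
# The image, labels and namespace below are hypothetical placeholders.
from kubernetes import client, config


def deploy_analytics_worker():
    # Load credentials from the local kubeconfig (e.g., created by `oc login`).
    config.load_kube_config()

    container = client.V1Container(
        name="analytics-worker",
        image="registry.example.com/analytics-worker:latest",  # hypothetical image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "analytics-worker"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="analytics-worker"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # scale out without touching the hardware layer
            selector=client.V1LabelSelector(match_labels={"app": "analytics-worker"}),
            template=template,
        ),
    )
    # Submit the deployment; the cluster schedules the containers across nodes.
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )


if __name__ == "__main__":
    deploy_analytics_worker()
```

The point of the sketch is the division of labor Rivera describes: the developer declares what should run and how many copies, while the platform, and ultimately the hardware beneath it, decides where and how.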

Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s independent editorial coverage of Red Hat Summit 2017. (* Disclosure: Red Hat Inc. sponsors some Red Hat Summit segments on SiliconANGLE Media’s theCUBE. Neither Red Hat nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
