UPDATED 23:30 EDT / SEPTEMBER 26 2016

NEWS

Could the GPU be the sleeper hit of the new cognitive computing world? | #BigDataNYC

It’s no secret that Big Data is putting heavy strain on traditional infrastructure and processing systems. And symbiotic technologies such as cognitive computing, machine learning and artificial intelligence, which an SVP at IBM recently called “Big Data on steroids,” won’t be lightening the load. Some IT professionals say these technologies will require major infrastructure changes down to the level of the central processing unit. So are they designing a new, super-speed CPU? Nope, they’re all abuzz about GPUs — graphics processing units. Aren’t those for video games or something?

To address these issues and more, SiliconANGLE Media and NVIDIA Corp. held a special event called The Future: AI-Driven Analytics, An Evening of Deep Learning (in conjunction with the BigDataNYC 2016 event), which included keynotes and panel sessions.

Scott Wiener, a member of the Board of Advisors at SQream Technologies, said that the synergistic boost GPUs can give to CPUs has been known to technologists for years now. However, he lamented that their potential is not better known outside geek circles.

“I don’t think we have done a good job of communicating to the market just how capable these systems are right now,” he told Peter Burris (@plburris) of theCUBE, from the SiliconANGLE Media team and moderator of tonight’s panel on GPU-accelerated databases, analytics and visualization, AI, ML, Spark and next-gen apps.

“We think GPUs should be everywhere,” Wiener said, going on to explain how GPU computing is going to change the game. “You get a certain number of transistors on a chip, and we used to be able to just make them go faster — just crank a knob, and they would get faster; every time you’d buy a new computer, it would be twice as fast. And that hasn’t happened in quite a while,” Wiener said.

In a parallel processing universe

The imperative now is to find new ways to solve problems of processing speed without having to build CPUs to totally impractical new standards. Wiener said, “We need to solve problems in new ways, in parallel rather than sequentially.”
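For readers who want to see what “parallel rather than sequential” means in practice, here is a minimal, illustrative CUDA sketch (ours, not from the panel; the array size and kernel are arbitrary). Where a CPU would walk the array one element at a time, the GPU assigns one thread to each element and processes them all at once.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Each GPU thread scales exactly one element. The sequential CPU
// version of this would be a single loop over all n elements.
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main(void) {
    const int n = 1 << 20;  // ~1 million elements (illustrative size)
    float *x;
    cudaMallocManaged(&x, n * sizeof(float));  // memory visible to CPU and GPU
    for (int i = 0; i < n; ++i) x[i] = 1.0f;

    // Launch enough 256-thread blocks to cover all n elements at once.
    scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n);
    cudaDeviceSynchronize();

    printf("x[0] = %f\n", x[0]);  // expect 2.0
    cudaFree(x);
    return 0;
}
```

The kernel body is trivial; the shift is that a million elements are handled by a million threads rather than a million loop iterations.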

In his work with companies that need to make huge jumps from, say, 20 terabytes to 200 terabytes, the GPU is, in his view, the only practical solution. He also said that GPUs have massive potential for older operations that are not scaling. “All of a sudden, we need to deploy a hundred servers to process a reasonably sized couple of hundred terabytes of information. Why can’t we do this on a single server?”

We can, Wiener said, with GPUs. “On a 2U server, we can process near-petabyte data sets in near real time, very low latency [...] and with interactive response times for queries,” he stated.

First comes marriage

Claudio Silva, professor of Computer Science, Engineering and Data Science at New York University, said that this type of computing is “not just using GPU as a faster CPU.” In his lab, he sees processing jumps of 6,000x. “You don’t get that just by jumping to a GPU; you get there by using the GPU in a way that you couldn’t use the CPU,” Silva clarified.

Panelist Mark Hammond, CEO and founder of Bonsai, agreed that it’s not CPU vs. GPU but the alchemy of both. “You start to see the marriage of GPU compute and CPU compute. And really, for developers, and for us to see this spread really broadly,” he said, “we have to allow them to do that in such a way that they are using their domain expertise and subject matter expertise, and they’re not becoming experts in SIMD instruction sets and CUDA and everything.”
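Hammond’s point is already visible in NVIDIA’s own tooling. In the hypothetical sketch below (ours, not shown at the event), the Thrust library that ships with the CUDA toolkit sums a column of values on the GPU in a single call; the developer never writes a kernel or touches a SIMD instruction.

```cuda
#include <thrust/device_vector.h>
#include <thrust/reduce.h>
#include <thrust/functional.h>
#include <iostream>

int main() {
    // A column of one million values, held in GPU memory.
    thrust::device_vector<long long> col(1 << 20, 1);

    // One call expresses "sum this column"; Thrust generates and
    // launches the underlying parallel kernels on the developer's behalf.
    long long total = thrust::reduce(col.begin(), col.end(),
                                     0LL, thrust::plus<long long>());

    std::cout << "sum = " << total << std::endl;  // expect 1048576
    return 0;
}
```

The domain expert states what to compute; the library decides how to spread it across thousands of threads.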

Productized processing

Bill Maimone, VP of Engineering at MapD Technologies, Inc., said these speeds aren’t a fantasy — companies are already achieving them in real life. “We have a partner, a third-party company, that’s benchmarked their product — 40 billion rows in 200 milliseconds.” That works out to roughly 200 billion rows scanned per second. He echoed Hammond and said that GPU computing needs to be productized and made user-friendly for customers. “Put the best damned technology you can in one box — and that technology is GPUs. And if that technology can’t solve your problem, then you start adding to it,” he said.

Mark Brooks, principal systems engineer at Kinetica, spoke about how his company is “putting it all in one box.” He added: “We’re very happy about NVIDIA’s NVLink [a high-bandwidth, energy-efficient interconnect] and IBM Power Architecture [server line], which lets us pull data out of main memory, feed the GPU very, very fast, and we just have a scaled-out architecture, so we can do billions of rows of ingestion per minute and scale arbitrarily wide,” he said.
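Brooks’ emphasis on the interconnect makes sense because every ingested row must first cross from host memory into GPU memory. As a rough illustration (not Kinetica’s method, just an assumed micro-benchmark), the following CUDA sketch times a bulk copy to estimate that ceiling on a given machine.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    const size_t bytes = 256UL << 20;  // 256 MB test buffer (illustrative)
    void *host, *dev;
    cudaMallocHost(&host, bytes);      // pinned host memory copies fastest
    cudaMalloc(&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Time one bulk host-to-device transfer with CUDA events.
    cudaEventRecord(start);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("host-to-GPU: %.1f GB/s\n", (bytes / 1e9) / (ms / 1e3));

    cudaFree(dev);
    cudaFreeHost(host);
    return 0;
}
```

On a PCIe 3.0 x16 link this figure tops out around 12 GB/s in practice, which is exactly the bottleneck NVLink was designed to widen.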

Seeing is disbelieving

Wiener wrapped up the panel with some examples of how GPU-CPU computing stacks up against the competition in his customers’ deployments.

“We’ll come in and we’ll do cook-offs with these entrenched, very well-known competitors, and we’ll take queries that would take hours to run on this very expensive iron, and do them in minutes or less,” he said. “And queries that would take minutes to run, we do in sub-seconds. And, really, disbelief is the primary reaction.”

Watch the complete video interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of The Future: AI-Driven Analytics, An Evening of Deep Learning.

Photo by SiliconANGLE
