UPDATED 15:00 EDT / OCTOBER 10 2018

AI

DDN and Nvidia partnership powers the AI data center

Businesses rolling out an artificial intelligence project often stare at the assembled pieces and wonder what kind of engine they'll need to power the car. Data scientists? Check. AI initiative? Check. Deployment infrastructure? Uh-oh.

DataDirect Networks Inc. has announced a partnership with Nvidia Corp. to make AI deployments simpler. DDN’s new reference architecture marries Nvidia’s DGX-1 AI servers with DDN’s parallel file storage systems.

“It is a full rack-level solution, a reference architecture that’s been fully integrated and fully tested to deliver an AI infrastructure simply and completely,” said Kurt Kuckein (pictured, left), senior director of marketing at DDN. “That’s what we’ve made easy with Accelerated, Any-Scale AI [A³I], to be able to scale that environment seamlessly within a single name space so that people don’t have to deal with a lot of tuning and turning of knobs to make this stuff work really well and drive those outcomes that they need.”

Kuckein spoke with Peter Burris (@plburris), host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, at theCUBE’s studio in Palo Alto, California. He was joined by Darrin Johnson (pictured, right), global director of technical marketing for enterprise at Nvidia. They discussed how the new solution shortens runtimes for deep learning tools, boosting data scientists’ productivity, and why streamlined data delivery matters for enterprise applications. (* Disclosure below.)

Shorter runtimes for deep learning

DDN has indicated that deep learning frameworks, such as Caffe or TensorFlow, will see shorter runtimes and higher image throughput when running on Nvidia’s DGX-1 servers. The goal is to let data scientists focus on algorithms that generate tangible benefits for the business rather than on configuring systems. The partnership was announced alongside news today that Nvidia would launch a new acceleration platform for AI.

“Data scientists don’t want to understand the underlying file system, networking, remote direct memory access, InfiniBand, any of that,” Johnson said. “They just want to be able to come in, run their TensorFlow, get the data, get the result. This solution helps bring that to customers much more easily so those data scientists don’t have to be system administrators.”
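As a rough, hypothetical sketch of the workflow Johnson describes, the example below points TensorFlow at an image dataset sitting on a shared mount and trains a stock model. The path, model, and parameters are illustrative placeholders, not part of DDN’s or Nvidia’s reference architecture.

```python
# Hypothetical sketch only -- not DDN or Nvidia reference code.
# A data scientist points TensorFlow at a dataset on a shared mount and trains;
# the storage and networking underneath are someone else's problem.
import tensorflow as tf

# Placeholder path standing in for a directory on the shared parallel file system.
DATA_DIR = "/mnt/shared/images"

# Build an input pipeline straight from the mounted directory.
train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, image_size=(224, 224), batch_size=64
).prefetch(tf.data.AUTOTUNE)

# Stock convolutional network; the workflow, not the architecture, is the point.
model = tf.keras.applications.ResNet50(weights=None, classes=1000)
model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# "Get the data, get the result": train and let the storage layer keep the GPUs fed.
model.fit(train_ds, epochs=1)
```

The pitch from both companies is that a script like this stays the same as the dataset and the number of GPU nodes grow, since the single shared namespace absorbs that scaling complexity.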

DDN’s partnership with Nvidia is designed to offer customers end-to-end parallel architecture with the lowest latency and highest throughput for feeding critical data to enterprise applications.

“In the end, it’s the application that’s most important to both of us,” Kuckein said. “It’s making the discoveries faster. It’s processing information out in the field faster. It’s doing analysis of the MRI faster.”

Watch the entire video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s CUBE Conversations. (* Disclosure: DataDirect Networks Inc. sponsored this segment of theCUBE. Neither DDN nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
