Businesses confronted with rolling out an artificial intelligence project often stare at the assembled pieces and wonder what kind of engine will be needed to power the car. Data scientists? Check. AI initiative? Check. Deployment infrastructure? Uh-oh.
DataDirect Networks Inc. has announced a partnership with Nvidia Corp. to make AI deployments simpler. DDN’s new reference architecture marries Nvidia’s DGX-1 AI servers with DDN’s parallel file storage systems.
“It is a full rack-level solution, a reference architecture that’s been fully integrated and fully tested to deliver an AI infrastructure simply and completely,” said Kurt Kuckein (pictured, left), senior director of marketing at DDN. “That’s what we’ve made easy with Accelerated, Any-Scale AI [A³I], to be able to scale that environment seamlessly within a single name space so that people don’t have to deal with a lot of tuning and turning of knobs to make this stuff work really well and drive those outcomes that they need.”
Kuckein spoke with Peter Burris (@plburris), host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, at theCUBE’s studio in Palo Alto, California. He was joined by Darrin Johnson (pictured, right), global director of technical marketing for enterprise at Nvidia. They discussed how the new solution shortens runtimes for deep learning tools, boosts productivity for data scientists, and streamlines data delivery for enterprise applications. (* Disclosure below.)
DDN has indicated that deep learning frameworks, such as Caffe or TensorFlow, will achieve shorter runtimes for image throughput when running on Nvidia’s DGX-1 servers. The goal is to let data scientists focus on algorithms that generate tangible business benefits rather than on configuring systems. The partnership announcement was followed by today’s news that a new acceleration platform for AI would be launched.
“Data scientists don’t want to understand the underlying file system, networking, remote direct memory access, InfiniBand, any of that,” Johnson said. “They just want to be able to come in, run their TensorFlow, get the data, get the result. This solution helps bring that to customers much more easily so those data scientists don’t have to be system administrators.”
DDN’s partnership with Nvidia is designed to offer customers end-to-end parallel architecture with the lowest latency and highest throughput for feeding critical data to enterprise applications.
“In the end, it’s the application that’s most important to both of us,” Kuckein said. “It’s making the discoveries faster. It’s processing information out in the field faster. It’s doing analysis of the MRI faster.”
Watch the entire video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s CUBE Conversations. (* Disclosure: DataDirect Networks Inc. sponsored this segment of theCUBE. Neither DDN nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)