Navigating the AI and HPC resurgence: Hammerspace’s role in redefining data orchestration
Artificial intelligence and high-performance computing are experiencing a resurgence in the post-COVID era.
Hammerspace Inc. sits at the center of this perfect storm, which is driving demand for its services, according to the company's CEO, David Flynn (pictured).
Applications and their data used to have a one-to-one relationship with storage. Now, data from multiple applications must be orchestrated to wherever specialized hardware is available, which requires a shared compute infrastructure. Data orchestration is a new approach to managing data that allows data to be moved and accessed without disrupting its unified identity, Flynn explained.
“The old mantra of move the compute to your data because data gravity won’t work … we have to orchestrate data to where you can do the compute, which … is very bursty too,” Flynn said. “That really ought to be shared compute infrastructure. So, that moves from a one-to-one relationship to a many-to-many relationship. That cross product is really what is forcing folks to go to a data-orchestrated world instead of the old world of having data in storage.”
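Flynn's many-to-many point is easier to see in code. The sketch below is purely illustrative; the OrchestratedFile class, its methods, and the site names are invented for this example and are not Hammerspace's API. The idea it demonstrates: the logical path is the file's unified identity, while the set of physical copies underneath it can grow and shrink as compute moves around.

```python
from dataclasses import dataclass, field

@dataclass
class OrchestratedFile:
    """One logical file in a global namespace. Its bytes may live in
    several places at once, but its identity (the path) never changes."""
    global_path: str                                   # the unified identity
    locations: set[str] = field(default_factory=set)   # physical copies

    def place_near(self, compute_site: str) -> str:
        """Ensure a copy exists at the site doing the compute,
        without renaming or re-pathing the file."""
        if compute_site not in self.locations:
            self.locations.add(compute_site)  # stand-in for a real data move
        return self.global_path               # callers keep using one name

# Many applications, many sites: the same logical file can be
# materialized wherever specialized hardware happens to be free.
f = OrchestratedFile("/projects/render/shot042.exr", {"on-prem-nas"})
f.place_near("gpu-cloud-west")
print(f.global_path, f.locations)
```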
Flynn spoke with theCUBE industry analysts Savannah Peterson and David Nicholson at SC23, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed how AI and HPC technologies are revolutionizing data storage and retrieval, enabling global data orchestration and collaboration. (* Disclosure below.)
Making data permanent
Hammerspace was installed at a movie studio, where it doubled rendering capability and supported production of Disney's "The Mandalorian" and Netflix's "Stranger Things," according to Flynn.
“We had a movie studio with 600 artists, 300 nodes in a render farm with an Isilon system that was maxed out, able to install Hammerspace in an afternoon,” Flynn said. “They had the whole studio up and running and the render farm — and it doubled the rendering capability. They were able to go to a 600-node render farm with the same Isilon.”
Data orchestration is a new paradigm for making data permanent: by keeping data in motion, it lets data outlive any particular storage system and overcome infrastructure constraints and bottlenecks, Flynn explained. Orchestration eliminates the bottleneck of moving data between data centers by combining policy-based pushes with reactive pulls to accommodate "bursty" workloads. Data can be accessed and moved at a granular level within the file system, allowing efficient retrieval and manipulation without changing the organizational structure.
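To make the push/pull distinction concrete, here is a toy model of the two motions described above. Everything in it is hypothetical: PUSH_POLICIES, the Orchestrator class, and the site names are invented for illustration, and the real system would move bytes asynchronously rather than mutate a dictionary.

```python
import fnmatch

# Hypothetical policy: proactively push render outputs to an archive site.
# Everything else is pulled on first access ("reactive pull").
PUSH_POLICIES = [("/projects/render/*", "archive-east")]

class Orchestrator:
    def __init__(self, placements: dict[str, set[str]]):
        self.placements = placements  # global path -> sites holding a copy

    def apply_push_policies(self) -> None:
        """Proactive: move data that policy says will be needed elsewhere."""
        for path, sites in self.placements.items():
            for pattern, target in PUSH_POLICIES:
                if fnmatch.fnmatch(path, pattern) and target not in sites:
                    sites.add(target)  # stand-in for a background copy

    def read(self, path: str, site: str) -> None:
        """Reactive: a burst of compute at a new site pulls data on demand."""
        sites = self.placements.setdefault(path, set())
        if site not in sites:
            sites.add(site)  # stand-in for a granular, file-level pull

orc = Orchestrator({"/projects/render/shot042.exr": {"on-prem-nas"}})
orc.apply_push_policies()                           # policy-based push
orc.read("/datasets/train.bin", "gpu-cloud-west")   # reactive pull
print(orc.placements)
```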
“What we at Hammerspace did was we fixed the NFS protocol — we introduced NFS 4.2. My CTO is the kernel maintainer of the NFS client stack in Linux. We made it tricked out,” Flynn said. “It is as high a performance or more than these exotic parallel file systems. That allows us to solve the how to deliver data at the more local scale from the storage systems into the computer array into your GPUs with GPU direct or into your special processors, your Tensor processors, all of that directly.”
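One way NFS 4.2's additions show up even in ordinary application code is server-side copy: on Linux, the copy_file_range system call (exposed in Python 3.8+ as os.copy_file_range) can let an NFS 4.2 server copy the bytes itself instead of streaming them through the client. The sketch below is a minimal example, not Hammerspace's implementation; the paths are placeholders, and whether the copy is actually offloaded depends on kernel and server support.

```python
import os

def server_side_copy(src_path: str, dst_path: str) -> int:
    """Copy a file using copy_file_range. On an NFS 4.2 mount that
    supports it, the server can perform the copy without the data
    ever crossing the wire to the client."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        remaining = os.fstat(src.fileno()).st_size
        copied_total = 0
        # copy_file_range may copy fewer bytes than requested, so loop.
        while remaining > 0:
            copied = os.copy_file_range(src.fileno(), dst.fileno(), remaining)
            if copied == 0:  # unexpected end of file
                break
            copied_total += copied
            remaining -= copied
        return copied_total

if __name__ == "__main__":
    # Placeholder paths: both files would need to be on the same NFS mount.
    n = server_side_copy("/mnt/nfs42/input.dat", "/mnt/nfs42/output.dat")
    print(f"copied {n} bytes")
```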
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of SC23:
(* Disclosure: TheCUBE is a paid media partner for SC23. Neither Dell Technologies Inc., the main sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE