IBM’s evolving storage solutions aim to pave the way for AI and hybrid cloud environments
Years ago, IBM Corp. had an event called IBM Edge. Things have certainly changed since then, whether one considers the rise of artificial intelligence in hybrid cloud environments or even the names of various products.
Take the platform originally released as General Parallel File System, later renamed IBM Spectrum Scale and, more recently, IBM Storage Scale. That matters because the base technology hasn’t changed, but the focus is constantly evolving and being enhanced, according to David Wohlford (pictured, left), worldwide senior product marketing manager of IBM Storage for AI and cloud scale at IBM.
“It’s really a high-performance parallel file and object system that has a global data platform,” he said. “That’s really what it is. It’s a whole idea of connecting data. It’s not just high performance, but it’s really this connectivity. It’s part of a global platform and a portfolio of solutions that we basically cover with our file and object.”
Wohlford and John Zawistowski (right), global systems solutions executive at Sycomp, A Technology Company Inc., discussed those details and more with theCUBE industry analysts Dave Vellante and Rob Strechay at IBM Storage Summit, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. (* Disclosure below.)
The strategy at hand
Most of the world’s data is likely stored in file format, but object is the hot growth area, so bringing the two worlds together is important for companies. Most cloud data is stored as objects, and if that’s where the data lives, that’s where AI is going to be brought. So, what’s the thinking and strategy around using AI to leverage all of that data at scale? IBM Storage Fusion is the answer, according to Wohlford.
“Fusion kind of brings things together, also more from the OpenShift or the container platform. And it’s actually built on similar technologies with IBM Storage Scale, as well as IBM Storage Ceph,” Wohlford said. “What Storage Scale and IBM Storage Ceph really bring is the platform and bringing it together. We offer basically the file and object, bringing these two protocols together onto a single platform.”
Still, even though a lot of data lives in object storage, much of it is still files. Parquet files, for instance, sit on object storage when one is using something such as Apache Spark. So, from the customer perspective, the question remains: Where is this being deployed? Are people looking to understand how to bring all their data together under one namespace?
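As a rough illustration of that pattern, here is a minimal sketch of reading Parquet data from an S3-compatible object store with Apache Spark. The endpoint, credentials and bucket path are hypothetical placeholders, and the S3A connector (hadoop-aws) is assumed to be on the classpath; nothing here is specific to IBM’s platforms.

```python
# Minimal sketch: Parquet data is addressed as objects in an S3-compatible store,
# yet Spark reads each Parquet object as if it were a file.
# Endpoint, credentials and bucket path below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("parquet-on-object-storage")
    # Assumed S3-compatible endpoint and credentials (requires the hadoop-aws/S3A connector).
    .config("spark.hadoop.fs.s3a.endpoint", "https://objects.example.com")
    .config("spark.hadoop.fs.s3a.access.key", "EXAMPLE_ACCESS_KEY")
    .config("spark.hadoop.fs.s3a.secret.key", "EXAMPLE_SECRET_KEY")
    .getOrCreate()
)

# Spark's DataFrame reader treats every Parquet object under this prefix as a file.
df = spark.read.parquet("s3a://analytics-bucket/events/")
df.printSchema()
print(df.count())
```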
“That’s the nice part about Storage Scale, right? It is a global namespace. It is the place where you can handle all the data that you need. And the fact that a lot of data is stored in object, object isn’t necessarily where you’re going to get the boost in the performance,” Zawistowski said. “You’re going to need to hydrate a Scale cluster in order to feed those GPUs and feed that AI workload and get that to where you want it to be.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the IBM Storage Summit:
(* Disclosure: IBM Corp. sponsored this segment of theCUBE. Neither IBM nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE