UPDATED 16:40 EDT / SEPTEMBER 18, 2024

The Supermicro Open Storage Summit will be a chance to discuss software-led solutions.

Three insights you might have missed from Supermicro’s Open Storage Summit

The wave of artificial intelligence adoption sweeping the technology world has propelled a number of companies to prominence in 2024, as enterprises look for software-led, high-performance computing solutions.

One such firm is Super Micro Computer Inc., which, despite recent financial headwinds, has established itself as a central player in AI deployment through its data center servers and management software. Along the way, Supermicro has built a sizable ecosystem of partners that include key enterprise players Nvidia, AMD, Intel, Nutanix, Western Digital, Seagate and Micron.

Key executives from these and other partner companies weighed in during the recent Supermicro Open Storage Summit event series, covered live on theCUBE, SiliconANGLE Media’s livestreaming studio. Supermicro executives and industry partners explored how the company has been redefining infrastructure and software-led solutions for the future of enterprise AI.

The discussion focused on the importance in today’s data-driven world of choosing the right architecture to deal with the dense computing demands that GPUs place on IT infrastructure. Enterprises are looking for scale-up options to accommodate significant increases in data stores for AI processing. (* Disclosure below.)

Paul McLeod, engineer at Supermicro, talks with theCUBE during the Supermicro Open Storage Summit.

“We have this huge portfolio; some of these servers are designed specifically for compute,” said Paul McLeod, senior field application engineer at Supermicro, in an interview during the event. “You get GPUs and things like that that can be very dense compute, and [for] other applications like storage, you’ll want to have something that is very dense. One of the things that I think differentiates our product from the others is that … it’s really designed for a scale-up type of architecture.”

Here are three key insights you might have missed during the event:

1. PCIe is becoming a popular choice for server storage technology.

Peripheral Component Interconnect Express, or PCIe, has been available since the early 2000s, and the recent AI explosion has moved it more prominently into the picture for enterprise computing.

The technology, a high-speed interface standard for connecting components inside computer systems, has shaped how Supermicro and its partners optimize storage architectures. The storage market is transitioning from hard disk drive, or HDD, technology to solid-state drives, or SSDs, which changes how servers are provisioned for AI workloads.
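A rough sense of why each PCIe generation matters for storage can be worked out from the raw link rates alone. The sketch below is illustrative arithmetic, not vendor-published drive figures: it uses the per-generation transfer rates and the 128b/130b line encoding that PCIe 3.0 and later employ to estimate one-direction bandwidth for a typical x4 NVMe SSD link.

```python
# Back-of-the-envelope PCIe throughput per direction.
# Raw rates in GT/s; PCIe 3.0 and later use 128b/130b line encoding.
GT_PER_S = {"gen3": 8, "gen4": 16, "gen5": 32}
ENCODING = 128 / 130  # usable payload fraction after line encoding

def throughput_gbps(gen: str, lanes: int) -> float:
    """Approximate one-direction bandwidth in GB/s (decimal)."""
    return GT_PER_S[gen] * ENCODING * lanes / 8  # bits -> bytes

for gen in ("gen3", "gen4", "gen5"):
    print(f"{gen} x4: {throughput_gbps(gen, 4):.1f} GB/s")
```

By this estimate, a Gen 5 x4 drive has roughly double the ~7.9 GB/s ceiling of a Gen 4 x4 drive, which is why each generational step matters for keeping GPUs fed.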

“We’ve been working with Supermicro for several years and gained a lot of traction with PCIe 4.0 SSDs, where Supermicro drove as first to market on their end through their server platforms,” said Anders Graham, director of marketing and business development for SSDs at Kioxia America Inc., during one panel session. “Then with PCIe 5.0, Kioxia was once again first to market with our data center SSDs just recently.”

Here’s theCUBE’s complete video interview with Paul McLeod and Anders Graham, who were joined by Iyer Venkatesan at Intel Corp., Jason Zimmerman at Seagate Technology LLC, Steven Umbehocker at OSNexus Corp., and Thomas Paquette at Graid Technology Inc.:

Adoption of PCIe is also highlighting the increasing influence of high-capacity drives in data center architecture. Micron’s release of its 9550 PCIe Gen 5 SSD in July showcased the growing importance of this technology in powering GPUs for AI.

Steve Hanna at Micron Technology talks with theCUBE during the Supermicro Open Storage Summit.

“It’s explicitly designed for GPU feeding, AI training and caching,” said Steve Hanna, head of product management of high-capacity NVMe SSDs at Micron Technology, during a discussion on theCUBE. “The HDD to SSD transition is a fairly recent phenomenon in these networked data lakes where before people would buy the GPUs, but they would underinvest in their storage. The data set sizes are just getting absolutely massive; you need more storage.”

Watch theCUBE’s complete video interview with Steve Hanna, who was joined by William Li at Supermicro, Rob Davis at Nvidia Corp., Shimon Ben-David at WekaIO Inc., and Jon Toor at Cloudian Inc.

Praveen Midha, director of segment and technical marketing of data center flash at Western Digital, echoed this approach in his appearance during the event. He described how, in the initial stages of AI data processing, high-capacity hard disks are often used for cold storage. As the data is prepared and ingested, SSDs take over, offering the performance needed for sequential data processing. During the model training phase, high-performance SSDs, including PCIe Gen 5 drives, are essential for handling the demanding workloads.
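That staged pipeline can be captured as a simple lookup. The mapping below is an illustrative sketch of the tiering described above; the stage names and tier labels are ours, not a vendor API.

```python
# Illustrative sketch: each stage of an AI data pipeline mapped to the
# storage class typically serving it. Labels are hypothetical, for clarity.
PIPELINE_TIERS = {
    "archive":    "high-capacity HDD (cold storage)",
    "ingest":     "mainstream NVMe SSD",
    "prepare":    "mainstream NVMe SSD",
    "train":      "high-performance PCIe Gen 5 NVMe SSD",
    "checkpoint": "high-performance PCIe Gen 5 NVMe SSD",
}

def tier_for(stage: str) -> str:
    """Return the storage tier for a pipeline stage."""
    return PIPELINE_TIERS.get(stage, "unknown stage")

print(tier_for("train"))
```

The point of the sketch is that cost-per-terabyte and performance trade off stage by stage: cold data stays on spinning disks, while the training loop gets the fastest flash available.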

Watch theCUBE’s complete video interview with Praveen Midha, who was joined by Wendell Wenjen at Supermicro, Christine McMonigal at Intel, and Oscar Wahlberg at Nutanix Inc.

2. Companies are building new infrastructure to support an influx of new applications and prepare for the growth of edge computing.

A key theme during the Summit was how Supermicro and its partners have been focused on building new infrastructure to support an influx of applications not previously seen in traditional architectures. One example is NVLink, Nvidia's high-speed, wire-based interconnect for short-range, chip-to-chip GPU communication. Supermicro and Nvidia have partnered on a solution for supporting fully connected GPUs.

Rob Strechay of theCUBE talks with panel members during the Supermicro Open Storage Summit.

“Some of the different applications that we see in this space are LLMs, large language models, GNNs, graph neural networks, and RAG, for retrieval augmented generation,” said CJ Newburn, distinguished engineer at Nvidia, during a conversation on theCUBE. “Those new applications really need new infrastructure. They operate at a big scale. You can see the Supermicro NVLink rack-level integration, where that whole rack acts as one GPU, all NVLink connected.”
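Some quick arithmetic conveys the scale such an NVLink domain operates at. The figures below are Nvidia's published numbers for fourth-generation NVLink on Hopper-class GPUs (18 links per GPU at 50 GB/s bidirectional each); treat the sketch as illustrative rather than a spec for any particular Supermicro rack.

```python
# Rough arithmetic for a fully connected NVLink domain, using Nvidia's
# published fourth-generation NVLink figures (Hopper-class GPUs).
LINKS_PER_GPU = 18
GBPS_PER_LINK = 50  # bidirectional, GB/s

def gpu_nvlink_bandwidth() -> int:
    """Total NVLink bandwidth available to a single GPU, in GB/s."""
    return LINKS_PER_GPU * GBPS_PER_LINK

def domain_aggregate(num_gpus: int) -> int:
    """Aggregate NVLink bandwidth across an NVLink-connected domain."""
    return num_gpus * gpu_nvlink_bandwidth()

print(gpu_nvlink_bandwidth())  # per-GPU total
print(domain_aggregate(8))     # typical 8-GPU HGX-style node
```

At 900 GB/s per GPU, an 8-GPU node already moves multiple terabytes per second over NVLink alone, which is what lets a rack-level integration behave, as Newburn puts it, like one large GPU.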

Watch theCUBE’s complete video interview with CJ Newburn, who was joined by Randy Kreiser at Supermicro, Balaji Venkateshwaran at DataDirect Networks Inc., and Bill Panos at Solidigm.

The development of new stacks extends to the edge, where chipmaker Intel is collaborating with partners such as Nutanix on solutions and configurations for running hyperconverged infrastructure, or HCI, in edge environments.

“The edge is a very popular place to deploy HCI, because you’ve got compute and storage being deployed together, being able to manage those together and deploy remotely as well if you don’t have IT staff on site,” said Christine McMonigal, director of hyperconverged marketing at Intel, during one panel session. “The shift that I’m seeing in edge is that people want even more capable infrastructure at the edge to be able to run those analytics and AI workloads locally.”

Watch theCUBE’s complete video interview with Christine McMonigal.

This shift is leading vertical industries, such as the media and entertainment field, to redefine the edge, where geographic locations become just as significant as the devices themselves.

“Customers need to process wherever … it could be at the edge,” said Skip Levens, product marketing, AI strategy, at Quantum Corp. in an appearance on the program. “Let’s say London captures it, moves it to an object store, but then perhaps L.A. needs to pick it up. It’s really a matter of blending all of these. My favorite word for it is agility … if you have a system that kind of locks you into a certain way of working, that may work today, it won’t work tomorrow because this stuff is changing so, so quickly.”

Here’s theCUBE’s complete video interview with Skip Levens, who was joined by Sherry Lin at Supermicro, Paul Blinzer at Advanced Micro Devices Inc., and Praveen Midha:

3. Speed and simplicity are the mantras for the future of the storage industry.

Data throughput and scalability are becoming essential to organizations seeking solutions for rapidly growing applications such as generative AI and in-memory databases. One technology that has emerged as a path to higher speeds and lower latencies is CXL, or Compute Express Link. Since its introduction by an industry consortium in 2019, CXL has offered a unified interface standard for data center architectures.

Andrey Kudryavtsev, manager at Micron, talks with theCUBE during the Supermicro Open Storage Summit.

“It has so many different applications, and memory application is just one of these,” said Andrey Kudryavtsev, senior manager of CXL business development at Micron Technology, during a discussion on theCUBE. “What I’m saying is not only applicable to AI, it’s applicable to all generic compute.”

Watch theCUBE’s complete video interview with Andrey Kudryavtsev, who was joined by Puneet Anand at Supermicro, Anil Godbole at Intel, and Steve Scargall at MemVerge Inc.

While technologies such as CXL address the need for speed, the storage industry is also on a quest for simplicity. Supermicro and AMD have been working with Quantum to build a storage file system, Myriad, that combines NVMe, RDMA and Layer 3 intelligent fabric networking to simplify storage for AI and ML content enhancement workflows.

Partnership initiatives such as these can improve performance across AI data lifecycles, enhancing efficiency by simplifying how new applications are built.

“As use cases and data types explode, the goal is to make it simpler and abstract away the complexity from the customer so the customer can focus on running their applications and doing what they like to do best,” said Balaji Venkateshwaran, vice president of product management at DataDirect Networks Inc., during his appearance on the broadcast. “What we are doing here is working behind the scenes to abstract away all that complexity for the customers.”

Here’s theCUBE’s complete video interview with Balaji Venkateshwaran and his fellow panelists:

To watch more of theCUBE’s coverage of the Supermicro Open Storage Summit, here’s our complete event video playlist:

(* Disclosure: TheCUBE is a paid media partner for the Supermicro Open Storage Summit event series. Neither Super Micro Computer Inc., the sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Image: SiliconANGLE
