UPDATED 13:56 EDT / APRIL 08 2026


Solidigm targets the AI bottleneck with advanced storage tech and ecosystem partnerships

The far-reaching impact of the artificial intelligence revolution has spilled over into many parts of the stack, where memory bottlenecks can slow AI inferencing and other key tasks. As SiliconANGLE’s analysts have noted in recent weeks, the “memory supercycle” is upon us, and how the technology world responds will affect much of AI’s future progress. The storage industry is poised to have a major say in how quickly solutions are deployed.

In the push to build agents, train models and drive toward “superintelligence,” it seems as though much of the enterprise world has missed an important roadblock: AI is a memory hog. A typical AI server uses roughly eight times more memory than a traditional server, and this is leading to bottlenecks in performance. To solve this problem, key players in the storage industry have been working on technologies to keep memory from being an obstacle while architecting for the future computing demands that AI will undoubtedly bring.

“This is a memory hierarchy problem,” said Alan Bumgarner, director of strategic planning for the Data Center Group and AI technologist at Solidigm, a trademark of SK Hynix NAND Product Solutions Corp., in an interview with theCUBE, SiliconANGLE’s livestreaming studio. “Think about it: The further out you go, the more dense the memory is and the slower it is. And as you get closer to the GPU, it gets smaller and faster. But this whole thing needs to start lifting and making that happen and making it work in parallel and making it work at scale. It’s going to take a lot of work.”

This feature is part of SiliconANGLE Media’s exploration of the architectural shifts powering continuous, production-grade AI. Be sure to check out SiliconANGLE’s extensive coverage of Vast Forward 2026, including interviews with executives from Vast, Solidigm and many other key industry leaders. (* Disclosure below.)

New storage industry solutions

Solidigm, formerly Intel Corp.’s NAND and solid-state drive business, was created when South Korea’s SK Hynix Inc. acquired the unit in a deal announced in 2020. Now folded into SK Hynix’s non-volatile flash memory, or NAND, operations, Solidigm is focused on bits-per-cell scaling. The company’s high-density quad-level cell, or QLC, drives store four bits in each NAND cell, packing more data per package at improved power efficiency.

The company’s executives have attributed Solidigm’s advantage to its investment in the floating-gate method for NAND flash cells. Each cell holds electrons on a well-isolated conductive “floating gate,” which helps push toward higher bits per cell and larger capacities. The design also reduces the risk of disturbing adjacent cells, an important consideration now that AI requires massive amounts of data storage.
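The capacity benefit of bits-per-cell scaling comes down to simple arithmetic: each step up the cell hierarchy stores one more bit in the same physical cell. A minimal sketch (the function and names here are illustrative, not Solidigm's own figures):

```python
# Bits stored per NAND cell at each cell type.
CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def relative_capacity(cell_type: str, baseline: str = "TLC") -> float:
    """Capacity of a die using `cell_type`, relative to the same die built as `baseline`."""
    return CELL_TYPES[cell_type] / CELL_TYPES[baseline]

# QLC holds 4 bits per cell vs. TLC's 3, i.e. one-third more data in the same silicon.
print(f"QLC vs TLC: {relative_capacity('QLC'):.0%} of baseline capacity")
```

In other words, moving a design from triple-level to quad-level cells yields roughly 33% more capacity from the same die area, which is why QLC is attractive for the high-density, power-constrained deployments described here.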

Solidigm is rolling out more robust solid-state drive technology as well. The firm unveiled a 122-terabyte SSD last fall, which can store the equivalent of the Beatles’ entire song catalog … more than 144,000 times over.
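The headline comparison can be sanity-checked with back-of-the-envelope arithmetic. A quick sketch (the per-catalog audio assumptions are ours, not figures from the article):

```python
TB = 10**12  # SSD capacities are marketed in decimal terabytes
drive_bytes = 122 * TB
copies = 144_000

# Implied size of one copy of the catalog on the drive.
bytes_per_catalog = drive_bytes / copies
print(f"Implied catalog size: {bytes_per_catalog / 1e9:.2f} GB")  # ~0.85 GB

# Assumption (not from the article): compressed audio at 192 kbps.
kbps = 192
hours = bytes_per_catalog * 8 / (kbps * 1000) / 3600
print(f"At {kbps} kbps, that is roughly {hours:.1f} hours of audio")
```

About 0.85 GB per copy works out to roughly ten hours of compressed audio at that bit rate, which is in the right ballpark for the Beatles’ studio catalog, so the 144,000-times figure holds up.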

“What our engineers are doing in the lab is battling the laws of physics to try to get to that next density point,” explained Ace Stryker, director of AI and ecosystem marketing at Solidigm, in a recent conversation with theCUBE. “We launched 122 terabyte SSD last year; we’ve announced our ambitions to double that in the near future. That has very real implications for energy efficiency, which we hear from our customers is the key constraint and the key concern.”

Meeting inferencing demand

Solidigm’s go-to-market strategy is being driven, in part, by surging demand for AI inferencing. As Nvidia Corp. Chief Executive Jensen Huang emphasized in his keynote at GTC in San Jose last month, AI inference, the process of getting answers from trained models, has reached an inflection point, and the AI factory will soon drive much of the global economy.

This development has followed the model training phase for AI. Now that the world’s data has been absorbed into foundation models, the focus for Solidigm has been on how to satisfy significant inferencing demand.

“You had these model developers saying, ‘Well, there’s no more public data really available on the planet to throw at these models, so we’ve got to work with what we have,’” Stryker told theCUBE. “But what we’ve seen since then is an explosion, not on the training side, but on the inference side, really driving massive demand for DRAM and NAND bits.”

This demand has led to an intersection of the storage world with key processor technology providers such as Nvidia. The chipmaker’s introduction of basic context memory infrastructure in its Vera Rubin SuperPod has raised the curtain on a new “flash multiplier” for inference. This means that GPU deployments will drive demand for high-density, power-efficient SSDs deployed inside (or immediately adjacent to) the pod.

As Solidigm’s Stryker noted, the emphasis placed on inferencing is driving a rearchitecting of storage infrastructure.

“You have some elements of the data pipeline within inference that are really generating a ton of incremental data as well,” he said. “All that has a storage cost; it’s got to live somewhere. These models with these context windows that are just growing and growing, and these longer loops, more iterations in a given interaction with a model, all of that has incredible storage implications.”

Flash memory drives Pixar animation

The growing challenges and complexity of how storage deals with AI might seem far removed from daily life, but this is actually playing out in movie theaters, TV screens and mobile streaming platforms in the form of “Toy Story” animated characters such as Woody and Buzz Lightyear.

Pixar Animation Studios, creator of many popular animated feature films over the years, depends on a storage solution jointly developed by Vast Data and Solidigm to process billions of pixels and textures. This must happen fast enough for Pixar’s animators to iterate in real time without being stalled by slow-moving hardware.

“When the storage hangs, basically the entire render process and all the interactive processes can stop working,” explained Eric Bermender, head of data infrastructure and platforms at Pixar, in a recent conversation with theCUBE. “Then the last thing you want is artists outside playing Frisbee, because everyone is waiting for the storage to come back online. But we’ve had great collaborations with our partners, with [Vast Data], to try to harden that storage as much as we possibly can.”

Solidigm’s flash memory solution plays a key role in Pixar’s animation as well. The firm’s advanced flash design enables Pixar’s artists to render frames with minimal latency, as close to real time as possible.

“The reason why the flash layer is very important to us is because latency is one of [the things] our artists are most sensitive to,” said Bermender. “The performance has to be guaranteed. We have to hit a certain less-than-a-millisecond latency in order for the artist to feel like they’re iterating at the speed they want to iterate at.”

Partnerships shape AI infrastructure

The collaboration between Vast and Solidigm on Pixar’s storage solution highlights how the two firms have worked together closely to rearchitect compute infrastructure for demanding enterprise needs. Vast systems start at hundreds of terabytes of flash storage and they rely on Solidigm QLC SSDs to help them provide the capacity that customers require.

This has become especially important with the demand placed on compute infrastructure for enterprise AI.

“Relationships like the Vast–Solidigm partnership are important and timely because AI infrastructure is running into the hard reality that data movement is increasingly becoming a cost issue,” said Dave Vellante, co-founder and chief analyst of theCUBE Research. “With flash supply tightening and costs under pressure, the winners will be the platforms that can squeeze more useful I/O out of every NAND dollar and keep GPUs fed without wasting cycles. In our view, optimizing flash efficiency and the end-to-end data path is quickly becoming a first-order design requirement for AI factories.”

Solidigm has also relied on other strategic partnerships to advance storage technologies. The October launch of the AI Central Lab, in collaboration with MinIO Inc., united Solidigm’s SSDs with MinIO’s object storage in a platform designed to boost GPU efficiency.

“We’ve had a fundamental shift in the way we do SSD benchmarking,” said Avi Shetty, senior director of AI enablement and partnerships at Solidigm, in an appearance on theCUBE. “The storage is talking to the GPUs at 800 gigabit with Solidigm SSDs and then with our partners like MinIO, [they] will manage that cluster for us.”

Solidigm has also partnered with platform builder AIC Inc. to integrate SSDs into AIC’s server and storage chassis to strengthen data-intensive processing for AI inference. The collaboration offers yet another example of how Solidigm has positioned its product portfolio to meet the ever-shifting needs of a rapidly evolving AI market.

“The name of the game has really been how you do more with the resources that are available to us,” said Solidigm’s Stryker. “From a Solidigm point of view, that involves fine-tuning the product roadmap to make sure that we have different swim lanes of products to serve needs and to be able to deliver as much as possible to the market in a constrained environment. I view Solidigm’s biggest imperative here as continuing to apply a lot of effort to understand what’s going on in these workloads and make sure that we’re delivering products that serve those well.”

Image: SiliconANGLE/ChatGPT
