UPDATED 11:55 EDT / OCTOBER 08 2025


Neocloud strategies reshape the future of compute efficiency

As enterprises push into next-generation computing, the old rules of cloud no longer apply. In their place comes what has been dubbed the neocloud, a specialized layer of infrastructure built for scale and security, engineered to handle workloads well beyond yesterday’s limits.


Cirrascale CEO and CTO Dave Driggers discusses neocloud economics with theCUBE at theCUBE + NYSE Wired: AI Factories – Data Centers of the Future.

At the core of an ever-demanding infrastructure market is a divide. Training massive artificial intelligence models requires cutting-edge hardware, while inference demands something different: cost efficiency, low latency and the ability to scale resources closer to where people live and work. That divide is what makes neocloud infrastructure appealing for balancing the needs of very different models, according to Dave Driggers (pictured), chief executive officer and chief technology officer of Cirrascale Cloud Services.

“All models are not just created equal,” he said. “You’ve got small models, medium-sized models, large models and gigantic models … The bigger the model, the bigger the hardware that needs to be utilized. The smaller the model, the only way to drive the cost and the performance and scale is through smaller hardware.”
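To make that sizing logic concrete, here is a minimal sketch in Python; the tiers, parameter thresholds and hardware labels are illustrative assumptions for the sake of example, not Cirrascale's actual placement policy.

```python
# Illustrative sketch only: the thresholds and hardware classes below are
# assumptions for the example, not Cirrascale's actual placement policy.

def pick_hardware_tier(model_params_billions: float) -> str:
    """Map a model's parameter count to a hypothetical hardware class."""
    if model_params_billions < 10:
        return "single mid-range GPU or CPU inference node"   # small models
    if model_params_billions < 70:
        return "single high-memory GPU"                        # medium models
    if model_params_billions < 400:
        return "multi-GPU server (8x accelerators)"            # large models
    return "multi-node cluster with high-speed interconnect"   # gigantic models


if __name__ == "__main__":
    for size in (7, 70, 405, 1800):
        print(f"{size}B parameters -> {pick_hardware_tier(size)}")
```

The point is the shape of the mapping, the right-sizing Driggers describes, rather than the specific cutoffs.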

Driggers spoke with theCUBE’s Dave Vellante at theCUBE + NYSE Wired: AI Factories – Data Centers of the Future event, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed how repurposing hardware from training to inference underpins neocloud economics.

Neocloud strategies for AI infrastructure

Neocloud — specialized, vertically integrated compute and networking built for AI workloads — is carving out a niche by optimizing the lifecycle of graphics processing units. That means designing systems for high-intensity training, which are often refreshed every 12 to 18 months as Nvidia Corp. accelerates its AI GPU release schedule, so they can later find a second life powering inference. Even if the GPUs aren’t the most efficient for smaller models, once depreciated they deliver cost advantages that make reuse practical, according to Driggers.

“Our goal and our requirement is we have to build to repurpose that equipment into inferencing,” he said. “It’s the second life; the long tail for that equipment. It may not be ideal for inferencing when it first launches, but once it gets depreciated and in its second life, our main thing we deal with it is repurpose it toward inferencing.”
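A back-of-the-envelope sketch of that second-life math follows; every figure in it, including purchase price, depreciation schedule, power draw, electricity price and utilization, is an assumed placeholder rather than a number from Cirrascale.

```python
# Back-of-the-envelope sketch of second-life GPU economics.
# All figures are assumed placeholders, not Cirrascale's numbers.

def hourly_cost(capex_usd: float, depreciation_years: float,
                power_kw: float, power_usd_per_kwh: float,
                utilization: float) -> float:
    """Effective cost per utilized GPU-hour: amortized capex plus power."""
    hours = depreciation_years * 365 * 24
    amortized_capex = capex_usd / hours
    power = power_kw * power_usd_per_kwh
    return (amortized_capex + power) / utilization


if __name__ == "__main__":
    # First life on training duty: the full purchase price is still being amortized.
    new_gpu = hourly_cost(capex_usd=30_000, depreciation_years=3,
                          power_kw=1.0, power_usd_per_kwh=0.08,
                          utilization=0.9)
    # Second life on inference: most of the capex is written down,
    # so the effective rate is dominated by power and operations.
    depreciated_gpu = hourly_cost(capex_usd=3_000, depreciation_years=3,
                                  power_kw=1.0, power_usd_per_kwh=0.08,
                                  utilization=0.6)
    print(f"new GPU:         ~${new_gpu:.2f} per GPU-hour")
    print(f"depreciated GPU: ~${depreciated_gpu:.2f} per GPU-hour")
```

Under those placeholder numbers the largely written-down GPU lands at a fraction of the new unit's effective hourly cost, which is the economic case for routing it to inference rather than retiring it.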

That model of repurposing doesn’t just serve providers: It defines the limits of what enterprises can realistically take on themselves. Building large-scale, real-time systems internally is costly and inefficient, leaving companies reliant on neocloud providers that can balance the very different demands of training and high-stakes inference, Driggers explained.

“With training, I may have one person running 1,000 GPUs by themselves,” he said. “[If the] network gets cut, nobody even notices it. When it’s inferencing, that network is mission critical; totally different animal.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of theCUBE + NYSE Wired: AI Factories – Data Centers of the Future event:

Photo: SiliconANGLE
