UPDATED 16:53 EDT / MARCH 09 2026

Felix Ejeckam, co-founder and CEO of Akash Systems, talks to theCUBE about how diamond cooling is pushing compute capacity further without expanding energy supply during theCUBE + NYSE Wired: AI Factories – Data Centers of the Future interview series.

Diamond cooling emerges as a new lever for AI data center efficiency

AI infrastructure is running into a hard physical barrier, and diamond cooling is emerging as a new way to push more compute through existing power limits.

As AI systems grow larger and hotter, the challenge is no longer simply installing more GPUs but managing the heat and electricity they demand. Companies such as Akash Systems are applying diamond-based thermal materials directly to server GPUs, including those built with Nvidia chips, to lower temperatures and squeeze more work out of the same data center power footprint, according to Felix Ejeckam (pictured), co-founder and chief executive officer of Akash Systems.

“Akash Systems is a deep tech company based in the Bay Area that has solved the heat problem in data centers and AI,” Ejeckam said. “Your listeners have probably heard untold stories of the challenges with the limited supply of energy in the data center world. Not only is the limited supply of energy a problem, but also the efficient use of that energy is a major challenge. At Akash Systems, what we have done is use the world’s most thermally conductive material, diamond, to solve that problem so that existing users and new users can efficiently use their energy for their needs in compute.”

Ejeckam spoke with theCUBE’s Gemma Allen for theCUBE + NYSE Wired: AI Factories – Data Centers of the Future interview series, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed how Akash Systems is using diamond cooling technology to address energy and heat constraints in AI data centers.

Diamond cooling and the limits of AI infrastructure

Thermal management is quickly becoming one of the defining engineering challenges of large-scale AI infrastructure. GPUs used for training and inference generate significant heat, forcing operators to dedicate large amounts of electricity to cooling rather than computation. Akash Systems positions diamond cooling as a way to reduce those thermal constraints and reclaim capacity inside existing data centers, Ejeckam emphasized.

“Today we take diamonds, synthetic diamond, lab-grown diamonds and we apply it to the GPU in a server,” he said. “The GPU is the hottest chip in a typical server and that brings a temperature down. The application of the diamond brings it down by 10, 15 degrees Celsius. That then leads to opportunities.”

Lower chip temperatures translate into both operational savings and additional compute capacity. Data center operators typically allocate large portions of their available power budget to cooling systems. Reducing that demand can allow operators to redirect energy toward running more workloads rather than building new infrastructure, Ejeckam noted.

“The fact that you don’t need as much power to cool down that server, we take that extra power and we can give it to the operator either in savings or in additional computes that you can fill back into that data center,” he added. “Typically speaking, we’re talking about a million dollars is how much our technology gives to every operator of a server per server.”
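The arithmetic behind that claim can be sketched with the standard power usage effectiveness (PUE) metric, which is the ratio of total facility power to the power that actually reaches the IT equipment. The figures and the helper function below are purely illustrative assumptions for this sketch, not numbers from Akash Systems:

```python
# Hypothetical back-of-envelope: power freed when cooling demand drops.
# All inputs are illustrative assumptions, not Akash Systems figures.

def reclaimable_power_kw(total_kw, pue, cooling_reduction):
    """Estimate IT power freed when the cooling load shrinks.

    total_kw:          fixed facility power budget
    pue:               power usage effectiveness (total power / IT power)
    cooling_reduction: fraction of the non-IT overhead eliminated
    """
    it_kw = total_kw / pue           # power reaching the servers today
    overhead_kw = total_kw - it_kw   # cooling and other non-IT load
    return overhead_kw * cooling_reduction

# A 1 MW facility at PUE 1.5 spends about 333 kW on overhead; trimming
# that cooling demand by 30% frees roughly 100 kW for more compute.
freed = reclaimable_power_kw(total_kw=1000, pue=1.5, cooling_reduction=0.30)
print(f"{freed:.0f} kW reclaimed")  # → 100 kW reclaimed
```

Under these invented inputs, the freed power can either show up as an energy bill reduction or be reinvested in additional servers within the same fixed budget, which is the trade-off Ejeckam describes.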

The pressure to extract more output from existing energy supply is shaping how infrastructure companies think about AI expansion. With power availability constrained in many regions, technologies that increase efficiency can have an outsized impact on how quickly operators scale. Akash Systems has begun deploying diamond-cooled servers using GPUs from vendors including Nvidia and AMD, reflecting how the approach can integrate into the broader hardware ecosystem.

“It is ultimately energy. The compute problem is an energy problem,” Ejeckam explained. “There is a fixed supply of energy in the world and so there is a mad dash rush to go and grab that energy and make the best of it. We’re saying to participants in the marketplace, ‘You’ve got a fixed amount of energy for your data center, a megawatt, a gigawatt, whatever. Rather than go build another gigawatt plant, how about we help you double the capacity in that existing infrastructure?’”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of theCUBE + NYSE Wired: AI Factories – Data Centers of the Future interview series:

Image: SiliconANGLE
