Artificial intelligence computing infrastructure startup d-Matrix Corp. today unveiled JetStream, a custom network card designed from the ground up to support high-speed, ultra-low-latency AI inference in data centers.
As the adoption of generative AI increases, particularly with larger and more complex models that can reason and produce multimodal content, data centers are increasingly distributing these models across multiple machines. This distribution necessitates not only powerful computing capabilities for each machine but also high-speed networking between them.
“JetStream networking comes at a time when AI is going multimodal, and users are demanding hyper-fast levels of interactivity,” said d-Matrix co-founder and Chief Executive Sid Sheth. “Through JetStream, together with our already announced Corsair compute accelerator platform, d-Matrix is providing a path forward that makes AI both scalable and blazing fast.”
Corsair is the company’s line of AI inference compute cards designed for standard data center rack servers. It aims to address the economics of running AI at large scale by providing extremely fast integrated memory that scales better than graphics processing units, currently the standard hardware for deploying AI.
AI inference is the process of using a trained model to make decisions and predictions based on new data, essentially what happens whenever an AI model runs. Major cloud infrastructure providers, such as Microsoft Corp. and Google Cloud, have continually faced challenges in delivering ultra-fast AI at high capacity because of performance limitations.
Sheth said that with cards that resolve memory and compute bottlenecks now in hand, the company is tackling the next biggest issue: networking.
JetStream is a full-height PCIe Gen5 card that plugs directly into a standard slot in a data center server and delivers up to 400 gigabits per second of bandwidth. It is designed to work with off-the-shelf Ethernet switches and deploy easily within a data center, without requiring operators to replace hardware or build extra supporting infrastructure.
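As a rough, illustrative sanity check (the figures below are general PCIe characteristics, not d-Matrix specifications), a PCIe Gen 5.0 x16 slot offers on the order of 500 Gb/s of usable bandwidth per direction, which comfortably accommodates the card's stated 400 Gb/s line rate:

```python
# Approximate bandwidth check for a 400 Gb/s NIC in a PCIe Gen 5.0 x16 slot.
# All figures are rough public PCIe numbers, not vendor specifications.

GBPS_PER_LANE = 32           # PCIe Gen 5.0 signaling rate: 32 GT/s ≈ 32 Gb/s per lane
LANES = 16                   # x16 slot
ENCODING_EFFICIENCY = 128/130  # 128b/130b line encoding overhead

pcie_gbps = GBPS_PER_LANE * LANES * ENCODING_EFFICIENCY  # usable Gb/s per direction
nic_gbps = 400                                           # JetStream's stated line rate

print(f"PCIe Gen5 x16 usable: ~{pcie_gbps:.0f} Gb/s per direction")
print(f"NIC line rate:         {nic_gbps} Gb/s")
print(f"Headroom:             ~{pcie_gbps - nic_gbps:.0f} Gb/s")
```

The headroom (before protocol overheads such as transaction-layer packet headers, which shave off a further slice) is one reason a single Gen5 x16 card can sustain 400 Gb/s without an exotic host interface.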
“We did not want to build something exotic in terms of interfacing with the ecosystem and we wanted it to be aligned with the standards,” explained Sree Ganesan, d-Matrix’s vice president of product. “This is what our customers are asking for: Can I just plug-and-play with what’s in my data center? This is us saying: Yes, you can continue to use your standard Ethernet switches.”
The company said that, combined with its Corsair accelerators and Aviator software, the new card can deliver up to 10 times the speed, three times the cost performance and three times the energy efficiency of GPU-based offerings.
D-Matrix said samples are available now, with full production of the cards expected by the end of the year.