INFRA
Chipmaker SambaNova Systems Inc. unveiled its most advanced artificial intelligence processor today as it closed on a bumper $350 million late-stage round of funding from Vista Equity Partners, Cambium Capital and others.
Alongside the funding, SambaNova said it’s going to collaborate with Intel Corp. on the development of new, high-performance and cost-effective systems for AI inference. It’s intended to give enterprises an alternative to the graphics processing units that power most workloads today.
SambaNova’s $350 million Series E round saw “strong participation” from Intel’s investment arm, Intel Capital. A host of other investors joined in too, including Assam Ventures, Battery Ventures, Gulf Development Public Company Limited, Mayfield Capital, QIA, Saudi First Data, Seligman Ventures, T. Rowe Price, &E, 8Square, Atlantic Bridge, BlackRock, GV, Nepenthe, Nuri Capital and Redline Capital.
SambaNova has positioned itself as a rival to Nvidia. It develops high-performance computer chips that lend themselves to AI model training and inference. Its chips can be accessed via the cloud or deployed on-premises in its own hardware appliances. One of its biggest selling points is power efficiency: SambaNova claims that its chips can generate more tokens per kilowatt hour than comparable processors from rivals.
The new SN50 chip announced today promises to improve inference performance dramatically. According to SambaNova, it delivers five times more compute and four times greater networking bandwidth than its previous-generation SN40 chipset. Customers will be able to link up to 256 accelerators over a blazing-fast, multi-terabit-per-second interconnect, enabling them to support much bigger and longer-context AI models with greater throughput and responsiveness, without escalating their compute costs, the company said.
SambaNova said the SN50 differs markedly from Nvidia’s GPUs. Technically speaking, it’s a Reconfigurable Dataflow Unit, or RDU, essentially a specialized AI accelerator chip that’s more similar to Google LLC’s tensor processing units or Amazon Web Services Inc.’s Trainium chips. It’s designed for high-performance training and inference of massive, trillion-parameter large language models.
The SN50 is based on a three-tier memory architecture that can support AI models with up to 10 trillion parameters and context lengths of up to 10 million tokens, enabling deeper reasoning and more intelligent autonomous systems than previously possible. SambaNova also claims a lower cost per token, thanks to the chip’s resident multimodel memory and agentic caching capabilities that optimize power efficiency.
The SN50 is targeted at applications such as AI voice assistants that demand ultra-low latency to run in real time. The company said it will be able to power thousands of simultaneous sessions.
“AI is no longer a contest to build the biggest model,” said SambaNova co-founder and Chief Executive Rodrigo Liang. “The real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud.”
Liang previously appeared on SiliconANGLE Media’s livestreaming studio theCUBE, where he spoke in depth about the company’s RDU architecture and why it excels at AI inference.
SambaNova can already light up data centers today, Liang said, but to keep doing so in the future, the company has decided to work much more closely with Intel.
The collaboration sees Intel invest in the startup to accelerate the deployment of a new, Intel-powered “AI cloud” that’s based on the existing SambaNova Cloud platform. Intel will enhance SambaNova Cloud with its Xeon central processing units to help create a more efficient infrastructure that’s optimized for multimodal large language models.
Intel’s Xeon CPUs excel at general-purpose computing and managing system operations, while the SN50 is optimized for rapidly processing large datasets and performing complex calculations. Combining them in a single cloud would allow more efficient task distribution, improving latency, throughput and the overall performance of AI workloads.
Intel said it will be able to accelerate SambaNova’s cloud expansion in other ways too, by providing reference architectures and deployment blueprints, and through its partnerships with software vendors and systems integrators. Once it’s ready, Intel and SambaNova plan to co-market and co-sell the new platform by leveraging Intel’s existing relationships with enterprises and its partner channels.
The partnership holds a lot of promise for both companies. SambaNova can benefit from Intel’s global reach and manufacturing base to scale its AI processors, while Intel is getting the chance to finally make its mark on a market that has largely passed it by. Until now, Intel has been unable to compete with Nvidia and other chipmakers, such as Advanced Micro Devices Inc., in the AI industry. SambaNova’s powerful SN50 chips, coupled with Intel’s Xeon processors, can potentially change that story.
Constellation Research analyst Holger Mueller said it’s still possible for Intel, with the help of SambaNova, to make a splash in the AI chip market. “Nvidia gets all of the attention and has most of the market share, but AI models don’t actually care about who makes the chip they’re running on,” he said. “They care about performance. If SambaNova and Intel’s inference platform is competitive, the biggest challenge will be to show companies that’s the case and convince them to use it, instead of Nvidia’s GPUs.”
It’s thought that the companies have been planning this collaboration for some time. Indeed, reports emerged in December that Intel was even considering buying SambaNova outright, with Bloomberg saying the company was mulling an offer in the region of $1.6 billion. It’s not known if Intel actually tabled such an offer, but it seems unlikely SambaNova would have agreed, since the amount was only a third of its valuation following its previous funding round in 2021.
Kevork Kechichian, executive vice president and general manager of Intel’s data center group, said there’s a fantastic opportunity in the AI data center market. “Customers want more choice and more efficient ways to scale AI,” he said. “By combining Intel’s leadership in compute, networking and memory with SambaNova’s full-stack AI systems and inference cloud platform, we are delivering a compelling option for organizations looking for GPU alternatives.”