UPDATED 11:00 EST / SEPTEMBER 14 2022


SambaNova Systems gives its integrated AI hardware and software platform a massive performance boost

Artificial intelligence startup SambaNova Systems Inc. today announced a revamped version of its flagship DataScale system, featuring a next-generation processor that it says massively enhances performance and supports much larger machine learning models than before.

SambaNova is an extremely well-funded startup that has designed and built an integrated hardware and software platform for running AI and deep learning workloads from the data center to the edge. The company says that by integrating hardware and software that are both optimized for AI, it creates a reconfigurable “dataflow” architecture in which applications drive optimized hardware configurations, so the underlying software is not constrained by the limitations of fixed hardware.

The first edition of DataScale was launched in December 2020, powered by customized seven-nanometer chips. SambaNova says they’re more attuned to machine learning and deep learning processes than the general-purpose central processing units and graphics processing units that power most AI workloads.

The reconfigurable dataflow architecture runs an open-source software stack known as SambaFlow that ensures each machine learning model runs optimally on the system, chiefly by minimizing the need to shuttle data to and from dynamic random-access memory. That eliminates a key bottleneck in AI: the interconnect between processors and memory.
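To make that idea concrete, here is a deliberately generic sketch in Python. It is not SambaNova’s actual SambaFlow interface, just an illustration of the general principle: step-by-step execution fully materializes every intermediate result, a stand-in for round trips through off-chip memory, while a fused “dataflow” schedule keeps each intermediate local to the small piece of work that produces and consumes it.

```python
# Illustration only: a generic sketch of the "dataflow" idea, not SambaNova's
# SambaFlow API. Step-by-step execution writes every intermediate result out
# in full; the fused version computes the same model row by row so each
# intermediate stays local (the analogue of staying in on-chip memory).
import numpy as np

def step_by_step(x, w1, w2):
    h = x @ w1            # full intermediate materialized
    h = np.maximum(h, 0)  # read back, written out again
    return h @ w2         # read back one final time

def fused_dataflow(x, w1, w2):
    out = np.zeros((x.shape[0], w2.shape[1]))
    for i in range(x.shape[0]):           # one "tile" of work per row
        h_row = np.maximum(x[i] @ w1, 0)  # intermediate never leaves the tile
        out[i] = h_row @ w2
    return out

x = np.random.rand(4, 8)
w1 = np.random.rand(8, 16)
w2 = np.random.rand(16, 2)
assert np.allclose(step_by_step(x, w1, w2), fused_dataflow(x, w1, w2))
```

Both schedules compute the same result; the difference that matters on real hardware is how much data has to travel off-chip between steps, which is the traffic a dataflow compiler tries to minimize.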

SambaNova’s DataScale platform is available to buy, or to rent through its Dataflow-as-a-Service offering. For customers that choose the latter option, SambaNova will install and maintain the DataScale system in their on-premises data center, with usage-based pricing.

The revamped version of DataScale announced today is said to provide a number of improvements, including AI model training that’s even faster than on Nvidia Corp.’s widely used DGX A100 systems. It’s built around SambaNova’s next-generation processor, the enhanced Cardinal SN30 Reconfigurable Dataflow Unit, and it can support much larger AI models than before, with 12.8 times more memory capacity than the DGX A100, SambaNova said.
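For a rough sense of scale, the largest DGX A100 configuration ships with 640 gigabytes of GPU memory; if that is the baseline SambaNova is comparing against, 12.8 times 640 gigabytes works out to roughly 8 terabytes of memory per system.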

SambaNova said the new system is available only on a subscription basis through Dataflow-as-a-Service. The company will install and maintain the DataScale system within the customer’s on-premises data center and charge based on usage.

“The new DataScale SN30 system achieves world record-breaking performance when compared to the latest DGX A100 systems,” said Marshall Choy, senior vice president of product at SambaNova Systems. “With this release, SambaNova is also offering 100% subscription pricing for DataScale and Dataflow-as-a-Service, enabling organizations to achieve ROI faster, reduce risk, and scale more cost-effectively than with any other AI infrastructure.”

The company said enterprises are increasingly adopting AI to power a wide range of business applications. As such, it believes it makes sense to move away from tactical AI deployments to a more scalable, enterprise-wide solution. That’s exactly what SambaNova claims to offer with DataScale, consolidating “AI sprawl” into foundational models that can be trained once and reused across an organization.

SambaNova stands out in the AI space with its combination of hardware, software and solutions, explained Andy Thurai, vice president and principal analyst at Constellation Research Inc. “Given that the experimentation phase is over, most enterprises are now looking to produce scalable, enterprise-wide AI solutions that are easily trainable,” he said. “SambaNova’s Reconfigurable Dataflow Unit chips are slightly different to general-purpose CPUs and AI-specific GPUs that most AI initiatives rely on, so these can be a big differentiator, with the company claiming that they are multitudes better than Nvidia’s DGX A100 systems.”

Another benefit of SambaNova’s hardware and software combo is that it provides enterprises with high-performance data processing and model training capabilities with lower power consumption than rival offerings, the analyst added. He explained that high-performance computing workloads have always been a problem for enterprises, because they’re notoriously expensive to train and run.

“Most times, enterprises need to buy a whole solution set to run HPC workloads,” Thurai said. “With SambaNova’s subscription-based offering and its better price/performance ratio, it provides an alternative to Nvidia and specialist HPC providers. Only time will tell if it can win.”

SambaNova Systems is certainly competing in a tough industry, but it does at least boast multiple happy customers, including Lawrence Livermore National Laboratory.

“We look forward to deploying a larger, multirack system of the next generation of SambaNova’s DataScale systems,” said Bronis de Supinski, chief technology officer of Livermore Computing at LLNL. “Integration of this solution with traditional clusters throughout our center will enable the technology to have a deeper programmatic impact. We anticipate a two to six times performance increase, as the new DataScale system promises to significantly improve overall speed, performance and productivity.”

Photo: SambaNova Systems
