Data processing unit offers new architectural solution for cloud data center networks
One of the maxims in the technology world is that much of what powered innovation over the past decade will likely be replaced by something new in the current decade. The rise of the data processing unit provides a prime example of this reality.
By more efficiently executing data-centric computations within server nodes, the DPU offers the potential for significant improvement in next-generation cloud architectures. This is an important development because the demands of critical network, storage, virtualization and security functions have outstripped the capabilities of general-purpose central processing units.
“CPUs are not good at executing these data-centric computations, and in a compute centric cloud architecture, the interactions between server nodes are very inefficient,” said Pradeep Sindhu (pictured), founder and chief executive officer of Fungible Inc. “What we are looking to do at Fungible is to solve these two basic problems.”
Sindhu spoke with Dave Vellante, host of theCUBE, SiliconANGLE Media’s livestreaming studio, during theCUBE on Cloud event. They discussed the need to address processing issues within CPU architecture, the use of high-level code as a solution, improving cloud network efficiency and how the DPU is different from other processor offerings in the enterprise.
Playing traffic cop
At the heart of the workload processing dilemma is the reality that cloud data center servers are built using general-purpose x86 CPUs. This architectural model depends on being able to scale out identical or near-identical servers, all connected to a standard IP ethernet network.
That might have been adequate in a time before cloud computing required the processing of data-heavy workloads, but artificial intelligence applications, which rely on vast amounts of information, have changed the game dramatically. CPUs are now being asked to run applications and direct traffic for I/O, according to Sindhu.
“The architecture of these CPUs was never designed to play traffic cop,” Sindhu said. “You’re interrupting the CPU many millions of times a second. It’s critical to understand that in this new architecture, where there is a lot of data and a lot of east-west traffic, the percentage of the workload that is data-centric has gone from maybe 1% to 2% to 30% to 40%.”
Fungible’s solution sharply increases the number of threads of high-level code a processor can execute at once. The DPU contains at least 1,000 different threads to handle concurrent computations, according to Sindhu, and the company has also made the chip’s transistors far more efficient.
“Our architecture consists of very heavily multithreaded general-purpose CPUs combined with very heavily threaded specific accelerators,” Sindhu said. “We’ve improved the efficiency of those transistors by 30 times.”
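To make the heavily multithreaded idea concrete, here is a minimal Go sketch of a run-to-completion worker pool that keeps data-centric operations off a single “application” thread. It is purely illustrative: the worker count, the CRC computation standing in for a hardware accelerator and all names are assumptions, not Fungible’s design.

```go
// Toy sketch of the "many threads plus accelerators" idea, not Fungible's
// design: a pool of workers handles data-centric operations (a CRC stands
// in for a fixed-function accelerator) so one stalled item doesn't block
// the rest, the way interrupts stall a general-purpose CPU.
package main

import (
	"fmt"
	"hash/crc32"
	"sync"
)

// accelerate stands in for a fixed-function engine such as a checksum,
// compression or crypto block.
func accelerate(payload []byte) uint32 {
	return crc32.ChecksumIEEE(payload)
}

func main() {
	const workers = 8 // a real DPU would run thousands of hardware threads
	jobs := make(chan []byte, workers)
	var wg sync.WaitGroup

	// Each worker runs one item to completion, keeping data-centric work
	// out of the host CPU's interrupt path.
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for payload := range jobs {
				fmt.Printf("worker %d: crc=%08x\n", id, accelerate(payload))
			}
		}(w)
	}

	for i := 0; i < 32; i++ {
		jobs <- []byte(fmt.Sprintf("packet-%d", i))
	}
	close(jobs)
	wg.Wait()
}
```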
Boosting network utilization
In addition to building a suitable replacement for the CPU, technologists must also address another issue inherent in current IP ethernet-based networks. Utilization rates are often low, according to Sindhu, so his company is pursuing a solution using its Fabric Control Protocol.
As described in a patent filing, FCP sprays the individual packets of a given data flow across multiple paths in a data center switch fabric.
“We were trying to solve the specific problem of data-centric computations and improving node-to-node efficiency,” Sindhu said. “When you embed FCP in hardware on top of a standard IP ethernet network, you end up with the ability to run at very large scale where the utilization of the network is 90% to 95%, not 20% to 25%.”
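The packet-spraying idea can be illustrated with a short sketch. The Go code below contrasts conventional per-flow ECMP hashing, which pins every packet of a flow to one path, with per-packet spraying across all available paths. The function names and packet structure are hypothetical and do not reflect the actual FCP hardware, which also handles reordering and congestion signaling.

```go
// Minimal sketch contrasting per-flow ECMP hashing with per-packet
// "spraying" across a switch fabric. Illustrative only.
package main

import (
	"fmt"
	"hash/fnv"
)

type packet struct {
	flowID string // e.g. the 5-tuple of a TCP connection
	seq    int    // sequence number within the flow
}

// ecmpPath pins every packet of a flow to one path: a single heavy flow
// can saturate one link while parallel paths sit idle.
func ecmpPath(p packet, numPaths int) int {
	h := fnv.New32a()
	h.Write([]byte(p.flowID))
	return int(h.Sum32()) % numPaths
}

// sprayPath distributes consecutive packets of the same flow across all
// paths, spreading one flow's load over the whole fabric. A real design
// must reorder packets at the receiver.
func sprayPath(p packet, numPaths int) int {
	return p.seq % numPaths
}

func main() {
	const numPaths = 4
	for seq := 0; seq < 8; seq++ {
		p := packet{flowID: "10.0.0.1:443->10.0.0.2:5001", seq: seq}
		fmt.Printf("pkt %d  ecmp->path %d   spray->path %d\n",
			seq, ecmpPath(p, numPaths), sprayPath(p, numPaths))
	}
}
```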
The DPU’s potential role in solving network efficiency and computational issues resembles another trend in enterprise processor technology: the rise of smart network interface card, or SmartNIC, architectures over the past several years. A SmartNIC is an embedded microprocessor that offloads functions from the host.
At least 10 vendors have launched SmartNICs since 2017, and VMware Inc. is re-architecting the hybrid cloud around them through Project Monterey. However, Sindhu is careful to note the difference between the two technologies.
“A SmartNIC is not a DPU,” Sindhu said. “It’s simply taking general purpose Arm cores, putting in a network and PCI interface, integrating them all on the same chip and separating them from the CPU. It solves the problem of the data-centric workload interfering with the application workload, but it does not address the architectural problem of how to address data-centric workloads efficiently.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of theCUBE on Cloud event:
Photo: SiliconANGLE