UPDATED 16:54 EST / OCTOBER 11 2022

Google Cloud debuts new cloud instances with custom infrastructure processing unit

Google LLC’s cloud business today introduced a new series of cloud instances powered by a custom chip known as an infrastructure processing unit.

The new instance series, dubbed C3, is currently in public preview. It’s an improved iteration of Google Cloud’s existing C2 instances, which are designed to run high-performance workloads such as artificial intelligence applications. Google says that some early customers of the new C3 series have experienced a 20% performance increase when running certain workloads.
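For developers who want to try the preview, the following sketch shows roughly how a C3 virtual machine could be created with the google-cloud-compute Python client library. It is an illustration only: the machine type name (c3-highcpu-8), project, zone and boot image are assumptions, not values confirmed in Google’s announcement.

    # Minimal sketch: create a C3 VM with the google-cloud-compute Python client.
    # The machine type, project, zone and image below are illustrative assumptions.
    from google.cloud import compute_v1

    project = "my-project"   # hypothetical project ID
    zone = "us-central1-a"   # hypothetical zone with C3 capacity
    machine_type = f"zones/{zone}/machineTypes/c3-highcpu-8"  # assumed C3 machine type name

    # Boot disk initialized from a public Debian image.
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-11",
            disk_size_gb=20,
        ),
    )

    instance = compute_v1.Instance(
        name="c3-preview-test",
        machine_type=machine_type,
        disks=[boot_disk],
        network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
    )

    client = compute_v1.InstancesClient()
    operation = client.insert(project=project, zone=zone, instance_resource=instance)
    operation.result()  # block until the VM is created

The same request could also be made through the Google Cloud console or the gcloud command-line tool; the client library is used here only to keep the example self-contained.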

Specialized chip

The new C3 instances run on servers equipped with specialized chips known as infrastructure processing units, or IPUs. Google Cloud developed the chips in collaboration with Intel Corp. as part of an effort that the latter company first detailed last June. This past May, Intel debuted the first IPU to emerge from the collaboration, which it refers to as the E2000.

According to Reuters, the C3 instances that were announced today are based on the E2000. Intel reportedly has the option to sell the chip to other customers besides Google Cloud.

In addition to running customer applications, a cloud provider’s servers perform auxiliary tasks such as processing network traffic. Such auxiliary tasks are usually carried out by a server’s central processing unit. Google Cloud’s custom E2000 chip offloads several of those tasks from the CPU, which frees up processing capacity and thereby speeds up customer applications.

The E2000 is optimized primarily to perform networking tasks. The chip can encrypt the network traffic generated by a cloud application, as well as increase the speed with which the traffic reaches its destination. Intel has also equipped the E2000 with features that enable servers to move data to and from flash storage faster.

The E2000 features several computing modules, each optimized to perform a different task.

According to Intel, the E2000 can be configured with up to 16 CPU cores based on Arm Ltd.’s Neoverse N1 processor design. The Neoverse N1 is optimized specifically for use in data center servers and offers 30% better power efficiency than Arm’s previous-generation chip. Since its introduction in 2019, the processor design has also been adopted by other major cloud providers.

Intel combined the CPU cores in the E2000 with a collection of processing modules designed for networking tasks. One of the modules, which is optimized to encrypt network traffic, is based on the QuickAssist technology in Intel’s flagship Xeon line of server CPUs. There’s also a dual-core processor designed to speed up infrastructure management tasks.

Alongside its core networking features, the E2000 includes capabilities that can accelerate the movement of data between servers and storage hardware. The chip supports the NVMe-oF protocol for accessing flash storage. The protocol makes it possible to perform certain storage-related operations without involving the host’s operating system or CPU, which improves performance.

According to Google Cloud, the E2000 can process up to 200 gigabits of network traffic per second for its new C3 cloud instances. The chip may be used alongside the Hyperdisk block storage technology that Google Cloud debuted last month. According to the search giant, E2000 chips and Hyperdisk can together enable C3 instances to process 80% more input and output operations per second per vCPU than competitors. 

“And compared with the previous generation C2, C3 VMs with Hyperdisk deliver 4x higher throughput and 10x higher IOPS,” Nirav Mehta, Google Cloud’s senior director of product management for cloud infrastructure solutions, detailed in a blog post today. “Now, you don’t have to choose expensive, larger compute instances just to get the storage performance you need for data workloads such as Hadoop and Microsoft SQL Server.”
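As a rough illustration of pairing the two, the sketch below provisions a Hyperdisk volume with a custom IOPS target and attaches it to an existing C3 virtual machine using the same Python client library. The disk type name (hyperdisk-extreme), the provisioned IOPS value and the resource names are assumptions for illustration, not specifics from Google’s announcement.

    # Rough sketch: create a Hyperdisk volume and attach it to an existing C3 VM.
    # The disk type name, IOPS target and resource names are illustrative assumptions.
    from google.cloud import compute_v1

    project = "my-project"       # hypothetical project ID
    zone = "us-central1-a"       # hypothetical zone
    vm_name = "c3-preview-test"  # hypothetical existing C3 instance

    # Create the Hyperdisk volume with a provisioned IOPS target.
    disk = compute_v1.Disk(
        name="hyperdisk-data",
        size_gb=500,
        type_=f"zones/{zone}/diskTypes/hyperdisk-extreme",  # assumed Hyperdisk type name
        provisioned_iops=20000,
    )
    disks_client = compute_v1.DisksClient()
    disks_client.insert(project=project, zone=zone, disk_resource=disk).result()

    # Attach the new volume to the running C3 instance.
    attached = compute_v1.AttachedDisk(
        source=f"projects/{project}/zones/{zone}/disks/{disk.name}",
        auto_delete=False,
    )
    instances_client = compute_v1.InstancesClient()
    instances_client.attach_disk(
        project=project, zone=zone, instance=vm_name, attached_disk_resource=attached
    ).result()

Decoupling a volume’s size from its provisioned performance is the point Mehta’s post highlights: storage performance can be tuned without moving to a larger instance.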

Sapphire Rapids 

In addition to the E2000 IPU, the new C3 instances incorporate CPUs from Intel’s fourth-generation Xeon processor line, commonly referred to as Sapphire Rapids. The line is expected to launch early next year and will include CPUs with core counts in the dozens.

Intel recently detailed that chips in the Sapphire Rapids series will feature several onboard accelerators, circuits optimized to perform specific computing tasks. One of the accelerators is optimized to encrypt data. Others focus on use cases such as AI and load balancing, the process of evenly distributing computing tasks among servers to optimize performance.

“A first of its kind in any public cloud, C3 VMs will run workloads on 4th Gen Intel Xeon Scalable processors while they free up programmable packet processing to the IPUs securely at line rates of 200Gb/s,” said Nick McKeown, senior vice president and general manager of Intel’s network and edge group. “This Intel and Google collaboration enables customers through infrastructure that is more secure, flexible and performant.”

Image: Google
