Google Cloud launches new AMD-based instances for high-performance workloads
Google LLC’s cloud business on Thursday announced the general availability of the C2D instance series, which is based on Advanced Micro Devices Inc.’s newest Epyc server processors.
The C2D instances target performance-intensive workloads such as semiconductor design software and databases. The launch extends AMD’s recent momentum in the cloud market, which helped the company double its data center revenue last quarter.
The new C2D series is part of a broader instance portfolio in Google Cloud that’s known as the Compute Optimized lineup. As the name suggests, the lineup is geared toward workloads that use a significant amount of processor capacity. The C2D series is more specialized, targeting workloads that require not only significant processor capacity but also large amounts of memory.
C2D instances can be configured with up to 112 virtual central processing units, 896 gigabytes of memory and 3 terabytes of flash storage. The storage is directly attached to the server running a customer’s instance, which improves performance because data takes less time to travel to and from the drives than if they were located elsewhere on the network.
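For readers who want to try the new machine type, below is a minimal, unofficial sketch of requesting a c2d-standard-112 VM with a directly attached local SSD through the google-cloud-compute Python client. The project ID, zone and boot image are placeholder assumptions, and the snippet omits production concerns such as service accounts, quotas and error handling.

```python
# A minimal sketch (not an official Google example) of creating a
# c2d-standard-112 VM with a local NVMe SSD via the google-cloud-compute
# Python client. Project ID, zone and boot image are placeholders.
from google.cloud import compute_v1

PROJECT = "my-project"       # placeholder project ID
ZONE = "us-central1-a"       # any zone where C2D machine types are offered

def create_c2d_instance(name: str = "c2d-demo"):
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-11",
            disk_size_gb=50,
        ),
    )
    # Local SSDs are scratch disks attached directly to the host machine,
    # here over NVMe; each local SSD partition is 375 GB and several can
    # be attached to reach the series' 3 TB maximum.
    local_ssd = compute_v1.AttachedDisk(
        type_="SCRATCH",
        auto_delete=True,
        interface="NVME",
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            disk_type=f"zones/{ZONE}/diskTypes/local-ssd",
        ),
    )
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{ZONE}/machineTypes/c2d-standard-112",
        disks=[boot_disk, local_ssd],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )
    # insert() returns a long-running operation; callers would normally wait on it.
    return compute_v1.InstancesClient().insert(
        project=PROJECT, zone=ZONE, instance_resource=instance
    )
```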
The C2D instance series is based on the third-generation Epyc server CPUs that AMD debuted last March. The CPUs use the Zen 3 core architecture, are manufactured on a seven-nanometer process and can carry out 19% more instructions per clock cycle than AMD’s previous-generation silicon.
Another key feature of AMD’s third-generation processors that contributes to the speed of Google’s new C2D instances is the chips’ L3 cache, 32 megabytes of which is shared by each eight-core complex. The L3 cache is a repository where a CPU’s cores keep the data they are processing for rapid access. The more cache capacity is available, the more data the cores can keep close at hand instead of fetching it from slower main memory.
Lynn Comp, corporate vice president of AMD’s cloud business, said that “the Google Cloud C2D instances with AMD EPYC processors show the continued growth of the AMD and Google Cloud collaboration, by now offering some of the highest performance instances for demanding, performance-intensive workloads.”
The C2D series comprises 21 instance configurations organized into three groups. There are seven standard instances, an equal number of machines featuring higher performance and another seven that offer increased memory.
The standard and performance-optimized C2D instances are geared toward workloads such as web servers. The memory-optimized instances, meanwhile, were designed for high-performance computing applications such as scientific simulations that are often deployed on supercomputers. Another supported use case: running electronic design automation software, which is used in chip design.
Google teamed up with AMD to compare the C2D series with its previous-generation N2D cloud instances, which also use AMD silicon. In one benchmark test, the companies’ engineers evaluated how well the C2D can run high-performance computing workloads. The instance series achieved 7% faster speeds than N2D when performing calculations with floating point numbers, the basic unit of data for many scientific applications.
The C2D also achieved 30% better results in the STREAM Triad memory bandwidth benchmark. Memory bandwidth is a measure of how fast a processor can move data to and from memory. It directly influences application performance because programs can only start processing data after it has been retrieved from RAM.
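For context, STREAM Triad measures sustained memory bandwidth with the simple kernel a[i] = b[i] + scalar * c[i]. The sketch below reproduces that kernel in NumPy purely to illustrate what the benchmark reports; the real STREAM suite is a tuned C/Fortran program, and its published figures are not comparable to this approximation.

```python
# Rough NumPy illustration of the STREAM Triad kernel: a = b + scalar * c.
# This only shows what the benchmark measures; it is not the STREAM suite.
import time
import numpy as np

N = 50_000_000                 # ~400 MB per float64 array, far larger than cache
scalar = 3.0
b = np.random.rand(N)
c = np.random.rand(N)
a = np.empty_like(b)

start = time.perf_counter()
np.multiply(c, scalar, out=a)  # a = scalar * c
np.add(b, a, out=a)            # a = b + scalar * c
elapsed = time.perf_counter() - start

# Triad counts three 8-byte-per-element streams: read b, read c, write a.
bytes_moved = 3 * N * 8
print(f"Approximate memory bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")
```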
Google said that the C2D series’ increased speed translates into “material” performance improvements across several use cases, including weather forecasting, molecular dynamics and computational fluid dynamics.
Google is also promising increased cost efficiency: it found that the c2d-standard-112 instance is 6% more cost-efficient than the comparable n2d-standard-128 machine from its previous-generation N2D instance series.
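As a loose illustration of what such a comparison involves, cost efficiency here can be read as benchmark performance delivered per dollar of on-demand spend. The figures in the sketch below are made-up placeholders, not published prices or scores.

```python
# Hypothetical illustration of a performance-per-dollar comparison.
# The scores and hourly prices are placeholders, not published figures.
def cost_efficiency(benchmark_score: float, price_per_hour: float) -> float:
    """Benchmark score delivered per dollar of on-demand spend."""
    return benchmark_score / price_per_hour

c2d = cost_efficiency(benchmark_score=100.0, price_per_hour=4.00)  # placeholder
n2d = cost_efficiency(benchmark_score=107.0, price_per_hour=4.55)  # placeholder
print(f"Relative advantage of C2D: {(c2d / n2d - 1) * 100:+.1f}%")
```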
Over the last few years, Google has rolled out AMD processors across several instance lineups in its public cloud. Increased demand from Google and other hyperscale data center operators is one of the major contributors to AMD’s recent earnings momentum: last quarter, the chipmaker’s data center revenue doubled year-over-year, an increase it attributed to growing adoption of its Epyc server processors among cloud providers and enterprises.
Image: Google