UPDATED 13:15 EDT / DECEMBER 16 2021

CLOUD

Inside AWS’ decision to make its own custom chips

Now 15 years old, Amazon Elastic Compute Cloud, or EC2, offers more than 475 instance types and remains one of Amazon Web Services Inc.’s most potent weapons.

EC2 is eyeing 500 instance types by the end of the year, driven chiefly by the push to optimize price performance and support new workloads. The AWS Graviton3 central processing unit, available in the new C7g instance, is the latest entrant in the AWS Graviton processor family and is designed as a stepping stone to better performance in areas such as multithreaded applications, according to David Brown (pictured), vice president for EC2 at AWS.

“We’ve had some customers report up to 80% performance improvements from Graviton2 to Graviton3 when the application was more of a single-threaded application,” he explained. “With Graviton3, we’ve seen a significant performance boost for video encoding and cryptographic algorithms, which really impacts even the most modern applications.”
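To make the C7g discussion concrete, here is a minimal boto3 sketch of launching a Graviton3-based C7g instance. The AMI ID, instance size, region and tag are illustrative placeholders rather than values from the interview.

```python
# A minimal sketch (not AWS' own tooling): launching a Graviton3-based
# C7g instance with boto3. The AMI ID is a placeholder and must point to
# an arm64 image, since Graviton is an Arm-based processor.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: any arm64-compatible AMI
    InstanceType="c7g.xlarge",        # C7g instances run on Graviton3
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "graviton3-eval"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```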

Brown spoke with John Furrier and Dave Vellante, co-hosts of theCUBE, SiliconANGLE Media’s livestreaming studio, during AWS re:Invent. They discussed EC2, the evolution of Graviton and AWS’ network coverage. (* Disclosure below.)

Graviton is at an inflection point

New discoveries often begin as experiments that tune the engine and spark the ecosystem. That was the case with the AWS Nitro chips, which paved the way for a general-purpose server chip, according to Brown.

“In 2018, we launched the A1 instance, which was our Graviton1 instance,” he pointed out. “What we didn’t tell people at the time is that it was actually the same chip we were using on our network card. So, essentially, it was a network card that we were giving to you as a server.”

The creation of Graviton triggered growth in the ecosystem because customers started to see improvements immediately. Moreover, Graviton uses high-frequency CPUs to run workloads, according to Brown.

“We knew that a year later, Graviton2 was going to come out. Graviton2 was just an amazing chip,” he added. “It continues to see incredible adoption, 40% price-performance improvement over other instances.”

That 40% price-performance improvement was the turning point for the Graviton processor family, Brown believes, because the gain held across a wide range of workloads.

“We actually just had SAP, who obviously is an enormous enterprise supporting enterprises all over the world, announced that they are going to be moving the S/4HANA Cloud to run on Graviton2,” he stated. “We’ve seen enterprises of that scale and game developers, every single vertical looking to move to Graviton2 and get that 40% price-performance [boost].”

For machine learning and cryptographic workloads, Graviton3 offers three times and twice the performance of Graviton2, respectively.

Migrating across Graviton generations is easy, according to Brown.

“It’s relatively straightforward … we just ran the Graviton2 four-day challenge, and we did that because we actually had an enterprise migrate one of the largest production applications in just four days,” he stated.
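A common first step in such a migration is simply confirming that an arm64 build of the base operating system image exists. The sketch below is a hedged illustration rather than AWS’ prescribed process: it uses boto3 to find the latest official Amazon Linux 2 arm64 AMI.

```python
# Minimal sketch: locating an arm64 (Graviton-compatible) AMI as a first
# step in porting an x86 workload. The owner and name filter target the
# official Amazon Linux 2 arm64 images; adjust for other distributions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

images = ec2.describe_images(
    Owners=["amazon"],
    Filters=[
        {"Name": "architecture", "Values": ["arm64"]},
        {"Name": "name", "Values": ["amzn2-ami-hvm-*-arm64-gp2"]},
        {"Name": "state", "Values": ["available"]},
    ],
)

# Pick the most recently published matching image.
latest = max(images["Images"], key=lambda img: img["CreationDate"])
print(latest["ImageId"], latest["Name"])
```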

Because AWS uses the cloud for all of its electronic design automation, Brown believes the time constraint is easily addressed: Testing is done ahead of time, enabling an accurate tape-out, the final stage of the design process before a chip goes to manufacturing.

“We’re making sure that the Annapurna [Labs] team that’s building those CPUs is deeply engaged with my team … so when that chip arrives from tape-out, I’m not waiting nine months or two years, like would normally be the case,” he explained. “I actually had an instance up and running within a week or two on somebody’s desk studying to do the integration.”

Being able to scale is advantageous, because it boosts quality and drives down annual failure rates, according to Brown.

“The other place that scale really helps is in capacity,” he said. “Being able to make sure that we can absorb things like the COVID spike or the stuff you see in the financial industry with just enormous demand for compute, we can do that because of our scale.”

Ease of use and the elimination of complexity are among the factors that have made AWS Cloud WAN one of the largest networks, according to Brown.

“Customers are starting to use that for their branch office communication,” he said. “So instead of going and provisioning their own international MPLS networks and that sort of thing, they say, ‘Let me onboard to AWS with VPN or Direct Connect.’”
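For readers curious what the VPN side of that onboarding can look like, the following is a minimal boto3 sketch of a Site-to-Site VPN setup for a branch office. The ASN and public IP are placeholders, and the Cloud WAN or Direct Connect attachment itself would be configured separately.

```python
# Minimal sketch: connecting a branch office to AWS over Site-to-Site VPN.
# The BGP ASN and public IP are placeholder values; attaching the VPN to
# AWS Cloud WAN or using Direct Connect instead is a separate step.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The branch office's on-premises router, represented as a customer gateway.
cgw = ec2.create_customer_gateway(
    BgpAsn=65000,              # placeholder private ASN
    PublicIp="203.0.113.10",   # placeholder branch-office public IP
    Type="ipsec.1",
)

# The AWS-side virtual private gateway.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")

# The IPsec VPN connection tying the two together.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    Type="ipsec.1",
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```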

Here’s the complete video interview, part of SiliconANGLE and theCUBE’s coverage of AWS re:Invent. (* Disclosure: AWS sponsored this segment of theCUBE. Neither AWS nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
