UPDATED 01:54 EDT / NOVEMBER 27 2018


AWS beefs up its cloud with its own processor, and much more

Kicking off its annual re:Invent cloud conference with a bang Monday night, Amazon Web Services Inc. debuted a wide array of new services, among them a new cloud chip of its own design.

Dubbed Graviton and available to cloud customers through AWS’ EC2 compute service, the Arm-based chip was designed by Annapurna Labs, the chip developer Amazon bought in 2015.

The chips’ key appeal is cheaper computing: Amazon said they can run applications at up to 45 percent lower cost than the Intel Corp. and Advanced Micro Devices Inc. chips that AWS also offers for rent.

Peter DeSantis (pictured), vice president of AWS Global Infrastructure and Customer Support, said during an evening presentation that the new instances built on the chip, a family called A1, are intended for so-called scale-out applications that can run across many machines, a quality of many web applications. More specifically, they’re good for containerized microservices, or bits of applications bundled up so they can run on many kinds of computers and software, as well as web servers, development environments and fleets of data caching servers.
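For developers curious what that looks like in practice, here’s a minimal sketch using the boto3 Python SDK, assuming AWS credentials are already configured; the AMI ID and key pair name are placeholders that would need to point at a real arm64 image and an existing key pair.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a single a1.medium instance, one of the new Graviton-based A1 sizes.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: an arm64 Amazon Linux 2 AMI
    InstanceType="a1.medium",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key",                  # placeholder: an existing EC2 key pair
)

print(response["Instances"][0]["InstanceId"])
```

From there the instance behaves like any other EC2 instance, just running on Arm rather than x86 silicon.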

The Graviton chip is a sign of the ascendance of Arm chips, which traditionally have been used mostly in lower-power devices such as smartphones and more recently in the likes of network routers. Now, they’re starting to find their way into mainstream data center servers, and the A1 compute instance using the Graviton chip means it’s now in the cloud.

At the same time, it’s a signal that Intel and AMD no longer have the data center or the cloud to themselves, though Intel still dominates and AMD has been making gains with its Epyc chips. Graviton-based instances can run applications written for Amazon Linux, Red Hat Enterprise Linux and Ubuntu, and the A1 instances are available in three U.S. AWS regions and one European region.

“This was one of the big breaks Arm needed to increase its credibility in the server ecosystem,” said Patrick Moorhead, president and principal analyst at Moor Insights & Strategy. “Arm is already prevalent in networking and storage, but not in general purpose compute. AWS was very clear that its strategy is to have the widest array of compute options, including Intel, AMD and Arm, which could very well drive a ripple effect through the industry.”

Graviton wasn’t the only announcement of beefed-up capabilities at AWS, still the dominant provider of cloud infrastructure services. The company also announced other new compute instances, including one geekily called P3dn that offers the power of graphics processing units for running machine learning, artificial intelligence and high-performance computing applications. It uses Nvidia Corp.’s high-end V100 Volta chip to provide what Matt Wood, AWS’ general manager of AI, called the largest, fastest training instances in the cloud.

Yet another new instance, called C5n, offers higher network bandwidth for running compute-intensive applications. That one actually got the most cheers of the evening from the large audience of developers.

Also debuting Monday night was a new tool called Global Accelerator designed to help AWS customers route their network traffic more easily across multiple cloud regions. That’s important because many customers need to run in more than one region to improve speed, keep applications from going down if one region has an issue, or satisfy regulatory requirements.

AWS plans to charge customers based on the number of accelerators they create. “An accelerator is the resource you create to direct traffic to optimal endpoints over the AWS global network,” Shaun Ray, an AWS senior manager of developer operations, wrote in a blog post. “Customers will typically set up one accelerator for each application, but more complex applications may require more than one accelerator.”

Global Accelerator is now available in several regions in the U.S., Europe and Asia.
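To give a sense of what creating an accelerator involves, here’s a minimal boto3 sketch, again assuming credentials are configured; the application name and port are illustrative, and the Global Accelerator API is called through the us-west-2 region regardless of where the traffic endpoints actually run.

```python
import uuid
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

# One accelerator per application, as the blog post suggests.
accelerator = ga.create_accelerator(
    Name="my-web-app",                     # illustrative application name
    IpAddressType="IPV4",
    Enabled=True,
    IdempotencyToken=str(uuid.uuid4()),
)
arn = accelerator["Accelerator"]["AcceleratorArn"]

# A listener defines the protocol and ports the accelerator accepts traffic on.
ga.create_listener(
    AcceleratorArn=arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 80, "ToPort": 80}],
    IdempotencyToken=str(uuid.uuid4()),
)

print(arn)
```

Endpoint groups pointing at load balancers or Elastic IP addresses in specific regions would then be attached to the listener so the accelerator has somewhere to send the traffic.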

Also on the networking front, AWS announced Elastic Fabric Adapter, a network adapter for Amazon EC2 instances that the company claims will deliver the performance of on-premises high-performance computing clusters.

Not least, AWS debuted a lightweight virtualization technology for “serverless” computing called Firecracker, which aims to let customers use compute services without having to provision servers and networking themselves.
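Firecracker itself is controlled through a small REST API served over a Unix domain socket, so a rough Python sketch of configuring and booting one of its microVMs might look like the following; this assumes a firecracker process is already listening on the socket path shown, and the kernel and root filesystem paths are placeholders.

```python
import http.client
import json
import socket


class UnixSocketConnection(http.client.HTTPConnection):
    """HTTPConnection that connects over a Unix domain socket instead of TCP."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)


def api_put(path, body):
    # Send one PUT request to the Firecracker API and return the HTTP status.
    conn = UnixSocketConnection("/tmp/firecracker.socket")  # path given to --api-sock
    conn.request("PUT", path, json.dumps(body),
                 headers={"Content-Type": "application/json"})
    status = conn.getresponse().status
    conn.close()
    return status


# Tell the microVM which kernel to boot and with what arguments.
api_put("/boot-source", {"kernel_image_path": "vmlinux",
                         "boot_args": "console=ttyS0 reboot=k panic=1"})

# Attach a root filesystem as the boot drive.
api_put("/drives/rootfs", {"drive_id": "rootfs",
                           "path_on_host": "rootfs.ext4",
                           "is_root_device": True,
                           "is_read_only": False})

# Start the microVM.
api_put("/actions", {"action_type": "InstanceStart"})
```

Each PUT returns an HTTP success status, after which the microVM boots.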

Photo: Robert Hof/SiliconANGLE
