AWS launches Deep Learning Containers and new AI infrastructure options
Fresh off landing a major infrastructure deal with Volkswagen AG, Amazon Web Services Inc. today introduced new features that will give enterprises more flexibility in how they use its cloud platform.
The main highlight is a software bundle dubbed AWS Deep Learning Containers. It consists of popular artificial intelligence tools from the open-source ecosystem that the provider has packaged into Docker containers, allowing them to be easily deployed on different types of AWS compute instances. They’re intended to let engineers set up a cloud-based AI development environment in as little as a few minutes.
The Deep Learning Containers also pack a number of optimizations that improve AI performance. AWS’ prepackaged edition of the popular TensorFlow deep learning framework, for instance, can train neural networks up to twice as fast as the vanilla version. This speed boost is provided by customizations that allow the software to more efficiently distribute work across graphics cards in the provider’s cloud platform.
TensorFlow is one of only two AI tools available as a Deep Learning Container at launch, the other being Apache MXNet. Amazon said it will add more frameworks to the roster over time.
Matt Wood (pictured), general manager of artificial intelligence at AWS, said the Deep Learning Containers are intended to help companies quickly set up deep learning environments with optimized container images. With these and other services, he said, Amazon aims to make AI a lot easier to use. “We want to make machine learning boring,” he said.
AWS unveiled the bundle in conjunction with a new automation tool for its Redshift data warehouse that likewise aims to reduce administrative overhead for customers. Dubbed Concurrency Scaling, the mechanism can allocate additional processing power when there’s a usage spike and deprovision the extra resources once they’re no longer needed. AWS also took the opportunity to officially release App Mesh, its tool for monitoring and controlling network traffic between application components, into general availability.
The new features are rolling out alongside a trio of infrastructure options aimed mainly at enterprises that are looking to cut their cloud expenses. The first, Glacier Deep Archive, is a new tier in the provider’s S3 object storage service that’s designed for safekeeping infrequently accessed data such as financial audit logs. It’s up to 75 percent cheaper than the existing S3 Glacier tier that AWS has offered for such use cases.
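In practice, data is usually moved into an S3 storage tier like this via a lifecycle rule on a bucket. The sketch below is illustrative only: the `DEEP_ARCHIVE` storage class name, the rule ID, and the `audit-logs/` prefix are assumptions for the example, not details confirmed in the announcement.

```json
{
  "Rules": [
    {
      "ID": "archive-audit-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "audit-logs/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```

A rule along these lines would shift objects under the given prefix into the low-cost tier 90 days after creation, which matches the archival use cases AWS describes.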
The provider is also launching new versions of its M5a and R5a compute instance families. Introduced in November, these instances use Advanced Micro Devices Inc. chips that make them 10 percent cheaper than comparable Intel Corp. Xeon-based AWS machines.
Today’s update provides the ability to provision M5a and R5a nodes with 75 gigabytes’ to 3.6 terabytes’ worth of direct-attached NVMe flash drives. These offer faster access times than regular storage because they sit in close physical proximity to the underlying servers.
With reporting from Robert Hof
Photo: Robert Hof/SiliconANGLE