UPDATED 14:55 EDT / MAY 21 2018

Google launches new features for running AI and software containers on its cloud

Google Inc. today rolled out new features for its cloud platform that address two of the most important technology trends in the enterprise: software containers and artificial intelligence.

Just one of the enhancements focuses on AI, but it’s arguably among the most significant in the bunch. Companies that use the Google Cloud ML Engine to build machine learning models can now harness the search giant’s internally designed Tensor Processing Units in their projects.
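
For ML Engine users, the TPU option is exposed through a training job's configuration rather than a separate service. The snippet below is a minimal sketch of what such a job submission might look like using the Python client for the Cloud ML Engine REST API; the project, bucket, module and runtime-version values are placeholder assumptions rather than details from Google's announcement.

```python
# A rough sketch of requesting a Cloud TPU for a Cloud ML Engine training job
# through the service's REST API and its Python client. All identifiers
# (project, bucket, package, module, job name) are hypothetical placeholders.
from googleapiclient import discovery

ml = discovery.build("ml", "v1")  # Cloud ML Engine API client

job_spec = {
    "jobId": "tpu_training_demo",  # hypothetical job name
    "trainingInput": {
        "scaleTier": "BASIC_TPU",  # requests a Cloud TPU worker instead of CPUs or GPUs
        "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],  # hypothetical trainer package
        "pythonModule": "trainer.task",  # hypothetical entry point
        "region": "us-central1",
        "runtimeVersion": "1.8",  # assumed TPU-capable runtime version
    },
}

response = ml.projects().jobs().create(
    parent="projects/my-project",  # hypothetical project ID
    body=job_spec,
).execute()
print(response)
```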

A TPU is made up of four application-specific integrated circuits tuned for running AI software. The chip series, which powers several of Google’s consumer services, became available via its cloud platform in February.

In today’s announcement, the company cited a recent study from AI startup RiseML GmbH that compared Google’s TPUs with Nvidia Corp.’s Volta V100 graphics cards. The study found Google’s chips can perform certain AI tasks for just over half the cost of a comparable V100 deployment on Amazon Web Services. Those savings have the potential to add up in a big way for large enterprises, although it’s worth noting that at least one analyst, Karl Freund of Moor Insights & Strategy, has taken issue with TPUs’ claimed cost efficiency.

The other features introduced today are rolling out for Google’s Kubernetes Engine. The service provides a cloud environment for running software containers, an increasingly popular means of deploying applications. Enterprises are adopting the technology because it lets their developers package code into lightweight, portable units that can simplify operations.

Kubernetes Engine 1.10 brings with it several features geared toward large organizations. The first is an “auto-repair” mechanism that monitors cluster nodes’ health checks and automatically repairs nodes that fail them. As an added precaution against outages, companies can also make use of the newly added Regional Persistent Disks and Regional Clusters. These configuration options let administrators distribute a deployment across multiple zones in a region to reduce the risk of downtime.
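
The auto-repair behavior is a per-node-pool setting rather than a cluster-wide default. A minimal sketch of enabling it at cluster creation time through the Kubernetes Engine API’s Python client might look like the following; the project, zone and cluster names are assumptions for illustration, and the Regional Clusters and Regional Persistent Disks options are configured separately and not shown here.

```python
# A minimal sketch of turning on node auto-repair when creating a Kubernetes
# Engine cluster through the GKE REST API's Python client. The project, zone
# and cluster names are hypothetical placeholders.
from googleapiclient import discovery

gke = discovery.build("container", "v1")  # Kubernetes Engine API client

cluster_spec = {
    "cluster": {
        "name": "demo-cluster",  # hypothetical cluster name
        "nodePools": [
            {
                "name": "default-pool",
                "initialNodeCount": 3,
                "management": {
                    "autoRepair": True,  # recreate nodes that fail health checks
                },
            },
        ],
    },
}

response = gke.projects().zones().clusters().create(
    projectId="my-project",  # hypothetical project ID
    zone="us-central1-a",
    body=cluster_spec,
).execute()
print(response)
```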

Rounding out the list of new features is support for Shared Virtual Private Clouds. The capability lets administrators split a large environment into smaller, more manageable sections and assign each to a different administrator.

Image: Google
