UPDATED 14:55 EST / MAY 21 2018

GOOGLE CLOUD

Google launches new features for running AI and software containers on its cloud

Google Inc. today rolled out new features for its cloud platform that address two of the most important technology trends in the enterprise: software containers and artificial intelligence.

Just one of the enhancements focuses on AI, but it’s arguably among the most significant in the bunch. Companies that use the Google Cloud ML Engine to build machine learning models can now harness the search giant’s internally designed Tensor Processing Units in their projects.
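
For teams already training models with Cloud ML Engine, the option surfaces as a new scale tier when submitting a training job. The sketch below assumes a TensorFlow trainer packaged in the usual fashion and uses placeholder bucket, package and job names to show roughly how a TPU-backed job could be requested from the command line:

    # Submit a training job that runs on Cloud TPUs via the BASIC_TPU scale tier
    # (bucket, package path, module name and runtime version are illustrative)
    gcloud ml-engine jobs submit training tpu_demo_job \
        --staging-bucket gs://example-bucket \
        --package-path trainer/ \
        --module-name trainer.task \
        --region us-central1 \
        --runtime-version 1.8 \
        --scale-tier BASIC_TPU

The trainer code itself still has to target TPUs explicitly, for example through TensorFlow's TPU estimator APIs; the scale tier only provisions the hardware.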

A TPU is made up of four application-specific integrated circuits specifically tuned for running AI software. The chip series, which powers several of Google’s consumer services, became available via its cloud platform in February.

In today’s announcement, the company cited a recent study from AI startup RiseML GmbH that compared its TPUs with Nvidia Corp.’s Volta V100 graphics cards. The study found Google’s chips can perform certain AI tasks for just over half the cost of a comparable V100 deployment on Amazon Web Services. That savings has the potential to add up in a big way for large enterprises, although it’s worth noting that at least one analyst, Karl Freund of Moor Insights & Strategy, has taken issue with TPUs’ claimed cost efficiency.

The other features introduced today are rolling out for Google’s Kubernetes Engine. The service provides a cloud environment for running software containers, an increasingly popular means of deploying applications. Enterprises are adopting the technology because it lets their developers bundle code into lightweight, portable packages that can simplify operations.
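
As a rough sketch of the basic workflow Kubernetes Engine supports, the commands below create a small cluster and expose a containerized web application on it. The cluster name and zone are placeholders, the container image is a public Google-provided sample, and the kubectl syntax reflects the tooling of the time:

    # Create a three-node cluster and point kubectl at it
    gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 3
    gcloud container clusters get-credentials demo-cluster --zone us-central1-a

    # Run a sample container and expose it behind a load balancer
    kubectl run hello-web --image gcr.io/google-samples/hello-app:1.0 --port 8080
    kubectl expose deployment hello-web --type LoadBalancer --port 80 --target-port 8080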

Kubernetes Engine 1.10 brings with it several features geared toward large organizations. The first is an “auto-repair” mechanism that automatically repairs cluster nodes that fail their health checks. As an added precaution against outages, companies can also make use of the newly added Regional Persistent Disks and Regional Clusters. These options let administrators spread a deployment’s nodes, control plane and storage across multiple zones in a region to reduce the risk of downtime.
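
In gcloud terms, the new resilience options might look something like the following sketch. The cluster, disk, region and zone names are placeholders, and depending on the gcloud release some of these commands may live under the beta command group:

    # Regional cluster spread across the zones of us-central1,
    # with node auto-repair turned on (one node per zone here)
    gcloud container clusters create prod-cluster \
        --region us-central1 \
        --num-nodes 1 \
        --enable-autorepair

    # Regional persistent disk replicated across two zones in the same region
    gcloud compute disks create prod-data \
        --region us-central1 \
        --replica-zones us-central1-a,us-central1-b \
        --size 200GB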

Rounding out the list of new features is support for Shared Virtual Private Clouds. The capability lets administrators split a large environment into smaller, more manageable sections and assign each one to a different administrator.
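
A hedged sketch of how that setup is wired together with gcloud appears below. The project IDs, network, subnetwork and secondary range names are all placeholders, and the IAM grants that give the service project access to the host project’s subnet are omitted for brevity:

    # In the host project: turn on Shared VPC and attach a service project
    gcloud compute shared-vpc enable host-project-id
    gcloud compute shared-vpc associated-projects add service-project-id \
        --host-project host-project-id

    # In the service project: create a cluster on the host project's network
    gcloud container clusters create team-a-cluster \
        --project service-project-id \
        --zone us-central1-a \
        --enable-ip-alias \
        --network projects/host-project-id/global/networks/shared-net \
        --subnetwork projects/host-project-id/regions/us-central1/subnetworks/team-a-subnet \
        --cluster-secondary-range-name pods-range \
        --services-secondary-range-name services-range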

Image: Google
