UPDATED 13:38 EDT / JUNE 09 2022

AI

Google Cloud expands its Vertex AI platform with new machine learning tools

Google LLC’s cloud business today debuted a series of enhancements to its Vertex AI platform that will enable enterprises to develop artificial intelligence software faster.

Introduced last year, Vertex AI is a collection of cloud services for creating AI models. Some of the services in the platform are geared toward tech-savvy companies that build fully custom neural networks from scratch. Other Vertex AI components are designed to help developers with limited machine learning expertise create AI software more easily.

The new features made their debut today at the company’s Applied ML Summit. They span multiple areas including AI training, data management and neural network explainability.

Faster AI training 

The first major addition to Vertex AI is a capability called Reduction Server. Currently in preview, it promises to reduce the amount of time required to train neural networks. 

A neural network can’t immediately start generating insights after it’s developed, since it has to practice beforehand in a process known as AI training. Training a neural network can require a significant amount of time. To speed up the process, companies often train AI models using not a single server but an entire fleet of machines, which makes it possible to complete a large number of practice runs in parallel.

The machines that a company uses to train its AI software have to coordinate their work to ensure reliable processing. This coordination is usually handled by a specialized algorithm known as an all-reduce algorithm, which combines the gradient updates computed on each machine and distributes the result back to the entire fleet.

The more efficient the all-reduce algorithm that a company uses, the better its AI training servers can coordinate their work. That in turn allows neural networks to be trained faster.
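To make the idea concrete, here is a minimal sketch of the all-reduce step in data-parallel training, written with PyTorch’s generic torch.distributed API rather than Google’s proprietary Reduction Server algorithm: each worker computes gradients on its own slice of the data, and a single all-reduce call averages those gradients so every copy of the model stays in sync.

```python
# Minimal sketch of the all-reduce step in data-parallel training.
# Uses PyTorch's generic torch.distributed API, not Google's
# Reduction Server implementation, which is not publicly available.
import torch
import torch.distributed as dist


def average_gradients(model: torch.nn.Module) -> None:
    """Average each parameter's gradient across all training workers.

    Assumes dist.init_process_group() has already been called on every
    worker, e.g. dist.init_process_group(backend="nccl").
    """
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is None:
            continue
        # Sum this gradient tensor across every process in the group,
        # then divide to get the average. The efficiency of this single
        # collective call is what an all-reduce algorithm tries to optimize.
        dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
        param.grad /= world_size
```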

The new Reduction Server feature that Google is rolling out for Vertex AI is based on a custom all-reduce algorithm developed by the search giant. According to Google, the algorithm is more efficient than existing technologies: it reduces the amount of data that has to travel between AI training servers during processing, which frees up bandwidth, and it also lowers communication latency.
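For readers who want a sense of how the feature is used, the sketch below shows roughly how a Reduction Server pool might be attached to a Vertex AI custom training job with the google-cloud-aiplatform Python SDK. The project ID, bucket, machine types, replica counts and image URIs are illustrative assumptions rather than values from the announcement; the official Reduction Server container image should be taken from Google’s documentation.

```python
# Hedged sketch: a Vertex AI custom training job with an extra worker
# pool of Reduction Server replicas, using the google-cloud-aiplatform
# SDK. All names, machine types and image URIs below are illustrative
# assumptions; consult Google's docs for the official Reduction Server
# container image.
from google.cloud import aiplatform

TRAINER_IMAGE = "gcr.io/my-project/bert-trainer:latest"              # assumption
REDUCTION_SERVER_IMAGE = "<reduction-server-image-from-google-docs>"  # placeholder

aiplatform.init(
    project="my-project",             # assumption
    location="us-central1",
    staging_bucket="gs://my-bucket",  # assumption
)

worker_pool_specs = [
    {   # Pool 0: the chief training worker, which runs the training code.
        "machine_spec": {
            "machine_type": "n1-standard-16",
            "accelerator_type": "NVIDIA_TESLA_V100",
            "accelerator_count": 2,
        },
        "replica_count": 1,
        "container_spec": {"image_uri": TRAINER_IMAGE},
    },
    {   # Pool 1: additional GPU workers running the same training code.
        "machine_spec": {
            "machine_type": "n1-standard-16",
            "accelerator_type": "NVIDIA_TESLA_V100",
            "accelerator_count": 2,
        },
        "replica_count": 3,
        "container_spec": {"image_uri": TRAINER_IMAGE},
    },
    {   # Pool 2: CPU-only Reduction Server replicas that aggregate
        # gradients on behalf of the GPU workers.
        "machine_spec": {"machine_type": "n1-highcpu-16"},
        "replica_count": 4,
        "container_spec": {"image_uri": REDUCTION_SERVER_IMAGE},
    },
]

job = aiplatform.CustomJob(
    display_name="bert-with-reduction-server",
    worker_pool_specs=worker_pool_specs,
)
job.run()
```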

Google says Reduction Server has demonstrated impressive performance in internal benchmark tests. During an evaluation that involved the popular BERT neural network, training throughput increased by 75%. The search giant says that Reduction Server can also increase training throughput for other types of neural networks.

“This significantly reduces the training time required for large language workloads, like BERT, and further enables cost parity across different approaches,” explained Andrew Moore, vice president and general manager of Google Cloud’s Cloud AI & Industry Solutions unit. “In many mission-critical business scenarios, a shortened training cycle allows data scientists to train a model with higher predictive performance within the constraints of a deployment window.”

Streamlined AI development

Building AI software involves more than just training a neural network. Developers also have to collect the data that will be used to train the neural network, filter data errors and perform many other tasks. Machine learning teams create software workflows called pipelines to automate the different steps involved in AI development.
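As an illustration of what such a pipeline looks like in code, here is a minimal sketch using the open-source Kubeflow Pipelines SDK, the format that Vertex AI Pipelines can execute. It is not one of the new Tabular Workflows; the component names, dataset path and output path are invented for the example.

```python
# Minimal sketch of an ML pipeline with the Kubeflow Pipelines (KFP) v2
# SDK, the kind of workflow Vertex AI Pipelines runs. Component names
# and paths are invented for illustration; this is not one of the
# pre-packaged Tabular Workflows described in the article.
from kfp import compiler, dsl


@dsl.component(base_image="python:3.9")
def validate_data(source_uri: str) -> str:
    # In a real pipeline this step would check the dataset for errors.
    print(f"Validating {source_uri}")
    return source_uri


@dsl.component(base_image="python:3.9")
def train_model(dataset_uri: str) -> str:
    # Placeholder for the actual training logic.
    print(f"Training on {dataset_uri}")
    return "gs://my-bucket/model"  # assumption


@dsl.pipeline(name="toy-tabular-pipeline")
def pipeline(source_uri: str = "gs://my-bucket/sales.csv"):  # assumption
    validated = validate_data(source_uri=source_uri)
    train_model(dataset_uri=validated.output)


# Compile to a spec that can be submitted to Vertex AI Pipelines.
compiler.Compiler().compile(pipeline_func=pipeline, package_path="pipeline.json")
```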

Google Cloud’s Vertex AI platform is receiving a collection of pre-packaged pipelines for building neural networks. The pipelines are available through a new tool called Vertex AI Tabular Workflows that is now in preview.

Vertex AI Tabular Workflows can be used to build neural networks that process tabular data, or data organized into rows and columns. A sizable portion of companies’ business information is stored in rows and columns.

Each of the pipelines available through Vertex AI Tabular Workflows focuses on easing a different set of tasks. The Feature Selection Pipeline, for example, makes it easier to manage features, the data points that a neural network uses to make decisions. Some of the pipelines include new algorithms developed by Google Research.

“Tabular workflows are fully managed by the Vertex AI team, so users don’t need to worry about updates, dependencies and conflicts,” Google Cloud product manager Alex Martin detailed in a blog post. “They easily scale to large datasets, so teams don’t need to re-engineer infrastructure as workloads grow. Each workflow is paired with an optimal hardware configuration for best performance.”

Explainability and integrations

Google also introduced a number of other features for its Vertex AI platform at the Applied ML Summit today. A new capability called Example-based Explanations will make it easier to troubleshoot a neural network that is generating inaccurate results and identify the root cause. Additionally, Google is rolling out new integrations with Neo4j and Labelbox.

Neo4j is a popular graph database developed by the startup of the same name. Graph databases are optimized to store not only records such as sales logs but also information on how those records are connected to one another. Using the new integration that Google announced today, Vertex AI users will gain the ability to more easily work with data stored in Neo4j.
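As a hedged illustration of the kind of workflow the integration targets, the snippet below uses the official neo4j Python driver to pull connected records out of a graph and into a pandas DataFrame that could then serve as training data for a Vertex AI model. The connection details and Cypher query are invented for the example and are not part of Google’s announcement.

```python
# Hedged sketch: exporting connected records from Neo4j into a pandas
# DataFrame that could serve as tabular training data for a Vertex AI
# model. The URI, credentials and Cypher query are invented.
import pandas as pd
from neo4j import GraphDatabase

driver = GraphDatabase.driver(
    "neo4j+s://example.databases.neo4j.io",  # assumption
    auth=("neo4j", "password"),              # assumption
)

# Example query: customers, how many products they bought and total spend.
query = """
MATCH (c:Customer)-[b:BOUGHT]->(p:Product)
RETURN c.id AS customer_id, count(p) AS purchases, sum(b.amount) AS total_spend
"""

with driver.session() as session:
    result = session.run(query)
    df = pd.DataFrame([record.data() for record in result])

driver.close()
print(df.head())  # This DataFrame could now feed a tabular training job.
```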

Labelbox Inc., with which Google also announced a partnership today, is a San Francisco-based startup backed by more than $188 million in funding. It provides tools that simplify the process of creating AI training datasets. Google is rolling out an integration that enables Vertex AI customers to more easily use Labelbox’s tools to prepare training data for their machine learning projects.

Image: Google
