Google debuts new AI features and tools to advance MLOps
Google LLC announced today a raft of artificial intelligence-related updates as part of Google Cloud Next: OnAir, a nine-week series of livestream events that runs through Sept. 15.
Today’s updates focus on machine learning and, in particular, the emerging MLOps discipline, which aims to put machine learning workflows into operation by fostering closer collaboration and better communication between data scientists and developers.
Google’s AI Platform is a suite of tools meant to enable MLOps. It lets machine learning developers, data scientists and data engineers take their ML ideas and develop them into real projects that can be deployed in production quickly and without excessive cost.
The AI Platform is centered on Kubeflow, an open-source platform developed by Google that enables developers to build portable ML pipelines that can run on-premises or in Google Cloud. The platform also provides access to Google’s TensorFlow machine learning framework, its BigQuery data store and its cloud-based Tensor Processing Units.
In a blog post, Craig Wiley, director of product management at Google Cloud AI, announced a host of new features for the AI Platform that are meant to simplify MLOps. They include a new, fully managed service for building ML pipelines that will be made available in preview in October. The new managed service enables customers to build ML pipelines using TensorFlow Extended’s pre-built components and templates, significantly reducing the effort required to deploy new models.
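The core idea behind such pipelines is reusable components chained into a directed sequence, where each step consumes the previous step's output. The sketch below illustrates that concept in plain Python; the Step and Pipeline classes are hypothetical stand-ins for illustration, not the TFX or AI Platform API.

```python
# Minimal sketch of the ML-pipeline concept: named, reusable steps chained
# together. NOTE: Step/Pipeline are illustrative classes, not a real API.

class Step:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

class Pipeline:
    def __init__(self, steps):
        self.steps = steps

    def run(self, data):
        # Each step consumes the previous step's output.
        for step in self.steps:
            data = step.fn(data)
        return data

pipeline = Pipeline([
    Step("ingest", lambda raw: [float(x) for x in raw]),
    Step("normalize", lambda xs: [x / max(xs) for x in xs]),
    Step("train_stub", lambda xs: sum(xs) / len(xs)),  # placeholder for training
])

result = pipeline.run(["1", "2", "4"])
```

Pre-built components, as described above, slot into the same pattern: instead of writing each step's function by hand, teams wire up components maintained by the platform.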
Google is also building on its Continuous Evaluation service, which samples prediction inputs and outputs from deployed ML models and analyzes their performance. A complementary Continuous Monitoring service, due by the end of the year, will track ML model performance in production and warn users if a particular model is going stale, or if there are outliers, skews or concept drifts that need to be fixed.
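To make the drift idea concrete, here is a hedged sketch of the kind of check such a monitoring service might run: compare live prediction inputs against a training-time baseline and flag drift when the shift exceeds a threshold. The statistic and threshold are illustrative choices, not Google's implementation.

```python
# Illustrative drift check: flag drift when the live input mean moves more
# than z_threshold baseline standard deviations from the baseline mean.
# This is a conceptual sketch, not the Continuous Monitoring service itself.

def mean_std(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, var ** 0.5

def drifted(baseline, live, z_threshold=3.0):
    m, s = mean_std(baseline)
    live_mean = sum(live) / len(live)
    if s == 0:
        return live_mean != m
    return abs(live_mean - m) / s > z_threshold

baseline = [10.0, 11.0, 9.0, 10.5, 9.5]   # e.g. training-time feature values
ok = drifted(baseline, [10.2, 9.8, 10.1])  # looks like the training data
bad = drifted(baseline, [25.0, 26.0, 24.0])  # inputs have shifted
```

A production system would compare full distributions (not just means) and run per-feature, but the alerting loop follows this shape.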
Wiley also introduced a new ML Metadata Management service for AI Platform that will enter preview in September. It can be used to track important artifacts and experiments and provide a curated ledger of actions and detailed model lineage. “This will enable customers to determine model provenance for any model trained on AI Platform for debugging, audit, or collaboration,” he said.
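Model lineage of the kind described can be pictured as a ledger in which each trained model records the dataset and code version that produced it, plus any parent model, so provenance can be walked back for debugging or audit. The sketch below is a hypothetical in-memory illustration, not the ML Metadata Management API.

```python
# Illustrative metadata ledger for model lineage. Class and method names
# are hypothetical, for explanation only.

class MetadataStore:
    def __init__(self):
        self._records = {}

    def log_model(self, model_id, dataset_id, code_version, parent=None):
        self._records[model_id] = {
            "dataset": dataset_id,
            "code_version": code_version,
            "parent": parent,  # e.g. a model this one was fine-tuned from
        }

    def lineage(self, model_id):
        # Walk parent links back to the original model.
        chain = []
        while model_id is not None:
            chain.append(model_id)
            model_id = self._records[model_id]["parent"]
        return chain

store = MetadataStore()
store.log_model("m1", "sales_2019", "abc123")
store.log_model("m2", "sales_2020", "def456", parent="m1")
# store.lineage("m2") walks back through m1
```

Given such a ledger, an auditor can answer "which data trained this model?" for any entry, which is the provenance question Wiley describes.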
AI Platform will also get a new Feature Store by the end of the year that will serve as a centralized, organization-wide repository of historical and new feature values that ML teams can reuse as desired. “This will boost productivity of users by eliminating redundant steps in feature engineering,” Wiley said.
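The feature-store idea reduces to a central, keyed repository of engineered feature values that multiple teams read instead of each re-deriving the same features. The minimal in-memory sketch below illustrates the concept; the API shown is hypothetical, not the AI Platform Feature Store interface.

```python
# Minimal in-memory sketch of a feature store: feature values keyed by
# entity and feature name, shared across teams. API is hypothetical.

class FeatureStore:
    def __init__(self):
        self._features = {}  # (entity_id, feature_name) -> value

    def put(self, entity_id, feature_name, value):
        self._features[(entity_id, feature_name)] = value

    def get(self, entity_id, feature_names):
        return [self._features[(entity_id, f)] for f in feature_names]

store = FeatureStore()
# One team computes and publishes the features once...
store.put("user_42", "avg_order_value", 57.20)
store.put("user_42", "days_since_signup", 180)
# ...and any team can assemble a feature vector without recomputing them.
vector = store.get("user_42", ["avg_order_value", "days_since_signup"])
```

The productivity claim in the quote comes from exactly this sharing: feature engineering happens once, and every downstream model reads the stored values.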
More updates to AI Platform were introduced by Andrew Moore, head of Google Cloud AI & Industry Solutions. He revealed that Vizier, a new service that autotunes the hyperparameters of ML models to get the best output, is now available in beta. In addition, he said AI Platform’s Notebooks service, which provides an integrated and secure JupyterLab environment for data scientists and developers to experiment, develop and deploy ML models into production, is now generally available.
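Hyperparameter autotuning of the kind Vizier offers is black-box optimization: propose trial parameter values, score each with an objective (say, validation loss), keep the best. Vizier uses far more sophisticated search strategies; the random-search loop below only illustrates the shape of the problem, and the function names and toy objective are assumptions for the example.

```python
import random

# Illustrative black-box hyperparameter search (random search), sketching
# the tuning loop a service like Vizier automates. Not the Vizier API.

def tune(objective, bounds, trials=50, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(trials):
        # Propose a trial point uniformly within each parameter's bounds.
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective: pretend validation loss is minimized at learning_rate = 0.1.
loss = lambda p: (p["learning_rate"] - 0.1) ** 2
best, score = tune(loss, {"learning_rate": (0.001, 1.0)})
```

In practice the objective is an expensive training run, which is why smarter proposal strategies (and a managed service to run trials in parallel) matter.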
Other new features coming to AI Platform include an update to Cloud AI Building Blocks, which provides access to commonly used models around AI-based vision, translation and speech via an application programming interface. That service will add AutoML as an integrated function in the workflow, which means more no-code and code-based options for building custom ML models faster, Moore said.
Moving on, Moore turned his attention to Google’s Contact Center AI platform, which is a suite of services that uses AI to automate enterprise contact center operations.
The main update here is a new version of Google’s virtual agent, called Dialogflow CX, designed for companies with large contact center operations. The new agent can support complex, multiturn conversations and is “truly omnichannel” in that it can be built just once and deployed anywhere, Moore said.
Also new to Contact Center AI is Agent Assist for Chat, which is a new module that provides continuous support to human agents by identifying the customer’s intent and providing real-time, step-by-step assistance. Meanwhile, Custom Voice, currently available in beta, is a new capability for the Text-to-Speech API that enables companies to create a “unique voice to represent your brand across all your customer touchpoints,” instead of a common voice used by multiple organizations.
“By taking advantage of the custom Text-to-Speech model created with Custom Voice, you can define and choose the voice profile that suits your business and adjust to changes without scheduling studio time with voice actors to record new phrases,” Moore said.
Google also squeezed in some specific new AI services, including Lending Document AI, which is designed for the mortgage industry and helps speed up loan applications by automatically processing borrowers’ income and asset documents. Meanwhile, Procure-to-Pay Document AI is a new service that companies can use to automate procurement cycles, while the new Media Translation API provides real-time speech translation from audio data.
Lending Document AI is now available in alpha, while Procure-to-Pay Document AI and the Media Translation API are in beta.