UPDATED 21:10 EDT / NOVEMBER 21 2019


Google’s Explainable AI service sheds light on how machine learning models make decisions

Google LLC has introduced a new “Explainable AI” service on its cloud platform that’s aimed at making the process by which machine learning models reach their decisions more transparent.

The idea is that this will help build greater trust in those models, Google said. That’s important because most existing models tend to be rather opaque. It’s just not clear how they reach their decisions.

Tracy Frey, director of strategy for Google Cloud AI, explained in a blog post today that Explainable AI is intended to improve the interpretability of machine learning models. She said the new service works by quantifying each data factor’s contribution to the outcome a model comes up with, helping users understand why it makes the decisions it does.
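To make the idea of quantifying each factor’s contribution concrete, here is a minimal, illustrative sketch in Python — not Google’s actual service or API, and the model and feature names are hypothetical. For a simple linear scoring model, each feature’s contribution relative to a baseline input can be computed exactly as the weight times the feature’s deviation from that baseline, and the contributions add up to the difference between the prediction and the baseline score:

# Illustrative feature attribution for a hypothetical linear credit-scoring model.
# This is NOT Google's Explainable AI API -- just a sketch of the underlying idea.
import numpy as np

# Hypothetical trained weights and bias for features: [income, debt_ratio, age]
weights = np.array([0.8, -1.5, 0.1])
bias = 0.2

def predict(x):
    """Model score for a single example."""
    return float(weights @ x + bias)

baseline = np.array([0.0, 0.0, 0.0])   # reference input, e.g. an all-zero or mean example
example = np.array([1.2, 0.4, 0.35])   # the instance we want explained

# For a linear model, feature i contributes w_i * (x_i - baseline_i),
# and the contributions sum exactly to (prediction - baseline score).
attributions = weights * (example - baseline)

print("prediction:", predict(example))
print("baseline:  ", predict(baseline))
for name, a in zip(["income", "debt_ratio", "age"], attributions):
    print(f"{name:>10}: {a:+.3f}")
print("sum of attributions:", attributions.sum())

Real-world models are rarely this simple, which is why production explanation methods approximate these per-feature contributions with techniques such as sampled Shapley values or integrated gradients rather than computing them in closed form.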

In other words, it won’t be explaining things in layman’s terms, but the analysis should still be useful for data scientists and developers who build the machine learning models in the first place.

Explainable AI has further limitations, as any interpretations it comes up with will depend on the nature of the machine learning model and the data used to train it.

“Any explanation method has limitations,” she wrote. “For one, AI Explanations reflect the patterns the model found in the data, but they don’t reveal any fundamental relationships in your data sample, population, or application. We’re striving to make the most straightforward, useful explanation methods available to our customers, while being transparent about its limitations.”

Nonetheless, Explainable AI could be important because accurate explanations of why a particular machine learning model reaches the conclusions it does would be useful for senior executives within an organization, who are ultimately responsible for those decisions. That’s especially true in the case of highly regulated industries where confidence is absolutely critical. For many organizations in that position, Google said, AI without any kind of interpretability is currently out of bounds.

In related news, Google also released what it calls “model cards,” which serve as documentation for the Face Detection and Object Detection features of its Cloud Vision application programming interface.

The model cards detail the characteristics of those pre-trained machine learning models and provide practical information about their performance and limitations. Google said the intention is to help developers make more informed decisions about which models to use and how to deploy them responsibly.
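As a rough illustration only — the field names and values below are invented for this sketch, not Google’s published model card format — a model card can be thought of as structured documentation that travels with the model:

# Hypothetical sketch of the kind of information a model card captures.
# Fields and figures are placeholders; see Google's published model cards
# for the real format and numbers.
face_detection_card = {
    "model_details": "Pre-trained face detection model exposed via the Cloud Vision API",
    "intended_use": "Detecting the presence and location of faces in images",
    "limitations": [
        "Accuracy can degrade on low-resolution or heavily occluded images",
        "Not intended for identity recognition",
    ],
    "performance": {"precision": None, "recall": None},  # placeholders, not real metrics
}

for section, content in face_detection_card.items():
    print(section, "->", content)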

Image: Google
