UPDATED 16:00 EDT / JULY 20 2018

EMERGING TECH

Decoding the black box of AI: lessons in accountability

It is one thing to imbue machines with intelligence. It’s another to keep these powerful computing systems honest. This technological moment could be tainted if artificial intelligence systems aren’t kept accountable, and right now AI can feel like a black box: Even AI’s creators do not fully understand how these intelligent systems reach decisions.

“Ultimately, we want to ensure that we’re building models responsibly so that the models are in line with our mission as a business and they also don’t do any unintended harm,” said Ilana Golbin (pictured), manager of the artificial intelligence accelerator at PricewaterhouseCoopers LLP. “Because of that we need to build explainability into our models and really understand what they’re doing.”

Golbin spoke with Rebecca Knight and Peter Burris, co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the MIT CDOIQ Symposium in Cambridge, Massachusetts. They discussed artificial intelligence, governance of AI and how to gain the trust of customers.

Transparency and governance with AI

Simply adding a disclaimer or hoping your customers stay up to date with the latest technology news is not going to cut it, according to Golbin. In order for companies utilizing AI to gain their customers’ trust, businesses need to develop a strategy. Enterprises need to explain how the AI comes to the decisions it does, and they need to work to disseminate that information to the end user. For that, a strategy is imperative.

There are two areas businesses should consider given their use case for AI, according to Golbin. The first is “criticality,” meaning the extent to which someone could be harmed by the technology. The second is vulnerability, meaning how willing a customer is to use the technology and accept the decisions it makes. Businesses also need to establish a chain of responsibility for the decisions the AI makes.

“One of the reasons why having a central AI strategy is really important is that you can also define a central controls framework — some type of centralized assurance, an auditing process that’s mandated from a high level of the organization that everybody will follow,” Golbin said. “That’s the best way to get AI widely adopted.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the MIT CDOIQ Symposium:

Photo: SiliconANGLE
