UPDATED 19:24 EDT / MAY 20 2021


Fiddler Labs aims to remove artificial intelligence bias inequities through ‘explainability in AI’

Understanding how an artificial intelligence system generates its predictions is crucial for avoiding bias, says a startup that has been developing what it calls “explainability in AI.”

When AI approves or rejects a person’s loan application, for example, that decision needs to be analyzed for fairness.

“You can actually create a dystopian world where some people get really great decisions from your systems and where some people are left out,” said Krishna Gade (pictured), co-founder and chief executive officer of Fiddler Labs Inc.

Modeling data can be old or incorrectly sourced, and that, among other things, causes problems. Gade believes the black-box nature of an AI or machine-learning engine, where one can’t simply open up the code and read it as with traditional software, demands special tools.

In anticipation of the AWS Startup Showcase: The Next Big Thing in AI, Security & Life Sciences event — set to kick off on June 16 — John Walls, host of theCUBE, SiliconANGLE Media’s livestreaming studio, spoke with Gade for a special CUBE Conversation on how Fiddler intends to remove inequities from AI. (* Disclosure below.)

‘A dystopian world’

The main issue is that an AI engine is constantly changing based on what it has learned; it’s designed to do that. Because of this, AI is a stochastic, or probabilistic, system; in other words, there’s some randomness in it.

“Its performance, its predictions can change over time based on the data it is receiving,” Gade said.

Therefore, probing, interrogating and figuring out how predictions are being made has to take place to counter that non-deterministic nature. That’s unlike traditional software, where starting from the same point produces the same output each time; AI doesn’t do that.
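One simple illustration of that kind of probing, offered here as a hedged sketch rather than Fiddler’s own tooling (the interview doesn’t detail its internals), is model-agnostic permutation importance: shuffle each input feature and see how much a model’s accuracy drops. The loan-style data and scikit-learn model below are hypothetical stand-ins.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for loan-application data (columns could be income,
# credit score and so on); the labels stand in for approve/reject decisions.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much accuracy drops:
# the features whose shuffling hurts most are the ones driving predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: accuracy drop when shuffled = {score:.3f}")
```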

“When do you trust your predictions? How do you know if the model is actually performing in the same manner that you trained it?” Gade asked. You’ve got to continuously monitor the algorithm, he explained. That way you know whether accuracy is up or down and, importantly, whether the model is making bad predictions, which can be devastating for individuals, such as job applicants whose resumes are rejected by a poor screening algorithm.
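As a rough sketch of what such continuous monitoring can look like, assuming hypothetical prediction logs rather than Fiddler’s actual API, the snippet below tracks accuracy per day and flags days where it dips below a chosen threshold; the column names and threshold are illustrative assumptions.

```python
import pandas as pd

def monitor_accuracy(log: pd.DataFrame, threshold: float = 0.85) -> pd.DataFrame:
    """log has columns 'date', 'prediction' and 'outcome' (both 0/1)."""
    daily = (
        log.assign(correct=log["prediction"] == log["outcome"])
           .groupby("date")["correct"]
           .mean()
           .rename("accuracy")
           .reset_index()
    )
    # Flag any window where accuracy falls below the acceptable threshold.
    daily["alert"] = daily["accuracy"] < threshold
    return daily

# Example: a screening model whose quality dips on the second day.
log = pd.DataFrame({
    "date":       ["2021-05-18"] * 3 + ["2021-05-19"] * 3,
    "prediction": [1, 0, 1, 1, 1, 0],
    "outcome":    [1, 0, 1, 0, 0, 0],
})
print(monitor_accuracy(log))
```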

The AI can then be retrained, as necessary, before the unfairness kicks in. Not only is that good for the end user, but brand reputation is protected, along with regulatory compliance. Gade’s company, which has partnered with AWS, does this. Amazon is looking to create an ecosystem of responsible AI technologies and has invested in the Fiddler business.

“Algorithms behind the scenes are processing our requests and delivering the experiences that we have. Now, increasingly, these algorithms are becoming AI-based algorithms,” Gade explained. So different cultures, ethnicities and genders all need protecting from bias that could creep in and cause great suffering, he added.

Data drifts

AI systems clearly deteriorate in performance over time, Gade pointed out. A recommendation system on a website is a good illustration: Consumer behavior before and after COVID, for example, has been completely different. Gade cited the infamous run on toilet paper when the lockdown first hit; that overnight change in prepping behavior wouldn’t have been spotted by an AI system. Thus, training AI systems, such as a retail inventory system, on old data doesn’t work. This is called data drift.

“The amount of stuff that people are buying in terms of toilet paper has completely shifted. And so their model may not be predicting as accurately as it would,” Gade said.
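One common way to quantify that kind of shift, shown here as a hedged sketch rather than Fiddler’s actual method, is the Population Stability Index, which compares the distribution a model was trained on with what it sees in production. The “units per order” framing and the synthetic data below are illustrative assumptions.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between training ('expected') and live ('actual') data."""
    lo = min(expected.min(), actual.min())
    hi = max(expected.max(), actual.max())
    edges = np.linspace(lo, hi, bins + 1)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty bins so the log term stays finite.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
pre_covid = rng.normal(loc=2.0, scale=0.5, size=5000)  # e.g., toilet-paper units per order
lockdown = rng.normal(loc=6.0, scale=2.0, size=5000)   # panic buying shifts the distribution

print(f"PSI = {psi(pre_covid, lockdown):.2f}")
```

A PSI near zero means the live data still looks like the training data; values above roughly 0.25 are commonly read as significant drift and a cue to retrain.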

Grocery delivery company Instacart, for one, admitted its prediction models were off by 65% to 90% during data shifts in the COVID period, according to Gade.

“You can catch these things earlier, and then, you know, save your business from losing,” he added, because bad recommendations adversely affect sales.

Ethical considerations of AI-driven algorithms

Data has to be collected from the right sources, too. Gade points to the 2019 Apple and Goldman Sachs credit card debacle as a cautionary example.

“In the same household, the husband and wife were getting 10 times difference credit limit between a male and a female,” he stated, adding that they probably had similar salary ranges and credit scores. “Your customers will complain about it; you would lose your brand reputation.”

Testing algorithms for bias before deploying them is the answer, according to Gade. “These are ethical practices. These are the responsible ways of building your AI,” he said.
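A minimal sketch of such a pre-deployment check, assuming hypothetical decision data and the common “four-fifths” rule of thumb rather than any specific Fiddler feature, is to compare favorable-outcome rates across groups before the model goes live.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, approved_col: str) -> float:
    """Ratio of the lowest group approval rate to the highest; 1.0 means parity."""
    rates = df.groupby(group_col)[approved_col].mean()
    return float(rates.min() / rates.max())

# Hypothetical credit decisions, loosely echoing the Apple/Goldman example.
decisions = pd.DataFrame({
    "gender":   ["male"] * 100 + ["female"] * 100,
    "approved": [1] * 80 + [0] * 20 + [1] * 55 + [0] * 45,
})

ratio = disparate_impact(decisions, "gender", "approved")
print(f"disparate impact ratio = {ratio:.2f}")  # below ~0.8 warrants investigation
```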

AI in the criminal justice system and in clinical diagnosis raises even more ethical considerations: “Instead of actually adding value to the customers [with AI], you may be actually hurting them,” he concluded.

Watch the complete video interview below, be sure to check out more of SiliconANGLE’s and theCUBE’s CUBE Conversations, and tune in to theCUBE’s live coverage of the AWS Startup Showcase: The Next Big Thing in AI, Security & Life Sciences event on June 16. (*Disclosure: Fiddler Labs Inc. sponsored this CUBE Conversation. Neither Fiddler Labs nor other sponsors have editorial control over the content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
