UPDATED 17:30 EDT / DECEMBER 17 2021

THOUGHT LEADERSHIP

Fiddler Labs CEO Krishna Gade revamps AI models for trust and transparency

We recently spoke with the chief executives of companies that participated in the AWS Startup Showcase: The Next Big Things in AI, Security & Life Sciences to find out what drives them and learn about their visions for the future. This feature is part of theCUBE’s ongoing CEO Startup Spotlight series.

Artificial intelligence is everywhere, but its benefits come paired with equally powerful challenges around data quality, accuracy, privacy, protection and bias. Many large enterprises, including Facebook, Twitter and Google, have recently been in the spotlight for unfair decisions produced by their sophisticated but biased algorithms.

A recent Gartner Inc. report declared responsible AI one of four trends that should govern innovation in the near term because of the need to address these problems. Concerns around AI and its models have also drawn the attention of governments around the world, which are planning regulations to ensure fairness and accountability.

Krishna Gade, founder and chief executive officer of startup Fiddler Labs Inc., stands out in this debate, since he runs a company created to solve the “AI black box” challenge: understanding the inner workings of smart models to repair existing issues and avoid new ones. At the same time, he has taken part in advisory groups that work with government officials to shape future AI regulations.

“Maybe two, three years ago people didn’t know what responsible AI or ethical AI meant or what it means to create transparency in AI by holding companies accountable for it,” Gade said. “But now there is a lot more awareness that is spreading; there are documentaries that got made, like ‘Coded Bias’ or ‘The Social Dilemma,’ that have shown some of the negative effects of AI systems and products that did not work very well.”

The main complication is that as AI use grows, with an annual growth forecast of 40% from 2021 to 2028, it is hard for businesses to keep track of what is inside their myriad models and whether those models are performing well. Unlike ordinary software code, AI models cannot simply be opened up and read to understand how they work, leaving room for trouble, according to Gade.

Fiddler’s platform for model explainability, monitoring and bias detection sits at the heart of the machine learning workflow. It keeps track of every part of an enterprise’s models, providing a centralized way to manage all of that information in one place and build the next generation of AI, and its compliance and operational monitoring tools track what is happening with models in production.
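To make the idea of operational model monitoring concrete, here is a minimal illustrative sketch, not Fiddler’s actual product or API: one common building block of monitoring models in production is detecting data drift, for example with the Population Stability Index, which compares a feature’s distribution in live traffic against the training baseline. The function name, bin count and thresholds below are illustrative assumptions.

```python
import math

def psi(baseline, production, bins=10):
    """Population Stability Index between two samples of a numeric feature.

    A common rule of thumb: PSI < 0.1 suggests little drift, 0.1-0.25
    moderate drift, and > 0.25 significant drift worth investigating.
    """
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch production values above the baseline max

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
            else:
                counts[0] += 1  # below the baseline min: lump into first bin
        # a tiny epsilon avoids dividing by or taking the log of zero
        return [max(c / len(sample), 1e-6) for c in counts]

    b, p = fractions(baseline), fractions(production)
    return sum((pi - bi) * math.log(pi / bi) for bi, pi in zip(b, p))

# A feature whose production distribution has shifted shows a high PSI:
baseline = [i / 100 for i in range(100)]       # uniform on [0, 1)
shifted = [0.5 + i / 200 for i in range(100)]  # uniform on [0.5, 1)
print(psi(baseline, baseline))  # identical distributions: PSI of 0
print(psi(baseline, shifted))   # drifted distribution: PSI well above 0.25
```

A monitoring platform computes metrics like this continuously, per feature and per model, and alerts when they cross a threshold, which is what gives an enterprise the centralized visibility described above.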

A first-hand understanding of the pain points

Although Fiddler is only three years old, Gade has been in the tech industry and around data and models for over a decade, working as an engineer on search and recommendation systems at Bing, Twitter, Pinterest and Facebook. His main task has always been to work with large data sets, extracting intelligence and insights to help build consumer-friendly products.

Prior to founding Fiddler, Gade was an engineering manager on the Facebook News Feed for nearly two years, where he got a first-hand look at how difficult it can be for enterprises to understand their own machine learning models.

“As those systems became more and more mature, and more and more complex, it was very hard to understand how they work,” he said. “For example, you got questions like ‘Why am I seeing this story in my feed? Why is this news story going viral? Is this actually real news or fake news?’ All of these questions are very difficult to answer.”

As Gade and his team built a platform to help answer these questions and increase transparency in Facebook’s models, he mapped out the problem he would later solve with Fiddler.

“I felt like until that point there were a lot of AI solutions around helping developers build AI as fast as possible and as accurately as possible,” he said. “But nothing was available around what happens when you deploy your AI into production, how much can you trust it, how can you get visibility into it, whether it is about how it’s affecting your business or whether it’s creating any potential bias and making decisions that might be affecting your users?”

Allowing algorithms to make the right decisions

More than making AI models explainable, Gade’s mission is to make them trustworthy. A book lover, he explained the importance of trust by citing “Sapiens” by Yuval Noah Harari.

“It talks about two things that separate humans from other animals: No. 1 is the ability to deal with ambiguity, and No. 2 is the ability to scale trust among millions and millions of people,” he said. “If you put together more than 100 gorillas, there’s likely to be a fight; they cannot trust each other, whereas 100,000 humans can go to a football stadium and watch a soccer game … and, without having any issues, millions of people can elect the president.”

This trust must extend to the machines that increasingly enter our lives and take over decision-making. The idea is that people can trust the algorithms that will decide whether they can take out a loan, make a purchase or get an accurate medical diagnosis, for example.

“We have this inherent need to develop trust in the machine. How is this machine working? Why is it making such decisions?” Gade explained. “And until we close this trust gap, it will be a very, very difficult time for humans to rely on machines.”

The numerous recent cases of companies accused of unfair decisions arising from their sophisticated algorithms hint at the magnitude of the problem. Facebook removed facial recognition from its products, and Twitter largely abandoned an image-cropping algorithm, because the systems were likely to be biased. And Zillow, struggling to manage the risk from its property price estimation algorithms, shut down Zillow Offers, a business responsible for the majority of the company’s revenue.

Even with enough money and technical teams, these big tech companies could not solve the problem to the satisfaction of customers or authorities. What will it be like for businesses outside the technology field that may not have the technical talent, time or resources to work on the problem?

“We are basically trying to build a generic platform so that we can work to help banks… insurance companies, healthcare companies, manufacturing companies, recruiting companies, and many others not facing issues,” Gade said. “Definitely, our experience helps to really have a solid grounding, but what also sets us apart is that we did understand the pain points much earlier than maybe other companies.”

Everything points to accelerated growth

Like other startup leaders, Gade found a business opportunity and grabbed it. And everything is very much in his favor. Concerns about the potential misuse or unintended consequences of AI have spurred government efforts to examine and develop standards and policies.

Following its 2020 AI white paper, titled “On Artificial Intelligence — A European Approach to Excellence and Trust,” the European Commission launched a package of proposed rules and actions, with a focus on trust and transparency, aimed at transforming Europe into the global hub for trustworthy AI.

Meanwhile, in the U.S., things are moving forward too. Several agencies are studying responsible AI policies, notably the Federal Trade Commission, the Food and Drug Administration and financial services authorities. As someone who knows the deeper issues surrounding AI, Gade is making his own contributions: He has been to Washington, D.C., to meet with regulators and to work with the nonprofit innovation center FinRegLab Inc. and other experts on research into how tools like Fiddler can help create responsible AI in finance and manage AI risks.

“AI regulations will become more and more common, and I think they should be,” Gade said. “Some people are still not fully aware of it or they want to wait and watch. They don’t want to worry until it becomes a problem for them, and this is where regulation becomes really important.”

The future of Fiddler

Encouraged by the opportunities ahead, Gade is optimistic about Fiddler’s growth. He hopes to triple the company over the next year, from customer base to revenue to team size.

“We are now a 52-person company,” he said. “We have grown from three people in 2018 to 52 people in 2021, and this hypergrowth is likely to continue. This is definitely the largest team that I have ever managed.”

That’s not bad for a first-time entrepreneur who, not so long ago, didn’t even imagine being in Silicon Valley.

“During my undergraduate studies in India, I used to admire technology leaders like the founder of Hotmail, which I read a book about, but founding a company wasn’t something I even had any aspirations for,” Gade said. “It’s almost like a dream.”

Photo: Krishna Gade
