UPDATED 13:25 EDT / JUNE 18 2021

AI

Insight Partners and Amazon back $32M round for explainable AI startup Fiddler Labs

Fiddler Labs Inc., a startup helping enterprises understand the inner workings of their artificial intelligence models, has raised a $32 million funding round that saw the participation of Insight Partners, Amazon.com Inc.’s Alexa Fund and other institutional investors.

The Palo Alto, California-based startup announced the investment on Thursday. Fiddler says that it has raised $47 million to date.

Organizations are applying AI to a growing number of tasks ranging from automatic invoice processing to piloting self-driving cars. As a result, they need to verify that the decisions made by their machine learning models are correct and, when they’re not, find and fix the cause of the errors. That’s easier said than done because machine learning models are often so complex that it’s practically impossible to trace exactly how a model reached a given result.

Fiddler Labs has developed a software platform that promises to give organizations visibility into their AI software. The platform uses mathematical methods to untangle the complexity of machine learning models and reveal why they made a given decision.

The technical challenges that normally make it difficult to understand an AI’s inner workings stem from the way neural networks are architected. A neural network processes data by breaking it into smaller pieces and processing those pieces with a large number of relatively simple computational units known as artificial neurons. The largest neural networks have more than 100 billion parameters, which makes it impractical to map out each step of how an AI turns raw data into results.
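
To make the scale problem concrete, here is a minimal sketch of what a single artificial neuron computes, assuming the standard weighted-sum-plus-activation formulation; the weights and inputs below are illustrative:

```python
import numpy as np

def artificial_neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """One 'neuron': a weighted sum of its inputs passed through a nonlinearity."""
    weighted_sum = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-weighted_sum))  # sigmoid activation

# Three input features flowing into one neuron; real networks chain millions
# of these units into layers, which is where the opacity comes from.
x = np.array([0.2, 0.7, 0.1])
w = np.array([0.5, -1.3, 2.0])
print(artificial_neuron(x, w, bias=0.1))
```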

What makes the task even more challenging is that AI models change their configuration over time. A neural network tasked with organizing business documents by category, for example, learns to sort those documents faster and more accurately as it gains experience.

Neural networks apply the lessons they learn by making modifications to their artificial neurons. In other words, deciphering how an AI made a given decision requires understanding billions of artificial neurons that not only perform a large number of operations but also change how they perform those operations over time.
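
A rough sketch of that learning step on the same kind of toy neuron: one pass of plain gradient descent shifts the weights, changing how the unit will process future inputs. The squared-error loss and learning rate here are assumptions chosen for illustration:

```python
import numpy as np

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.2, 0.7, 0.1])   # input features
y = 1.0                          # desired output
w = np.array([0.5, -1.3, 2.0])  # the neuron's current configuration
b, lr = 0.1, 0.5                 # bias and learning rate

pred = sigmoid(np.dot(w, x) + b)
# Gradient of the loss 0.5 * (pred - y)**2 with respect to the weights,
# obtained via the chain rule through the sigmoid.
delta = (pred - y) * pred * (1.0 - pred)
w -= lr * delta * x              # the weights change with experience
b -= lr * delta
print(w, b)
```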

Fiddler Labs’ AI explainability platform tackles these challenges using a combination of techniques developed by its own experts and by the broader research community. The platform generates visual reports that show developers what led an AI to draw a particular conclusion, including the factors it took into account to produce the result.

“Fiddler helps them create these reports, keep all of these reports in one place, and then once the model is deployed, it basically can help them monitor these models continuously,” explained Fiddler Chief Executive Officer Krishna Gade (pictured, left) in a recent interview on theCUBE, SiliconANGLE Media’s livestreaming studio. He was joined by Fiddler Labs co-founder and Chief Product Officer Amit Paka.

One of the methods Fiddler Labs uses to produce the reports is known as integrated gradients. With the integrated gradients method, developers give an AI a piece of data to process, examine the processing results and then provide the algorithm with a slightly modified version of the same data. For example, developers might task a translation AI with translating an article that includes a title and then have it process the same article but without the title.

Through this process, it’s possible to determine what factors a neural network considers when making decisions. If the AI produces a better translation for the version of the article that includes a title, the developers can infer that an article’s title is one of the details the AI analyzes to help it understand the meaning of text.
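
For readers who want the mechanics, below is a minimal sketch of integrated gradients on a toy logistic-regression model, not Fiddler’s implementation. The idea is to average the model’s gradients along a straight path from a “blank” baseline (akin to the article with its title removed) to the real input, then scale by each feature’s distance from that baseline; the model, weights and baseline are illustrative assumptions:

```python
import numpy as np

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

# Toy differentiable model: logistic regression, so the gradient is analytic.
w = np.array([0.8, -0.4, 1.5])
b = -0.2

def model(v: np.ndarray) -> float:
    return sigmoid(np.dot(w, v) + b)

def model_grad(v: np.ndarray) -> np.ndarray:
    p = model(v)
    return p * (1.0 - p) * w  # dF/dv for a sigmoid over a linear score

def integrated_gradients(x, baseline, steps=50):
    """Approximate the path integral of gradients from baseline to x."""
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.array([model_grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, 2.0, 0.5])          # the real input
baseline = np.zeros_like(x)             # the "information removed" reference point
attributions = integrated_gradients(x, baseline)
print(attributions)                     # one contribution score per feature
# Sanity check: attributions should roughly sum to model(x) - model(baseline).
print(attributions.sum(), model(x) - model(baseline))
```

A convenient property of the technique is checked in the last line: the per-feature scores approximately add up to the total change in the model’s output between the baseline and the real input.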

Another method Fiddler uses to explain AI results is based on concepts from the field of game theory. In certain games that involve multiple players, it’s possible to mathematically describe how much each player contributed to the outcome. Fiddler has carried the concept over to machine learning: It treats the factors that an AI considers when making decisions as the “players” and then calculates how much each factor influences the AI’s processing results.
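
The game-theoretic approach the article describes matches what the research literature calls Shapley values: each feature’s score is its average marginal contribution across every possible coalition of the other features. A minimal sketch, computed exactly for a three-feature toy model (exact enumeration is feasible only because the feature count is tiny; the model and baseline are illustrative assumptions):

```python
import numpy as np
from itertools import combinations
from math import factorial

w = np.array([0.8, -0.4, 1.5])          # toy model weights
x = np.array([1.0, 2.0, 0.5])           # the input being explained
baseline = np.zeros_like(x)              # values used for "absent" players

def model(v: np.ndarray) -> float:
    return 1.0 / (1.0 + np.exp(-(np.dot(w, v) - 0.2)))

def coalition_value(members: tuple) -> float:
    """Evaluate the model with absent features replaced by baseline values."""
    v = baseline.copy()
    for i in members:
        v[i] = x[i]
    return model(v)

def shapley_values(n: int) -> np.ndarray:
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            for coalition in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (coalition_value(coalition + (i,)) - coalition_value(coalition))
    return phi

phi = shapley_values(3)
print(phi)                                    # each feature's share of the outcome
print(phi.sum(), model(x) - model(baseline))  # the shares sum to the total change
```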

Fiddler has paired its core explainability features with other capabilities that promise to make deploying machine learning models in the enterprise easier. The startup’s platform can help developers identify when an AI’s accuracy drops in production. It also catches incidents where the algorithm is given erroneous data to process. Additionally, Fiddler says, the platform makes it easier to comply with regulations that require companies to prove the accuracy of results generated by their AI models.
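
As a sketch of what such monitoring can look like in practice (a generic technique, not necessarily Fiddler’s implementation), a two-sample Kolmogorov-Smirnov test can flag when a feature’s live distribution drifts away from what the model saw during training:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Distribution of a feature at training time vs. what the live model now sees;
# the synthetic shift below stands in for real-world data drift.
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)

stat, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print(f"Possible data drift (KS statistic {stat:.3f}); review model inputs.")
```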

“After launching Fiddler’s Explainable AI Platform, we’ve expanded to encompass every stage of the AI lifecycle, from development, to validation, to production,” Gade said in a statement. “With the latest funding, we will bring our Model Performance Management solution to even more teams, explore new solutions to fight issues like bias and data drift, and continue driving customer education around Responsible AI.”

Fiddler says that its customers include one of the largest financial institutions in the U.S. and a “leading” e-commerce platform.

Photo: SiliconANGLE
