UPDATED 13:13 EDT / JUNE 29 2020

IBM donates machine learning tooling to enable ‘responsible’ AI

IBM Corp. is donating three open-source artificial intelligence development toolkits to LF AI, an organization within the Linux Foundation that maintains open-source machine learning tools.

The LF AI Technical Advisory Committee formally approved the move earlier this month. IBM is currently in the process of transferring the projects to the organization.

The three toolkits each serve a different purpose. One is designed to help developers remove bias from AI projects, while the other two focus on securing neural networks and making their output explainable so that their decisions can be verified.

The AI Fairness 360 Toolkit includes nearly a dozen algorithms for mitigating bias in different components of a machine learning project. The algorithms can help fix bias in the data an AI model processes, in the model itself and in the predictions it produces as output. IBM has also included a set of evaluation metrics for assessing bias in the training dataset used to hone a neural network's capabilities during development.
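
For readers who want a feel for how the toolkit is used, below is a minimal sketch with its Python package, aif360. The dataset, column names and group definitions are purely illustrative, and reweighing is just one of the pre-processing algorithms the project ships.

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # Toy data: "sex" stands in for a protected attribute, "hired" for the label.
    df = pd.DataFrame({
        "sex":   [0, 0, 0, 1, 1, 1, 1, 1],
        "score": [0.4, 0.6, 0.7, 0.5, 0.8, 0.9, 0.6, 0.7],
        "hired": [0, 0, 1, 0, 1, 1, 1, 1],
    })
    dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                                 protected_attribute_names=["sex"])

    privileged = [{"sex": 1}]
    unprivileged = [{"sex": 0}]

    # Measure bias in the raw data.
    metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                      privileged_groups=privileged)
    print("Disparate impact before:", metric.disparate_impact())

    # Apply one of the pre-processing mitigation algorithms (reweighing),
    # then re-check the metric on the transformed dataset.
    rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
    transformed = rw.fit_transform(dataset)
    metric_after = BinaryLabelDatasetMetric(transformed, unprivileged_groups=unprivileged,
                                            privileged_groups=privileged)
    print("Disparate impact after:", metric_after.disparate_impact())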

The second project IBM is entrusting to the Linux Foundation is called the Adversarial Robustness 360 Toolbox. It enables developers to make their AI models more resilient against so-called adversarial attacks, a type of cyberattack in which a hacker feeds maliciously crafted input to a neural network to trigger an error. The project includes algorithms for hardening models, as well as pre-packaged attacks developers can use to test their resilience.
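
As a rough illustration of the kind of test the toolbox supports, the sketch below uses its Python package, art, to craft adversarial examples against a simple scikit-learn model and compare accuracy before and after the attack. The data, model and attack settings are illustrative assumptions, not a recommended configuration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from art.estimators.classification import SklearnClassifier
    from art.attacks.evasion import FastGradientMethod

    # Toy data standing in for a real training set.
    rng = np.random.default_rng(0)
    X = rng.random((200, 4))
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

    model = LogisticRegression().fit(X, y)

    # Wrap the scikit-learn model so ART's attacks can query it.
    classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

    # Craft adversarial examples with the Fast Gradient Method.
    attack = FastGradientMethod(classifier, eps=0.2)
    X_adv = attack.generate(x=X)

    clean_acc = (model.predict(X) == y).mean()
    adv_acc = (model.predict(X_adv) == y).mean()
    print(f"Clean accuracy: {clean_acc:.2f}, accuracy under attack: {adv_acc:.2f}")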

The third toolkit, the AI Explainability 360 Toolkit, addresses the fact that explaining why an AI makes a given decision is often difficult because of neural networks' inherent complexity. Following the pattern of the other two projects, it includes pre-packaged algorithms for building explainability into a model, along with code examples, guides and documentation.
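
The snippet below is not the toolkit's own API but a minimal sketch of the same kind of local, post-hoc explanation its algorithms provide, using the standalone lime library together with scikit-learn. The data, feature names and class names are illustrative only.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    # Toy data standing in for a loan-approval model (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.random((300, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    explainer = LimeTabularExplainer(
        X,
        feature_names=["income", "tenure", "age"],
        class_names=["deny", "approve"],
        mode="classification",
    )

    # Explain a single prediction: which features pushed it toward "approve"?
    explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
    print(explanation.as_list())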

The ability to explain how an AI reached a certain conclusion is a prerequisite both to ensuring fairness and to verifying the security of a model. For developers working in these two areas, the AI Explainability 360 Toolkit could complement the other projects IBM is donating to the Linux Foundation.

“Donation of these projects to LFAI will further the mission of creating responsible AI-powered technologies and enable the larger community to come forward and co-create these tools under the governance of Linux Foundation,” IBM executives Todd Moore, Sriram Raghavan and Aleksandra Mojsilovic wrote in a blog post.

Raghavan, who heads the Research AI group inside IBM, appeared on SiliconANGLE Media's video studio theCUBE in May. He discussed IBM's machine learning strategy and how the company is making AI explainability a priority in its work.

“We think of our AI agenda in three pieces: Advancing, trusting and scaling AI,” Raghavan detailed. “Trusting is building AI which is trustworthy, is explainable. You can control and understand its behavior, make sense of it and all of the technology that goes with it.”

Photo: IBM Espana/Flickr
