IBM donates machine learning tooling to enable ‘responsible’ AI
IBM Corp. is donating three open-source artificial intelligence development toolkits to LF AI, an organization within the Linux Foundation that maintains open-source machine learning tools.
The LF AI Technical Advisory Committee formally approved the move earlier this month. IBM is currently in the process of transferring the projects to the organization.
The three toolkits each serve a different purpose. One is designed to help developers remove bias from AI projects, while the other two focus on securing neural networks and making their output explainable so that calculations can be verified.
The AI Fairness 360 Toolkit includes nearly a dozen algorithms for mitigating bias in different components of a machine learning project. The algorithms can help fix bias in the data an AI model processes, in the model itself and in the predictions it produces as output. IBM has also added a set of evaluation metrics for assessing the training dataset used to hone a neural network's capabilities during development.
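One of the simplest metrics in this family is disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. The sketch below illustrates the idea in plain Python; the function names and toy data are illustrative, not the toolkit's actual API.

```python
# Illustrative sketch of the disparate-impact fairness metric
# (names and data are made up; not AI Fairness 360's real API).

def favorable_rate(outcomes):
    """Fraction of samples that received the favorable outcome (1)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged_outcomes, privileged_outcomes):
    """Ratio of favorable rates between groups. Values well below 1.0
    suggest bias against the unprivileged group; 0.8 is a commonly
    cited threshold."""
    return favorable_rate(unprivileged_outcomes) / favorable_rate(privileged_outcomes)

# Toy loan-approval data: 1 = approved, 0 = denied
unprivileged = [1, 0, 0, 0, 1, 0, 0, 0]   # 25% approval rate
privileged   = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approval rate

print(disparate_impact(unprivileged, privileged))  # prints 0.333...
```

A mitigation algorithm would then adjust the data, the model or its predictions until a metric like this moves back toward 1.0.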
The second project IBM is entrusting to the Linux Foundation is called the Adversarial Robustness 360 Toolbox. It enables developers to make their AI models more resilient against so-called adversarial attacks, a type of cyberattack wherein a hacker injects malicious input into a neural network to trigger an error. The project includes algorithms for hardening models and pre-packaged attacks developers can employ to test their resilience.
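A minimal illustration of an evasion attack in the spirit of the fast gradient sign method: nudge each input feature in the direction that pushes the model toward the wrong answer. The linear "model" and data below are invented for illustration; the real toolbox wraps trained neural networks with a far richer attack catalogue.

```python
# Toy fast-gradient-sign-style attack on a linear classifier.
# Everything here is illustrative, not the toolbox's real API.

def sign(x):
    return (x > 0) - (x < 0)

def predict(weights, features):
    """Linear score; a positive score means class 1."""
    return sum(w * f for w, f in zip(weights, features))

def fgsm_perturb(weights, features, label, eps=0.5):
    """For a linear model the loss gradient w.r.t. the input is
    proportional to the weights, so step each feature by eps in the
    gradient-sign direction that hurts the true label."""
    direction = 1 if label == 0 else -1  # push score toward the wrong class
    return [f + direction * eps * sign(w) for f, w in zip(features, weights)]

weights = [2.0, -1.0, 0.5]
x = [1.0, 1.0, 1.0]                       # score = 1.5, predicted class 1
x_adv = fgsm_perturb(weights, x, label=1)  # score = -0.25, prediction flips
print(predict(weights, x), predict(weights, x_adv))
```

Hardening algorithms work against exactly this kind of pressure, for example by training the model on perturbed inputs so small nudges no longer flip its output.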
The third toolkit, the AI Explainability 360 Toolkit, is aimed at addressing the fact that explaining why an AI makes a given decision is often difficult because of neural networks’ inherent complexity. Following the pattern of the other two projects, it includes pre-packaged algorithms for building explainability into a model. There are also code examples, guides and documentation.
The ability to explain how an AI reached a certain conclusion is a prerequisite both to ensuring fairness and to verifying the security of a model. For developers working in these two areas, the AI Explainability 360 Toolkit could complement the other projects IBM is donating to the Linux Foundation.
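The simplest form of explanation such tools aim at is feature attribution: breaking a prediction down into per-feature contributions. For a linear model that decomposition is exact, as the sketch below shows; the function and feature names are hypothetical and do not reflect the toolkit's API.

```python
# Illustrative feature-attribution explanation for a linear model.
# For a linear score, each feature's contribution is simply
# weight * value, so the explanation is exact.

def explain_linear(weights, features, names):
    """Return per-feature contributions to the score, largest-magnitude first."""
    contribs = {n: w * f for n, w, f in zip(names, weights, features)}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

weights = [1.5, -2.0, 0.25]
x = [2.0, 1.0, 4.0]
for name, contrib in explain_linear(weights, x, ["income", "debt", "age"]):
    print(f"{name}: {contrib:+.2f}")
# income: +3.00
# debt:   -2.00
# age:    +1.00
```

Deep neural networks admit no such exact decomposition, which is why the toolkit ships multiple approximate explanation algorithms rather than a single formula.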
“Donation of these projects to LFAI will further the mission of creating responsible AI-powered technologies and enable the larger community to come forward and co-create these tools under the governance of Linux Foundation,” IBM executives Todd Moore, Sriram Raghavan and Aleksandra Mojsilovic wrote in a blog post.
Raghavan, who heads the Research AI group inside IBM, appeared on SiliconANGLE Media's video studio theCUBE in May. He discussed IBM's machine learning strategy and how the company is making AI explainability a priority in its work.
“We think of our AI agenda in three pieces: Advancing, trusting and scaling AI,” Raghavan detailed. “Trusting is building AI which is trustworthy, is explainable. You can control and understand its behavior, make sense of it and all of the technology that goes with it.”
Photo: IBM Espana/Flickr