IBM donated three open source projects for responsible AI to the LF AI Foundation, part of the Linux Foundation.
IBM's donation of its AI Fairness 360, Adversarial Robustness 360 and AI Explainability 360 toolkits to the LF AI Foundation, made public Monday, further opens the IBM AI ethics projects to the community, placing control of them with the Linux Foundation.
The move comes a week after Databricks donated its open source machine learning tool MLflow to the Linux Foundation.
The Linux Foundation is a nonprofit technology consortium dedicated to protecting and advancing Linux, an open source operating system. The group provides support to numerous open source projects and communities. As part of the Linux Foundation, the LF AI Foundation supports open source projects in AI, machine learning and deep learning.
Open source projects, while open to the community, are still controlled by an individual or vendor. That individual or vendor can limit who works on the project, the direction the project takes and how quickly it updates.
With this latest move, IBM cedes control of the three AI toolkits to the vendor-neutral LF AI Foundation. The Linux Foundation's open governance model, in theory, eliminates single-vendor control of open source projects, broadens the community working on them and helps accelerate their growth.
"Our goal as a foundation is to support these projects to grow their user and contributor base and help them sustain themselves as communities of developers coming from various organizations and companies under a neutral and vendor-free environment," said Ibrahim Haddad, LF AI Foundation executive director.
The LF AI Foundation plans to integrate the projects with its own internal efforts and support them with a variety of resources, including IT and infrastructure, creative work, events and marketing, Haddad said.
The toolkits, released over the past couple of years, enable users to create and deploy more transparent, responsible AI models.
The AI Fairness 360 toolkit is a collection of algorithms, code and tutorials that helps developers detect and mitigate bias in training data. The AI Explainability 360 toolkit contains interpretability algorithms to help developers peel back the AI black box. The Adversarial Robustness 360 toolkit, meanwhile, is a Python library meant to help developers defend machine learning models against adversarial threats.
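To make the bias-detection idea concrete, here is a minimal sketch, in plain Python with no dependencies, of the kind of metric AI Fairness 360 automates: the "disparate impact" ratio, which compares favorable-outcome rates between groups. The toy hiring data and group labels below are hypothetical, not drawn from the toolkit itself.

```python
# Toy hiring records as (group, hired) pairs; groups "A" (privileged)
# and "B" (unprivileged) are hypothetical examples.
records = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def favorable_rate(group):
    """Fraction of members of `group` who received the favorable outcome."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# Disparate impact: ratio of the unprivileged group's favorable-outcome
# rate to the privileged group's. Values well below 1.0 (a common rule
# of thumb flags anything under 0.8) suggest potential bias.
disparate_impact = favorable_rate("B") / favorable_rate("A")
print(round(disparate_impact, 3))  # 0.25 / 0.75 -> 0.333
```

AI Fairness 360 wraps metrics like this, along with mitigation algorithms, behind a common dataset interface so they can be applied to real training data rather than toy lists.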
The onboarding process for the three toolkits will take a few weeks, Haddad said. IBM first joined the LF AI Foundation last year.
The LF AI Foundation considers responsible AI "a very important topic and subdomain within the larger AI space and of high interest to our members," said Haddad.
"The new projects coming to LF AI from IBM will give us a major boost not just as technical projects but also help us establish ourselves in the subcategories of fairness, explainability and adversarial AI," he continued.
The contributions come as enterprises and AI vendors are calling for more responsible AI and more regulations around the technology.
"We have been planning to move these projects in an open governance model, and recent events across the globe have affirmed that this is right way to advance trusted AI, by moving these projects in a neutral space, with no IP or trademark concerns for end practitioners," said Animesh Singh, STSM and chief architect of Data and AI Open Source Platform at IBM.