As push for explainable AI grows, vendors add features

As businesses demand more explainable AI, vendors such as IBM and DarwinAI are introducing new features to help business users understand how AI systems work.

Machine learning and deep learning systems are becoming more advanced and able to aid in complex decision-making processes. Yet, as these systems get better, they also become harder to understand.

These more capable systems, then, are both helpful and troublesome to organizations. An advanced AI system might provide actionable insights, but it might not be able to detail how it arrived at them, leaving business leaders hesitant to follow AI-powered recommendations. The insights could help drive profits, or they could lead a company in the wrong direction. Without seeing how a system works, it's hard or even impossible for executives to know which.

So, as organizations push for explainable AI, companies that create and sell AI-driven products are working to make their products both easier to use and easier to understand.

Explainability toolkits

Over the last few years, some AI vendors, including H2O.ai, IBM and DarwinAI, have released explainability features in or alongside their products.

In August, IBM introduced AI Explainability 360, an open source toolkit that extends its AI management platform, Watson OpenScale. The toolkit features several interpretable algorithms with a common interface and includes training materials and tutorials.
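The article doesn't show the toolkit's API, so the sketch below instead uses a comparable model-agnostic technique, permutation importance from scikit-learn, purely to illustrate the kind of explanation such toolkits surface; it is not AI Explainability 360 itself.

# Illustration only: a model-agnostic explanation via scikit-learn's
# permutation importance, similar in spirit to the interpretable
# algorithms bundled in explainability toolkits.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in accuracy; large
# drops mark the features the model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]:
    print(f"{name}: {score:.3f}")

Rankings like this give business users a plain answer to the question of which inputs drove a model, which is the level of explanation buyers tend to ask for.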

"We wanted to make sure that whatever we do in this space really helps everyone," said Dinesh Nirmal, vice president of development for IBM Data and AI, of IBM's explainable AI efforts.

AI vendors are adding features to help explain how machine learning and deep learning models work.

Meanwhile, DarwinAI, an Ontario, Canada-based AI vendor that claims it uses deep learning to optimize existing AI systems, released an AI explainability toolkit in 2018.

DarwinAI's "generative synthesis platform" ingests, understands and optimizes existing AI systems, said Sheldon Fernandez, CEO of DarwinAI.

It makes existing models "smaller, faster and more efficient," he said.

The explainability toolkit builds on that core system. It goes beyond high-level optimization advice to make recommendations for specific tasks, offering developers detailed breakdowns of how those tasks work.
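DarwinAI's platform is proprietary and the article gives no implementation details, so the following is a minimal sketch of a generic technique from the same family, gradient saliency in PyTorch, which scores how strongly each input pixel influenced a prediction; the untrained network and random input are placeholders.

# Illustration only: gradient saliency, a generic way to break down
# which parts of an input drove a model's decision.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()  # placeholder, untrained network

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input image
score = model(x)[0].max()  # logit of the top-scoring class
score.backward()           # gradient of that logit w.r.t. the input

# Pixels with large gradient magnitude had the most influence on the score.
saliency = x.grad.abs().max(dim=1).values  # (1, 224, 224) heatmap
print(saliency.shape)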

Need for explainable AI

Companies that buy AI-driven products look for explainability at a business level, said Arnab Chakraborty, global managing director of applied intelligence for the U.S. West Coast region at Accenture.


"A lot of our clients have to explain to stakeholders how models work," he said. So, the models need to be transparent.

Accenture, a multinational professional services firm headquartered in Dublin, Ireland, has its own toolkit to help explain how AI systems work.

Among other things, the toolkit helps the company lay out the different parameters that go into a model and show how the model influences different APIs, Chakraborty said.
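The article describes Accenture's toolkit only at a high level, so purely as a hypothetical illustration, laying out a model's parameters and the APIs it influences might produce a manifest like the one below; every name and field here is invented for the sketch.

# Hypothetical illustration only: a manifest recording a model's
# parameters and downstream consumers. All names are invented.
import json

manifest = {
    "model": "churn_classifier_v3",  # hypothetical model name
    "training_parameters": {
        "algorithm": "gradient_boosting",
        "learning_rate": 0.1,
        "n_estimators": 200,
    },
    "input_features": ["tenure_months", "monthly_spend", "support_tickets"],
    "downstream_apis": ["crm-scoring", "retention-offers"],  # consumers of the model's output
}

print(json.dumps(manifest, indent=2))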

Companies that sell AI-driven technologies or develop them for in-house use should develop or subscribe to guidelines about AI ethics issues, including AI bias and explainability, said Traci Gusher, principal of data and analytics at KPMG.

Those guidelines should demand that AI systems be "human-centric and fair," she said, adding that it's important that both business leaders and employees trust an AI system.

Establishing guidelines now can also save future headaches, Gusher said, especially as governments move to put more AI ethics laws in place.

"When legislation comes, you'll be more prepared for it," she said.
