
UX defines chasm between explainable vs. interpretable AI

From deep learning to simple code, all algorithms should be transparent. The frameworks of AI interpretability and explainability aim to make machine learning understandable to humans.

AI explainability and AI interpretability are notions often used interchangeably, despite immense differences in intention and practical application. This can be fine for high-level conversations about how AI models align with enterprise goals, but data scientists and insightful managers may want to examine the distinctions in explainable vs. interpretable AI in order to execute a more extensive AI strategy.

At the heart of the difference between explainability and interpretability lies UX. Interpretability enables transparent AI models to be readily understood by users of all experience levels. Explainable AI applied to black box models means that data scientists and technical developers can provide an explanation as to why models behave the way they do -- and can pass the interpretation down to users.

Examining the differences

The spirit of both fields is the same: helping humans understand the why of a decision made by an AI/machine learning system, said David Fagnan, director of applied science on the Zillow Offers Analytics team.

Even among experts, these distinctions are outlined in different ways. In a technical context, AI interpretability is applied to the process of examining rules-based algorithms, while explainability is applied to the process of examining black box deep learning algorithms. In AI UX contexts, the distinction between explainability and interpretability relates to how AI problems are presented to different types of users or as part of different workflows.

Ultimately, the discussion about interpretability vs. explainability should start with why interpretability and explainability are important for various individuals, said Ryohei Fujimaki, CEO and founder of dotData, a data science automation platform.

For business users, valid explanations of how an AI or machine learning model behaves boost confidence in the reliability of the models. For developers, a high degree of transparency into AI and machine learning models is critical in defending the validity of their models and in providing explanations to decision-makers. For C-suite members, transparency in AI/machine learning models helps ensure accountability and adherence to regulatory requirements.

The technical side

In technical discussions, interpretable AI is about making sense of transparent AI models, while explainable AI is about making sense of black box models, said Nate Nichols, distinguished principal at Narrative Science, a natural language generation tools provider.

In a classic machine learning regression problem, the training data may consist of many loan applications and whether a human loan officer decided to give the loan. Transparent models are in a form that a human could follow exactly. A decision tree would represent explicit decisions, such as "IF total_net_worth < 20,000 THEN deny loan."
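As a concrete illustration, here is a minimal sketch of such a transparent model using scikit-learn; the feature names, thresholds and toy data are invented for illustration, not taken from any real lending system.

```python
# A minimal sketch of a transparent model: a shallow decision tree whose
# learned rules can be read directly. Feature names and data are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training data: [total_net_worth, loan_amount]; label 1 = approve, 0 = deny
X = [[15_000, 10_000], [25_000, 5_000], [80_000, 20_000], [12_000, 30_000]]
y = [0, 1, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the tree as human-readable IF/THEN rules, e.g.
# "total_net_worth <= 20000.00 -> class: 0 (deny)"
print(export_text(tree, feature_names=["total_net_worth", "loan_amount"]))
```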

More complex black box models might include a statistical analysis of words associated with loan acceptance. For example, positive words might look like {"family wealth": 0.9462, "expansion": 0.84716…}, and negative words might look like {"bankruptcy": -0.89817, "default": -0.85617…}. After the model is trained, a person could look at that list of words and interpret what the system is looking for and how much it's weighting those words when considering a loan.
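One way a data scientist might arrive at a word list like this is to fit a simple bag-of-words linear model and read off its coefficients. The sketch below assumes scikit-learn; the application texts, labels and resulting weights are invented and purely illustrative.

```python
# Sketch: recovering per-word weights from a bag-of-words logistic regression,
# similar in spirit to the positive/negative word lists above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "family wealth and planned expansion",   # accepted
    "stable income, modest expansion",       # accepted
    "prior bankruptcy and loan default",     # denied
    "default on credit card, bankruptcy",    # denied
]
labels = [1, 1, 0, 0]  # 1 = loan accepted, 0 = denied

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Pair each word with its learned weight; positive weights push toward
# acceptance, negative weights push toward denial.
weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
for word, w in sorted(weights.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{word}: {w:+.4f}")
```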

"This [interpretation] is typically done as a separate, out-of-band process because the underlying model isn't interpretable," Nichols said. Instead, the human creates a second model that is trying to determine and explain what the actual model is doing.

If the loan granting/denying process were modeled with deep learning, which is not itself interpretable, this secondary model might indicate that factors such as a person's net worth and the size of the loan are important in deciding whether to grant a loan.
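A common way to build that secondary model is a surrogate: fit a simple, readable model to the black box's own predictions rather than to the original labels. The sketch below is a rough illustration with invented data and feature names, not any vendor's actual method.

```python
# Sketch of an out-of-band "secondary model": fit a shallow decision tree to
# the predictions of a black-box model so a human can read an approximation
# of its logic. Data and feature names are invented.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.uniform(0, 100_000, size=(500, 2))   # columns: [net_worth, loan_amount]
y = (X[:, 0] > 20_000).astype(int)           # hidden "true" approval rule

black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                          random_state=0).fit(X, y)

# Train the surrogate on the black box's outputs, not the original labels.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=["net_worth", "loan_amount"]))
```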

The human side

Discussions about explainable vs. interpretable AI shift focus when looking at how the AI UX fits into different workflows.

"Interpretability focuses on representation," said Mark Stefik, a research fellow at Palo Alto Research Center, a subsidiary of Xerox. "Explainability refers to a broader set of considerations in what makes an effective explanation for the human users of an AI system."

With interpretability, the goal is to translate the knowledge an AI system uses into a representation that is directly understandable by users. For example, if-then rules and Bayesian rules are said to be human-interpretable and can explain the reason a decision is made by an AI agent.

Two examples of interpretable representations are decision trees showing an AI's logic and visualizations showing how an AI's behavior varies according to elements in the task situation. When decision trees are used as an interpretable representation, the approach expects that people can trace the logic of the decision tree and then understand what an AI will do in various situations.
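For the second kind of representation, a partial dependence plot is one common way to show how a model's output varies as a single input changes. The sketch below assumes scikit-learn and matplotlib, with invented data and feature names.

```python
# Sketch: visualizing how a model's predicted outcome varies with each input,
# using scikit-learn's partial dependence tooling. Data is invented.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(1)
X = rng.uniform(0, 100_000, size=(500, 2))   # columns: [net_worth, loan_amount]
y = ((X[:, 0] - 0.5 * X[:, 1]) > 10_000).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Show how the predicted approval outcome changes as each feature varies.
PartialDependenceDisplay.from_estimator(
    model, X, features=[0, 1], feature_names=["net_worth", "loan_amount"]
)
plt.show()
```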

Explainability describes the process of making interpretations accessible to users. Self-explanation approaches can interact with users to determine what information is needed to satisfy the information needs of end users, Stefik said. Explanatory information is typically presented in the context of an interactive dialog and is selected to answer user inquiries. For example, in response to a user asking, "What options did you consider?" and "Why did you choose this one?" a self-explaining system could present alternative courses of action that were considered in performing the task and then compare their relative advantages and disadvantages.

Different kinds of users

The differences between interpretability and explainability can be important for AI developers customizing the way AI applications are configured to interact with different types of users. Chris Butler, chief product architect at IPsoft, an IT automation tools provider, finds explainability more useful for describing interactions with experts and interpretability more useful for describing interactions with nonexpert users.

"Explainability is a measure of how much an expert who understands all parts of a system would make decisions about that system," Butler said. The goal of explainability is an understanding so deep that the expert can build, maintain and even debug it, as well as use it.

Interpretability, on the other hand, gauges how well any user -- especially a nonexpert -- would understand why and what the system is doing either overall or for a particular case. What the user understands might be far from perfect, but when the system doesn't make sense, the user needs to be able to intervene to change the result.

"Humans can't possibly know everything about the way an organization works, and they don't have to with interpretability," Butler explained.

Explainable and interpretable AI tools

Machine learning platforms are starting to include some explainability and interpretability features.

Automated machine learning 2.0 platforms, like dotData, combine automated creation and discovery of features with natural language explanations of features to make models easier to understand and to make the highly complex statistical formulas easier to interpret.

SAS Platform includes an Insights tab dedicated to model explainability. This feature uses popular frameworks, such as PD (Partial Dependence) plots, LIME (Local Interpretable Model-Agnostic Explanations), ICE (Individual Conditional Expectation) plots and Kernel SHAP (Shapley Additive Explanations), to help interpret the models and then uses natural language processing to explain the results in simple language.

Other companies like Fiddler are building third-party tools that bring explainability to deep learning models built with other tools.

In addition, a number of open source tools for implementing explainable and interpretable AI are starting to emerge, including IBM's AI Explainability 360, Microsoft InterpretML, SHAP and Seldon's Alibi.
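As a taste of how such libraries are used, the sketch below calls the open source shap package's model-agnostic Kernel SHAP explainer on an invented model and data set; the API shown reflects shap's commonly documented usage and may vary across versions.

```python
# Sketch: Kernel SHAP attributing a black-box prediction to input features,
# using the open source shap package. Model and data are invented.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.uniform(0, 100_000, size=(200, 2))   # columns: [net_worth, loan_amount]
y = (X[:, 0] > 20_000).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Kernel SHAP is model-agnostic: it needs only a prediction function and a
# background sample of the data.
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:1])   # explain the first application
print(shap_values)                           # per-feature contributions
```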

Opening the black box

Discussions about explainable vs. interpretable AI are important when data scientists are choosing among different algorithms. Transparent models are better from development, deployment and privacy perspectives, while black box models are often assumed to perform better, Nichols said. In practice, that assumption doesn't hold across the board; black box models have a clear performance edge only on specific problems, such as machine translation, speech to text/text to speech and robotics.

"If the performance is similar, then the advantages of the transparent approach generally win out," Nichols said. It's easier to inspect more transparent approaches to confirm they're picking up sensible features, and it's always better to be able to answer questions like, "Why did the model make this particular decision?"

Nichols believes that developers should shy away from applying deep learning to every problem, even though it has been successful in many different contexts. Applying it indiscriminately can have negative consequences for the users of a system when people don't understand why an AI system makes the decisions it does.

"As more transparent methods continue to advance and improve, I expect they will catch up to deep learning on problems where humans make conscious decisions, such as hiring, granting loans and determining parole status," Nichols said.
