
explainable AI (XAI)

What is explainable AI (XAI)?

Explainable AI (XAI) is artificial intelligence (AI) that's programmed to describe its purpose, rationale and decision-making process in a way that can be understood by the average person. XAI helps human users understand the reasoning behind AI and machine learning (ML) algorithms to increase their trust.

Explainable artificial intelligence is often discussed in relation to deep learning and plays an important role in the FAT ML framework -- fairness, accountability and transparency in machine learning. XAI is useful for organizations that want to build trust when implementing AI, as it helps them understand a model's behavior and surface potential issues such as AI bias.

ML models are often described as either white box or black box. White box models give users and developers visibility into how results are produced, whereas the decisions or predictions of black box models are extremely hard to explain, even for the developers who built them.
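To make the distinction concrete, here is a minimal sketch in Python, assuming scikit-learn is available and using synthetic data. A logistic regression stands in for a white box model, since its coefficients can be read directly, while a small neural network stands in for a black box.

# White box vs. black box: one model exposes readable decision logic,
# the other buries it in weight matrices. Synthetic data for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# White box: each coefficient states how a feature pushes the decision.
white_box = LogisticRegression().fit(X, y)
print("readable coefficients:", white_box.coef_)

# Black box: thousands of interacting weights with no direct reading.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                          random_state=0).fit(X, y)
print("opaque weight matrices:", [w.shape for w in black_box.coefs_])

The point is not these particular models but the contrast: one exposes its decision logic as a handful of readable numbers, the other as stacks of weight matrices.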

XAI provides general information about how an AI program reaches its decisions by disclosing the following (the sketch after this list shows one way to surface some of these details):

  • The program's strengths and weaknesses.
  • The specific criteria the program uses to arrive at a decision.
  • Why a program makes a particular decision as opposed to alternatives.
  • The level of trust appropriate for various types of decisions.
  • What types of errors the program is prone to.
  • How errors can be corrected.
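One way such details can be surfaced, sketched below with scikit-learn and synthetic data, is permutation importance: shuffle each feature and measure how much accuracy drops. It is one explanation technique among many, not the method every XAI system uses; class probabilities serve here as a rough proxy for how much trust a given decision deserves.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Which criteria drive decisions: a large accuracy drop when a feature
# is shuffled means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")

# Why this decision rather than an alternative: class probabilities show
# the model's confidence, one signal for how much trust is appropriate.
print("class probabilities for one input:", model.predict_proba(X[:1]))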
Figure: black box AI vs. white box XAI. Explainable AI (XAI) provides more trust than traditional black-box AI because it offers more visibility into, and reasoning behind, how it makes its decisions.

Importance of explainable AI

An important goal of XAI is to provide algorithmic accountability. AI systems have historically been black boxes: even when the inputs and outputs are known, the algorithm used to arrive at a decision is often proprietary or not easily understood.

With AI services being integrated into fields such as health IT and mortgage lending, it's important to ensure the decisions made by AI systems are sound and trustworthy. For example, due to AI bias, an individual could be unfairly denied a mortgage. Likewise, an AI-based system can't reliably help medical professionals make objective decisions if the data set it was trained on isn't diverse enough. Without proper insight into how the AI makes its decisions, it's difficult to monitor, detect and manage these kinds of issues.
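One concrete monitoring signal is the disparate impact ratio: the approval rate of one group divided by that of another. The sketch below is a minimal illustration with entirely hypothetical decisions; the 0.8 threshold is the common four-fifths rule of thumb, not a universal legal standard.

import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)  # hypothetical protected attribute
# Hypothetical model decisions with a built-in approval-rate gap.
approved = rng.random(1000) < np.where(group == 1, 0.55, 0.70)

rate_a = approved[group == 0].mean()
rate_b = approved[group == 1].mean()
ratio = rate_b / rate_a
print(f"approval rates: {rate_a:.2f} vs. {rate_b:.2f}, ratio {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("warning: possible disparate impact; inspect the model's reasoning")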

As AI becomes increasingly prevalent, it's more important than ever for organizations to disclose how they address bias and earn users' trust.

Examples of explainable AI

XAI can be found in the following industries:

  • Healthcare. Explainable AI systems that aid in patient diagnosis can help build trust between doctor and system, as the doctor can understand where and how the AI system reaches a diagnosis.
  • Finance. XAI is used to approve or deny financial applications such as loans and mortgages, as well as to detect financial fraud.
  • Military. Military AI-based systems need to be explainable to build trust between service members and the AI-enabled equipment they rely on for safety.
  • Autonomous vehicles. XAI is used in autonomous vehicles to explain driving-based decisions, especially those that revolve around safety. If a passenger can understand how and why the vehicle is making its driving decisions, they can feel safer knowing what scenarios the vehicle can or can't handle well.

XAI is especially important in areas where someone's life could be immediately affected. For example, in healthcare, AI could be used to identify patient fractures based on X-rays. Even after an initial investment in an AI tool, doctors and nurses might still not be ready to adopt the AI if they don't trust the system or know how it arrives at a patient diagnosis. An explainable system gives healthcare providers the chance to review the diagnosis and to use that information to inform their own prognosis.

Likewise, for military operations, the Defense Advanced Research Projects Agency (DARPA) is developing XAI as part of its third wave of AI systems. DARPA describes three waves of AI: first-wave systems that describe, built on handcrafted knowledge; second-wave systems that categorize, built on statistical learning; and third-wave systems that can explain their decisions and adapt to context.

Benefits of explainable AI

XAI provides greater overall accountability and transparency in AI systems. Its benefits include the following:

  • Makes AI more trustworthy. Individuals might be reluctant to trust an AI-based system, as they can't tell how it reaches a particular conclusion. XAI is designed to give understandable explanations of its decisions to end users.
  • Improves the overall AI system. With added transparency, developers can more easily identify and fix issues.
  • Provides insight against adversarial attacks. Adversarial attacks attempt to fool or misguide a model into making incorrect decisions using maliciously designed data inputs. An adversarial attack against an XAI system would show irregular explanations for its decisions, revealing the attack.
  • Safeguards against AI bias. The goal of XAI is to explain the attributes and decision processes of ML algorithms, which helps identify unfair outcomes caused by low-quality training data or developer bias (the sketch after this list shows one concrete check).
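As a minimal sketch of that last point, assuming scikit-learn and entirely hypothetical data, the code below trains a model on deliberately biased labels and then reads its feature importances. A sizable score on the sensitive proxy attribute is exactly the kind of unfair shortcut that a black box would hide and an explanation exposes.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
income = rng.normal(50, 15, 1000)
zip_code_group = rng.integers(0, 2, 1000)  # hypothetical proxy attribute
# Deliberately biased labels: approvals leak the proxy attribute.
approved = (income + 20 * zip_code_group + rng.normal(0, 5, 1000)) > 60

X = np.column_stack([income, zip_code_group])
model = RandomForestClassifier(random_state=0).fit(X, approved)

for name, score in zip(["income", "zip_code_group"], model.feature_importances_):
    print(f"{name}: importance {score:.3f}")
# A sizable score on zip_code_group is the red flag to investigate.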

Limitations of explainable AI

XAI also has the following limitations:

  • Oversimplification. An XAI system can oversimplify and misrepresent a complicated model, which has fueled debate over whether to favor inherently interpretable models or models that more accurately associate causes with effects.
  • Model performance. XAI systems typically sacrifice some predictive performance compared with black box models (the sketch after this list illustrates the gap).
  • Difficulty in training. Building an AI system that also explains its reasoning is harder than building a black box model.
  • Privacy. If an XAI system works with confidential data, that data could be exposed because of XAI's transparent nature.
  • Concepts of understanding and trust. Although XAI should lead to an increased trust in AI, some users might still not trust the system, even with an understandable explanation behind its decisions.
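The performance trade-off can be seen in a minimal sketch, again assuming scikit-learn and a synthetic task: a shallow, readable decision tree is pitted against a boosted ensemble, and the ensemble typically wins on accuracy while offering far less insight into its reasoning. Exact numbers will vary; the gap is the point.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Interpretable: a depth-3 tree a person can read end to end.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
# Black box: hundreds of trees whose combined logic is opaque.
black_box = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print("shallow tree accuracy:", interpretable.score(X_te, y_te))
print("boosted ensemble accuracy:", black_box.score(X_te, y_te))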


This was last updated in June 2023
