
Combination of blockchain and AI makes models more transparent

Blockchain technology could play an important role in helping enterprises develop more explainable AI applications, something that is frequently lacking today.

The rapid emergence of AI in mission- and business-critical situations is leading management teams to confront issues of trust. This is especially true when machine learning systems are used to make decisions with particularly serious ramifications.

For example, the issue arises with autonomous vehicles that need to make split-second decisions, with AI-based systems that make loan or insurance claim decisions, and with tools used to sentence convicts or determine bail amounts.

In each of these situations and others, humans are trusting machines to make decisions that can have significant and potentially harmful outcomes. But while their decision-making processes are opaque today, there is hope that the combination of blockchain and AI could make learning systems more interpretable.

The demand for explainable AI systems

It's no wonder that those implementing, managing and governing AI systems are asking that those systems provide some sort of inherent verification method to understand how decisions are made. However, the emergence of deep learning approaches is causing problems in this regard.

Part of the power of deep learning neural nets is their ability to create probabilistic pathways of associations between a given input -- an image, for example -- and the desired output, such as recognizing the image as a cat. However, the exact connection between these inputs and outputs is hard for humans to examine and understand. Indeed, in a recent article in the MIT Technology Review, top AI scientists lament that they don't really know why these algorithms work so well or how to improve them.
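To make that opacity concrete, here is a minimal sketch, not drawn from the article, of a toy classifier. All names, shapes and weights are illustrative; the point is that the model's entire decision process lives in matrices of learned numbers that carry no human-readable rationale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for weights a network would learn from labeled images.
W1 = rng.normal(size=(64, 16))   # input features -> hidden layer
W2 = rng.normal(size=(16, 2))    # hidden layer -> ["cat", "not cat"] scores

def classify(pixels: np.ndarray) -> str:
    hidden = np.maximum(0, pixels @ W1)   # ReLU activation
    scores = hidden @ W2                  # raw class scores
    return ["cat", "not cat"][int(np.argmax(scores))]

image = rng.random(64)   # stand-in for a flattened image
print(classify(image))
# Printing W1 and W2 yields 1,056 numbers -- none of which explains
# *why* the model preferred one label over the other.
```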

As such, AI leaders are demanding that systems that depend on machine learning-based decision-making have explainability built in as a core requirement and feature.

The United States Defense Advanced Research Projects Agency (DARPA) launched an explainable AI initiative with at least 11 different research agencies, contractors and institutions that aims to "produce more explainable models, while maintaining a high level of ... prediction accuracy ... and enable human users to understand, appropriately trust and effectively manage the emerging generation of artificially intelligent partners."

Yet, while this project has been underway since 2016, the initial deliverables and first proof of concept aren't expected for at least another 18 months, and a widely implementable version isn't planned until 24 months after that. Autonomous vehicle manufacturers, insurance companies and others looking to build trustworthy, explainable AI systems are not willing to wait that long.

AI and blockchain: Super hype or super useful?

Like AI, blockchain is a technology currently riding the hype cycle. Popularized by cryptocurrencies, blockchain offers an immutable, distributed, decentralized ledger of transactions that can be applied to a wide range of use cases.

In a blockchain, records of transactions are distributed across decentralized registries, with each transaction cryptographically linked to the one before it. Compute-intensive verification steps, combined with overall system consensus, are required to confirm the legitimacy of each new transaction. This offers a unique combination of decentralized control, trust and verifiability that promises to disrupt systems where trustworthiness is required in party-to-party transactions.
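Those mechanics can be sketched in a few lines of Python. This is a deliberately simplified, hypothetical proof-of-work scheme, not any production protocol: each block commits to the hash of the previous one, mining burns compute to find a valid nonce, and validation re-checks every link in the chain.

```python
import hashlib
import json
import time

def hash_block(block: dict) -> str:
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(prev_hash: str, payload: dict, difficulty: int = 4) -> dict:
    # The compute-intensive step: search for a nonce that makes the
    # block hash start with `difficulty` zeros.
    block = {"prev_hash": prev_hash, "payload": payload,
             "timestamp": time.time(), "nonce": 0}
    while not hash_block(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

def chain_is_valid(chain: list, difficulty: int = 4) -> bool:
    # Each block must reference the previous block's hash and carry valid
    # proof of work; altering any record breaks every link after it.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != hash_block(prev):
            return False
        if not hash_block(curr).startswith("0" * difficulty):
            return False
    return True

genesis = mine("0" * 64, {"note": "genesis"})
chain = [genesis, mine(hash_block(genesis), {"from": "A", "to": "B", "amount": 10})]
print(chain_is_valid(chain))   # True -- until someone tampers with a record
```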

For example, machine learning company SingularityNET is looking at using blockchain to record the decisions that machine learning systems make and to share the enhanced learning of the trained neural network models with all participants on the blockchain.

There are some key benefits to this approach. First, incremental learning becomes explainable because each decision is recorded and propagated across the network. Second, those decision records are immutable: in the event of an incident, no third party can alter them after the fact. Additionally, because the blockchain ledger is decentralized, no single party has authority or control over it, eliminating the specter of vendors owning the infrastructure and locking out rivals.

Blockchain and AI: Secure, explainable, transparent

Another reason companies are eyeing the combination of blockchain and AI is to increase the security of systems by placing key aspects of decision-making in the blockchain rather than in individual machine learning systems. When a system using machine learning makes a decision, all the factors that went into that decision, along with the decision itself, are posted to a blockchain that is shared with all the parties. If something goes wrong, the blockchain and the decisions recorded on it can be inspected to identify the root cause of the failure.
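A minimal, self-contained sketch of that audit-trail pattern might look like the following. Every name here (ledger, post_decision, the loan factors) is invented for illustration; a real deployment would post to a shared distributed ledger rather than an in-memory list.

```python
import hashlib
import json
import time

ledger: list = []   # stand-in for the shared, decentralized ledger

def post_decision(factors: dict, decision: str) -> None:
    # Append a record that commits to the previous record's hash, so the
    # audit trail can't be quietly rewritten after the fact.
    prev_hash = ("0" * 64 if not ledger else
                 hashlib.sha256(json.dumps(ledger[-1],
                                           sort_keys=True).encode()).hexdigest())
    ledger.append({
        "prev_hash": prev_hash,
        "factors": factors,      # everything the model saw
        "decision": decision,    # what the model concluded
        "timestamp": time.time(),
    })

# After the model decides, the full decision context goes on the chain,
# where auditors can later replay exactly what produced the outcome.
post_decision({"income": 54000, "credit_score": 612, "loan_amount": 20000},
              "denied")
```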

In addition, the use of smart contracts to automatically execute tasks when certain preconditions are met on the blockchain offers promise in many AI-specific scenarios. Companies in the healthcare and insurance industries are looking at smart contracts to approve claims or share information in a manner that is verifiable and trusted. Rather than the decision-making code sitting in a black box on a server in a company's data center, the specific decision logic is stored as a smart contract on a blockchain that can be inspected and verified by all parties.
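To illustrate the idea in Python terms rather than an actual contract language such as Solidity, here is a hypothetical claims rule whose preconditions are plain, inspectable code. On a real chain, equivalent logic would be deployed as a contract and executed automatically once the inputs arrive.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    policy_active: bool
    deductible_met: bool
    amount: float

def approve_claim(claim: Claim) -> bool:
    # Every party can read these preconditions; there is no hidden
    # logic sitting on one company's server.
    return (claim.policy_active
            and claim.deductible_met
            and claim.amount <= 10_000)

print(approve_claim(Claim(policy_active=True, deductible_met=True,
                          amount=2500.0)))   # True
```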

The downside to using blockchain is its inherent inefficiency. Blockchain protocols are not optimized for speed or compute efficiency. This is actually a feature of the system, not a bug. As such, blockchain systems can't be placed into the execution stream of decision-making without slowing down the process.

The result is that a blockchain system used with AI becomes a sort of log that records what has happened and can be reviewed after the fact, but it can't be used to explain a decision before or while it is happening. Human operators won't be able to inspect a decision before it takes effect unless there is enough time for the blockchain to record the activity and for a human to approve or modify the decision.

Blockchain-enabled, explainable AI as applied to healthcare

Despite the early and somewhat hype-prone nature of the combination of blockchain and AI, companies like IBM are applying it to healthcare and other areas that demand a high degree of explainability and verifiability.

IBM Watson Health chief science officer Shahram Ebadollahi announced at last year's Fast Company Innovation Festival that the company would be partnering with the Centers for Disease Control and Prevention and the Food and Drug Administration (FDA) to add blockchain capabilities to IBM Watson AI initiatives focused on healthcare.

"The new partnership will complement an existing collaboration between IBM Watson and the FDA, giving IBM additional insight into how blockchain, in tandem with artificial intelligence, could overhaul the way stakeholders extract meaning from the overwhelming volume of big data in the industry," Ebadollahi said.

With big enterprises, vendors and government agencies entering the market with tools aimed at enhancing explainability, perhaps the industry will beat DARPA to the punch with a realistic approach to explainable AI that leverages blockchain alongside other techniques. Companies and industries that can't afford black box technology in their critical decision-making processes have little choice if they want the power and usefulness that AI promises.
