Responsible AI: An Executive Q&A
Now that artificial intelligence (AI) affects just about every industry, people have begun to express concern about the dangers of black box AI.

Scott Zoldi, the Chief Analytics Officer at FICO, caught our interest when he wrote about a concept called Responsible AI. We asked Scott ten questions about this emerging standard of excellence and the role blockchain is expected to play in it.
TechTarget: Scott, can you please explain to our readers what Responsible AI is? We’ve seen it referred to as a standard, a framework, a goal, a cultural shift and a practice. Is it all of those things?
Scott Zoldi: I think of Responsible AI as a standard for excellence. It requires a company’s board of directors to approve and support the use of a technical framework that ensures an organization’s AI implementations are safe, trustworthy and unbiased.
TechTarget: What is driving the adoption of Responsible AI standards?
Scott Zoldi: The increasing magnitude of AI's life-altering decisions underscores the urgency with which AI fairness, bias and governance have to be ushered onto board agendas. In the early days of AI, data scientists were primarily concerned with aggregating enough data to make a machine learning model predict outcomes within an acceptable level of risk.
Now that getting enough data to make accurate predictions isn’t the biggest challenge, we’ve learned through (sometimes painful) experiences that it’s imperative for AI to be explainable first and predictive second.
When machine learning models are not built responsibly, they tend to become overly complicated -- and when that happens, poor outcomes can negatively impact people in ways that seem to defy common sense.
TechTarget: What are some of the barriers to achieving Responsible AI?
Scott Zoldi: In the enterprise, the most significant barrier to achieving Responsible AI is that most boards of directors and CEOs don’t have a sufficient understanding of statistical analytics. Even when there is enthusiasm for Responsible AI at the board level, there is often a distinct knowledge and skills gap between theory and execution.
Conversely, most data scientists are not reflexively guided by a governance orientation. For that to change, analytics teams need to be led in a way that integrates governance into how they work and avoids black box machine learning algorithms.
TechTarget: What’s the first step a company should take when implementing Responsible AI?
Scott Zoldi: One of the most important steps in implementing Responsible AI is to become deeply sensitive to how an organization's deep learning algorithms will ultimately impact real people downstream.
Was a borrower invisibly discriminated against and denied a loan because of poor AI programming? Was a patient’s disease incorrectly diagnosed? Was a citizen unjustly arrested for a crime he did not commit?
These are legitimate questions that people should be able to get answers to.
TechTarget: What should our readers know before they can claim to be using Responsible AI?
Scott Zoldi: Responsible AI requires a robust development methodology that includes:
- Proper use of historical, training and testing data.
- Well-defined metrics for acceptable performance.
- Careful model architecture selection.
- Processes for model stability testing, simulation and governance.
Perhaps most importantly, the entire data science organization must adhere to all of these factors, and the AI's explainability should be non-negotiable.
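As a rough illustration of what two of those requirements can look like in practice, here is a minimal sketch pairing an out-of-time holdout split with a population stability index (PSI) check; the column names and the 0.25 rule of thumb are illustrative assumptions, not FICO's actual standards.

```python
# Minimal sketch: an out-of-time train/test split plus a population stability
# index (PSI) check comparing development-time scores with recent production
# scores. Names and thresholds are illustrative assumptions.
import numpy as np
import pandas as pd

def out_of_time_split(df: pd.DataFrame, date_col: str, cutoff: str):
    """Train on records before the cutoff date, test on records at or after it."""
    return df[df[date_col] < cutoff], df[df[date_col] >= cutoff]

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Quantify how far the production score distribution has drifted from development."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]  # interior bin edges
    e_pct = np.bincount(np.digitize(expected, edges), minlength=bins) / len(expected)
    a_pct = np.bincount(np.digitize(actual, edges), minlength=bins) / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# A PSI above roughly 0.25 is a common rule of thumb indicating the model is
# scoring a population it was not developed on and should be reviewed.
```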
TechTarget: Why is explainability so important? What role does it play in Responsible AI?
Scott Zoldi: Model explainability is crucial. In fact, it should be the primary goal of Responsible AI deployments, followed secondarily by predictive power. While the equations that make up machine learning algorithms are often straightforward, it can be difficult to derive a human-understandable interpretation of how latent features in a large data set have been weighted.
In changing environments, especially, latent features should continually be checked for bias. Here at FICO, we’ve developed a machine learning technique called Interpretable Latent Features to help overcome this challenge.
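The sketch below is not FICO's Interpretable Latent Features method; it is only a generic illustration of the underlying idea -- summarizing what drives each latent feature so a human can name it and review it for bias. The data set, model and correlation-based summary are all illustrative assumptions.

```python
# Generic illustration: train a small neural network, then report which inputs
# most strongly drive each hidden (latent) unit so an analyst can label and
# review it. An assumption-laden sketch, not FICO's actual technique.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
feature_names = [f"x{i}" for i in range(X.shape[1])]

model = MLPClassifier(hidden_layer_sizes=(6,), max_iter=500, random_state=0).fit(X, y)

# Recompute the hidden-layer activations (ReLU is the MLPClassifier default).
hidden = np.maximum(X @ model.coefs_[0] + model.intercepts_[0], 0.0)

for j in range(hidden.shape[1]):
    if hidden[:, j].std() == 0:          # skip dead units
        continue
    corr = [abs(np.corrcoef(X[:, i], hidden[:, j])[0, 1]) for i in range(X.shape[1])]
    top = np.argsort(corr)[::-1][:3]     # the three most correlated inputs
    print(f"latent_{j}: driven mainly by {[feature_names[i] for i in top]}")
```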
TechTarget: Does this technique also address ethics in Responsible AI?
Scott Zoldi: Yes, I think so. From a data scientist’s point of view, ethical AI is achieved by taking precautions to expose the underlying machine learning model and continually testing to see if it could impute bias. When there’s a rigorous development process, coupled with visibility into latent features, it helps ensure that analytics models function both ethically and efficiently.
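One recurring bias test that fits this kind of process is comparing decision rates across a protected attribute. The sketch below shows a minimal version of that check; the column names, groups and the "four-fifths" threshold are illustrative assumptions, not a complete fairness review.

```python
# Minimal sketch of a recurring bias check: compare approval rates across a
# protected group attribute. All names, data and thresholds are illustrative.
import pandas as pd

def disparate_impact_ratio(decisions: pd.DataFrame, group_col: str, decision_col: str) -> float:
    """Ratio of the lowest group approval rate to the highest."""
    rates = decisions.groupby(group_col)[decision_col].mean()
    return float(rates.min() / rates.max())

decisions = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A", "B", "A"],
    "approved": [1,   0,   1,   1,   0,   1,   1,   0],
})
ratio = disparate_impact_ratio(decisions, "group", "approved")
if ratio < 0.8:  # the common "four-fifths" rule of thumb
    print(f"Potential adverse impact: ratio = {ratio:.2f}")
```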
TechTarget: Is efficiency a challenge?
Scott Zoldi: It certainly can be if the development methodology isn’t robust. Efficient AI simply means “building it right the first time.” To be efficient, machine learning models have to be built according to a company-wide development standard that mandates the use of:
- Shared code repositories.
- Approved model architectures.
- Sanctioned variables.
- Established bias testing.
- Stability standards for active models.
These standards matter because efficient AI allows data scientists to predict how a model will respond when a change occurs, determine how likely the model is to remain unbiased and trustworthy, and decide what strategies to use if the model needs to be adjusted. Those decisions can be codified using blockchain governance.
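A minimal sketch of what enforcing such a company-wide standard might look like appears below; the approved architectures and sanctioned variables are illustrative assumptions, not an actual FICO policy.

```python
# Illustrative sketch: validate a proposed model against a company-wide
# development standard. The approved lists here are assumptions.
APPROVED_ARCHITECTURES = {"logistic_regression", "gradient_boosting", "constrained_neural_network"}
SANCTIONED_VARIABLES = {"payment_history", "utilization", "account_age_months"}

def check_model_spec(architecture: str, variables: list) -> list:
    """Return a list of standard violations for a proposed model."""
    issues = []
    if architecture not in APPROVED_ARCHITECTURES:
        issues.append(f"architecture '{architecture}' is not on the approved list")
    for v in sorted(set(variables) - SANCTIONED_VARIABLES):
        issues.append(f"variable '{v}' is not sanctioned for use")
    return issues

print(check_model_spec("deep_autoencoder", ["payment_history", "zip_code"]))
```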
TechTarget: Are you saying that blockchain – the technology we usually associate with Bitcoin -- can be used to support Responsible AI?
Scott Zoldi: Yes. Responsible AI will increase trust by assuring stakeholders that the artificial intelligence an organization has rolled out conforms with the four classic tenets of corporate governance: accountability, fairness, transparency and responsibility.
Accountability in AI can only be achieved if each step of the model development process is recorded immutably, so that it cannot be altered or destroyed. This is where blockchain comes in -- when decisions about how to implement AI are recorded on a blockchain, they are tamper-proof and the record remains permanently accessible.
When blockchain is part of an organization’s standard for implementing Responsible AI, the mathematical algorithms and data used to train and test AI are stored in a distributed ledger, and that information can be made available to any authorized entity. Should the need arise, the blockchain can provide transparency into how a specific type of AI programming was trained and tested.
The wonderful thing about blockchain technology is that it makes it possible for anyone to see each decision that’s been made about a machine learning model – including who made it, who tested it and who approved it. This transparency into process is an important requirement for using analytic models in rapidly changing environments without introducing bias.
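To make the idea concrete, here is a toy sketch in which every governance decision is hashed together with the previous entry, so any later edit breaks the chain. It is an in-memory illustration of tamper evidence only -- a real deployment would use a distributed ledger -- and the entry fields are illustrative assumptions.

```python
# Toy hash-chained governance log: each entry commits to the previous one, so
# tampering with any earlier decision is detectable. Not a real blockchain.
import hashlib
import json
import time

class GovernanceLedger:
    def __init__(self):
        self.entries = []

    def record(self, decision: str, made_by: str, artifact_hash: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": time.time(),
            "decision": decision,            # e.g. "approved model architecture v3"
            "made_by": made_by,              # who made, tested or approved the change
            "artifact_hash": artifact_hash,  # hash of the model file or data manifest
            "prev_hash": prev,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; any tampering with an earlier entry is detected."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or digest != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True
```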
TechTarget: Who is keeping (or should be keeping) Responsible AI top of mind?
Scott Zoldi: In a large enterprise, the chief analytics officer is ideally positioned to set Responsible AI standards. If an organization doesn't have a CAO, it should be another analytically capable C-level executive.
Because AI bias affects all levels of an organization, from data scientists to consumers, Responsible AI needs to be supported by strong AI governance policies and the standards for excellence need to be strictly enforced by the organization’s CEO.
Scott Zoldi is the Chief Analytics Officer at FICO. Zoldi has a Ph.D. in theoretical and computational physics from Duke University and has authored 110 patents -- 57 granted and 53 in process.