responsible AI
Responsible AI is a governance framework that documents how a specific organization is addressing the challenges around artificial intelligence (AI) from both an ethical and a legal point of view. Resolving ambiguity about where responsibility lies if something goes wrong is an important driver for responsible AI initiatives.
As of this writing, the development of fair, trustworthy AI standards is left to the discretion of the data scientists and software developers who write and deploy a specific organization's AI algorithmic models. This means that the steps required to prevent discrimination and ensure transparency vary from company to company.
Just as ITIL provided a common framework for delivering IT services, proponents of responsible AI hope that a widely adopted governance framework of AI best practices will make it easier for organizations around the globe to ensure their AI programming is human-centered, interpretable and explainable.
In a large enterprise, the chief analytics officer (CAO) is typically tasked with developing, implementing and monitoring the organization's responsible AI framework. Usually documented on the organization's website, the framework explains in simple language how the organization addresses accountability and ensures its use of AI is non-discriminatory.
Why responsible AI is important
Responsible AI is an emerging area of AI governance, and the word "responsible" serves as an umbrella term covering both ethics and AI democratization.
The heads of Microsoft and Google have publicly called for AI regulations, but as of this writing, there are no agreed-upon standards for accountability when AI programming creates unintended consequences. Bias is often introduced into AI by the data used to train machine learning models; when the training data is biased, the decisions the programming makes are naturally biased as well.
Now that software programs with AI features are becoming more common, it is increasingly apparent that there is a need for standards in AI beyond those established by Isaac Asimov in his "Three Laws of Robotics." The technology can be misused accidentally or on purpose for a number of reasons, and much of the misuse is caused by bias in the data selected to train AI programming.
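To make the point concrete, here is a small, hypothetical sketch of how skewed training data flows straight into a model's decisions; the loan-approval scenario, group labels and 50% threshold are invented purely for illustration.

```python
# Hypothetical example: historical loan decisions used as training data.
# Group "B" was approved far less often in the past, so a "model" that
# simply learns the historical approval rate reproduces that bias.
training_data = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
]

def approval_rate(group):
    rows = [r for r in training_data if r["group"] == group]
    return sum(r["approved"] for r in rows) / len(rows)

# A naive predictor that approves whenever the historical rate for the
# applicant's group exceeds 50% inherits the historical skew unchanged.
for group in ("A", "B"):
    rate = approval_rate(group)
    print(group, f"historical rate={rate:.0%}", "predict approve:", rate > 0.5)
```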
What are the principles of responsible AI?
AI and the machine learning models that support it should be comprehensive, explainable, ethical and efficient.
- Comprehensive AI has clearly defined testing and governance criteria so its machine learning models cannot be easily manipulated or hacked.
- Explainable AI is programmed to describe its purpose, rationale and decision-making process in a way the average end user can understand (a minimal sketch follows this list).
- Ethical AI initiatives have processes in place to seek out and eliminate bias in machine learning models.
- Efficient AI is able to run continually and respond quickly to changes in the operational environment.
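As a rough illustration of the "explainable" principle, the sketch below scores a decision with a simple linear model and reports each feature's contribution in plain language. The weights and feature names are hypothetical and stand in for whatever explanation technique an organization actually adopts.

```python
# Minimal sketch of explainability: a linear scoring model whose output can
# be broken down into per-feature contributions a non-expert can read.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}  # hypothetical

def score_and_explain(applicant):
    contributions = {f: weights[f] * applicant[f] for f in weights}
    total = sum(contributions.values())
    decision = "approve" if total > 0 else "decline"
    # List the features in order of how strongly they pushed the decision.
    explanation = ", ".join(
        f"{f} ({'+' if c >= 0 else ''}{c:.1f})"
        for f, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    )
    return f"Decision '{decision}' driven by: {explanation}"

print(score_and_explain({"income": 4.0, "debt": 3.0, "years_employed": 2.0}))
```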
An important goal of responsible AI is to reduce the risk that a minor change in an input's weight will drastically change the output of a machine learning model.
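One way to probe for that kind of instability is to nudge each input slightly and check whether the output shifts by more than a tolerance. The sketch below uses a stand-in scoring function and an arbitrary tolerance; a real check would wrap the deployed model's prediction call.

```python
# Hedged sketch of a stability probe: perturb each input a little and flag
# the model if a small change drastically shifts its output.
def model(features):
    # Toy scoring function standing in for a trained model.
    return 0.4 * features[0] + 0.6 * features[1]

def stability_check(features, epsilon=0.01, tolerance=0.05):
    baseline = model(features)
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += epsilon
        shift = abs(model(perturbed) - baseline)
        if shift > tolerance:
            print(f"Unstable: feature {i} shifted the output by {shift:.3f}")
            return False
    return True

print("stable" if stability_check([1.0, 2.0]) else "review required")
```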
To conform with the four tenets of corporate governance, responsible AI should ensure the following:
- Each step of the model development process is recorded in a way that cannot be altered by humans or other programming.
- The data used to train machine learning models is not biased.
- The analytic models that support the AI initiative can be adapted to changing environments without introducing bias.
- The organization deploying the AI programming is sensitive to its potential impact, both positive and negative.
How do you design responsible AI?
Building a responsible AI governance framework takes considerable work, and ongoing scrutiny is needed to ensure an organization stays committed to providing unbiased, trustworthy AI. This is why it is crucial for an organization to have a maturity model or rubric to follow while designing and implementing an AI system.
At a base level, to be considered responsible, AI must be built with resources and technology according to a company-wide development standard that mandates the use of:
- Shared code repositories
- Approved model architectures
- Sanctioned variables
- Established bias testing methodologies to help determine the validity of AI systems (a minimal bias check is sketched after this list)
- Stability standards for active machine learning models to make sure AI programming works as intended
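As an example of what a bias testing methodology might boil down to in code, the sketch below compares positive-outcome rates across two groups, a demographic parity check. The predictions, group labels and 0.1 disparity threshold are illustrative, not an established standard.

```python
# Illustrative bias test: compare positive-outcome rates across groups.
# All data and the disparity threshold are made up for the example.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def positive_rate(group):
    preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(preds) / len(preds)

disparity = abs(positive_rate("A") - positive_rate("B"))
print(f"disparity = {disparity:.2f}")
if disparity > 0.1:  # policy choice, not a universal threshold
    print("Fails the bias test: investigate the model and training data.")
```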

Implementation and how it works
It can be difficult to demonstrate whether an algorithmic model is performing well from a responsibility standpoint. Today, organizations have many ways they can implement responsible AI and demonstrate that they have eliminated black box AI models. Current strategies include the following:
- Ensure data is explainable so that a human can interpret it.
- Ensure design and decision-making processes are documented to the point where, if a mistake occurs, it can be reverse-engineered to determine what transpired (a minimal logging sketch follows this list).
- Build a diverse work culture and promote constructive discussions to help mitigate bias.
- Use interpretable latent features to help create human-understandable data.
- Create a rigorous development process that values visibility into each application's latent features.
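The documentation point can be made concrete with a minimal decision log: record what the model saw, which model version ran and what it decided, so a mistake can later be traced back. The field names and log file below are illustrative only.

```python
import json
from datetime import datetime, timezone

# Minimal sketch of decision documentation: append one JSON record per
# prediction so any later mistake can be reverse-engineered.
def log_decision(model_version, features, prediction, path="decisions.log"):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }
    with open(path, "a") as log_file:
        log_file.write(json.dumps(record) + "\n")

# Hypothetical usage with an invented model name and feature set.
log_decision("credit-model-v3", {"income": 4.0, "debt": 3.0}, "approve")
```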
Responsible AI and blockchain
Besides being useful for transactional data, a distributed ledger can be a valuable tool for creating a tamper-proof record that documents why a machine learning model made a particular prediction. That's why some companies are using blockchain, the popular distributed ledger used for the cryptocurrency bitcoin, to document their use of responsible AI.
With blockchain, each step in the development process -- including who made, tested and approved each decision -- is recorded in a human-readable format that can't be altered.
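The tamper-evidence idea can be sketched without a full distributed ledger: each development step is hashed together with the previous entry's hash, so altering any earlier record breaks every hash that follows. The record fields below are illustrative; a production system would use an actual blockchain network rather than a local list.

```python
import hashlib, json

# Simplified, local sketch of a hash-chained audit record.
chain = []

def record_step(author, action):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"author": author, "action": action, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)

record_step("alice", "approved training data v2")
record_step("bob", "signed off on bias test results")

# Verification: recompute every hash; any altered record breaks the chain.
for i, entry in enumerate(chain):
    expected_prev = chain[i - 1]["hash"] if i else "0" * 64
    body = {k: v for k, v in entry.items() if k != "hash"}
    recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    assert entry["prev_hash"] == expected_prev and entry["hash"] == recomputed
print("ledger verified:", len(chain), "entries")
```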
Best practices for responsible AI
When designing responsible AI, governance processes need to be systematic and repeatable. Best practices include the following:
- Implement machine learning best practices.
- Create a diverse culture of support. This includes building gender- and racially diverse teams that work on responsible AI standards, ensuring that review committees are cross-functional across the organization, and enabling people to speak freely about ethical concepts involving AI and bias.
- Make a best effort toward transparency so that any decisions made by AI are explainable.
- Design for responsibility and review for it early in development.
- Make the work as measurable as possible. Responsibility can be subjective at times, so it is key to have measurable processes in place, such as visibility and explainability requirements and an auditable technical or ethical framework (a simple example follows this list).
- Use responsible AI tools to inspect AI models. Options such as explainable AI techniques and TensorFlow's responsible AI toolkit are available. In addition, perform checks such as bias testing and predictive maintenance.
- Stay mindful and learn from the process. An organization will learn more about responsible AI as implementation proceeds, from fairness practices to technical references and material on technical ethics.
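As a simple illustration of making governance measurable, a review checklist can be encoded as data and scored automatically, for example as a gate in a build pipeline. The checklist items below are hypothetical, not a standard.

```python
# Hypothetical review checklist encoded as data so completion is measurable.
review_checklist = {
    "training data documented": True,
    "bias test run and reviewed": True,
    "model decisions explainable": False,
    "audit log enabled": True,
}

completed = sum(review_checklist.values())
score = completed / len(review_checklist)
print(f"responsible AI review: {completed}/{len(review_checklist)} items ({score:.0%})")
for item, done in review_checklist.items():
    if not done:
        print("outstanding:", item)
```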
Examples of companies embracing responsible AI
Microsoft has created its own responsible AI governance framework with help from its AI, Ethics and Effects in Engineering and Research (Aether) Committee and its Office of Responsible AI (ORA). The two groups work together within Microsoft to spread and uphold the company's responsible AI values. ORA, specifically, is responsible for setting company-wide rules for responsible AI through governance and public policy work. Microsoft has implemented a number of responsible AI guidelines, checklists and templates. Some of these include:
- Human-AI interaction guidelines
- Conversational AI guidelines
- Inclusive design guidelines
- AI fairness checklists
- Templates for data sheets
- AI security engineering guidance
FICO has created responsible AI governance policies to help its employees and customers understand how the machine learning models the company uses work, as well as what the programming's limitations are. FICO's data scientists are tasked with considering the entire lifecycle of their machine learning models and continually testing their effectiveness and fairness. FICO has developed several methodologies and processes for bias detection, including:
- Building, executing and monitoring explainable models for AI
- Using blockchain as a governance tool for documenting how an AI model works
- Sharing an explainable AI toolkit with employees and clients
- Comprehensive testing for bias
IBM has its own ethics board dedicated to the issues surrounding artificial intelligence. The IBM AI Ethics Board is a central body that supports the creation of ethical and responsible AI throughout IBM. Some guidelines and resources IBM focuses on include:
- AI trust and transparency
- Everyday ethics for AI
- Open source community resources
- Research into trusted AI