
https://www.techtarget.com/searchenterpriseai/definition/responsible-AI

What is responsible AI?

By Cameron Hashemi-Pour

Responsible AI is an approach to developing and deploying artificial intelligence (AI) from both an ethical and legal point of view. The goal of responsible AI is to use AI in a safe, trustworthy and ethical fashion. Using AI responsibly should increase transparency and fairness, as well as reduce issues such as AI bias.

Proponents of responsible artificial intelligence believe a widely adopted governance framework of AI best practices makes it easier for organizations to ensure their AI programming is human-centered, interpretable and explainable. In practice, however, ensuring trustworthy AI falls to the data scientists and software developers who write and deploy an organization's AI models, so the steps required to prevent discrimination and ensure transparency vary from company to company.

Implementation also differs among companies. For example, the chief analytics officer or other dedicated AI officers and teams might be responsible for developing, implementing and monitoring the organization's responsible AI framework. Organizations should also publish an explanation of their AI framework on their website, documenting who is accountable for it and how they ensure the organization's AI use isn't discriminatory.

Why responsible AI is important

Responsible AI is an emerging area of AI governance. The word responsible acts as an umbrella term covering both ethics and AI democratization.

Often, the data sets used to train the machine learning (ML) models behind AI systems introduce bias. Bias gets into these models in one of two ways: through incomplete or faulty training data, or through the biases of the people who train the model. When an AI program is biased, it can end up negatively affecting or harming humans. For example, it can unfairly decline applications for financial loans or, in healthcare, inaccurately diagnose a patient.
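
One common early check is a simple audit of the training data itself. The following is a minimal sketch in Python, not taken from this article; the file name and the "gender" and "approved" columns are hypothetical stand-ins for a protected attribute and a labeled outcome.

import pandas as pd

df = pd.read_csv("loan_training_data.csv")  # hypothetical training set

# How well is each group represented in the training data?
print(df["gender"].value_counts(normalize=True))

# Does the labeled outcome differ sharply across groups? Large gaps can
# signal historical bias baked into the labels rather than a real pattern.
print(df.groupby("gender")["approved"].mean())

# Incomplete or faulty data: which columns have the most missing values?
print(df.isna().mean().sort_values(ascending=False).head())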

As software programs with AI features become more common, it's apparent there's a need for standards in AI beyond the three laws of robotics established by science fiction writer Isaac Asimov.

The implementation of responsible AI can help reduce bias, create more transparent AI systems and increase user trust in those systems.

What are the principles of responsible AI?

AI and machine learning models should be built to follow a set of principles, which might differ from organization to organization.

For example, Microsoft and Google both follow their own list of principles. In addition, the National Institute of Standards and Technology (NIST) has published a 1.0 version of its Artificial Intelligence Risk Management Framework that follows many of the same principles found in Microsoft and Google's lists. NIST's seven principles include the following:

  1. Valid and reliable. Responsible AI systems should be able to maintain their performance in different and unexpected circumstances without failure.
  2. Safe. Responsible AI must keep human life, property and the environment safe.
  3. Secure and resilient. Responsible AI systems should be secure and resilient against potential threats, such as adversarial attacks. Responsible AI systems must be built to avoid, protect against and respond to attacks, while also being able to recover from them.
  4. Accountable and transparent. Increased transparency is meant to build trust in the AI system, while making it easier to fix problems associated with AI model outputs. This principle requires that developers take responsibility for their AI systems.
  5. Explainable and interpretable. Explainability and interpretability are meant to provide in-depth insights into the functionality and trustworthiness of an AI system. For example, explainable AI tells users why and how the system got to its output.
  6. Privacy-enhanced. The privacy principle enforces practices that safeguard end-user autonomy, identity and dignity. Responsible AI systems must be developed and deployed with values, such as anonymity, confidentiality and control.
  7. Fair with harmful bias managed. Fairness focuses on eliminating AI bias and discrimination. It attempts to ensure equality and equity, a difficult task because these values differ among organizations and cultures; a minimal fairness check is sketched after this list.
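
As a minimal illustration of what managing harmful bias can look like in practice, the sketch below (not from this article; the group labels and predictions are hypothetical) computes two commonly used fairness measures, the demographic parity difference and the disparate impact ratio, over a model's predictions.

import numpy as np

# Hypothetical model predictions (1 = favorable outcome, such as a loan
# approval), split by a protected attribute.
preds_group_a = np.array([1, 1, 0, 1, 0, 1, 1, 0])
preds_group_b = np.array([1, 0, 0, 0, 1, 0, 0, 0])

rate_a = preds_group_a.mean()
rate_b = preds_group_b.mean()

# Demographic parity difference: the gap between favorable-outcome rates.
parity_difference = abs(rate_a - rate_b)

# Disparate impact ratio: values well below 0.8 are often treated as a red flag.
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Selection rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"Demographic parity difference: {parity_difference:.2f}")
print(f"Disparate impact ratio: {impact_ratio:.2f}")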

How do you design responsible AI?

AI models should be created with concrete goals that focus on building a model in a safe, trustworthy and ethical way. Ongoing scrutiny is crucial to ensure an organization is committed to providing unbiased, trustworthy AI technology. To do this, an organization must follow a maturity model while designing and implementing an AI system.

At a base level, responsible AI is built around development standards that focus on the principles for responsible design. These company-wide AI development standards should include the following mandates:

Responsible AI implementation and how it works

An organization can implement responsible AI and demonstrate that it has created a responsible AI system in the following ways:

What are the key challenges in implementing responsible AI?

Responsible AI policies and frameworks aren't always easy to implement, because key challenges and concerns often slow down the process. These challenges include the following:

Best practices for responsible AI principles

When designing responsible AI, development and deployment processes need to be systematic and repeatable. Some best practices include the following:

Best practices for responsible AI governance

From an oversight perspective, organizations should have an AI governance policy that's reusable for each AI system they develop or implement. Governance policies for responsible AI should include the following best practices:

Examples of companies embracing responsible AI

Among the companies pursuing responsible AI strategies and use cases are Microsoft, FICO and IBM.

Microsoft

Microsoft has created a responsible AI governance framework with help from its AI, Ethics and Effects in Engineering and Research Committee and Office of Responsible AI (ORA) groups. These two groups work together within Microsoft to spread and uphold responsible AI values.

ORA is responsible for setting company-wide rules for responsible AI through the implementation of governance and public policy work. Microsoft has implemented several responsible AI guidelines, checklists and templates, including the following:

FICO

Credit scoring organization FICO has created responsible AI governance policies to help its employees and customers understand how the ML models the company uses work and what their limitations are. FICO's data scientists are tasked with considering the entire lifecycle of its ML models and continually testing those models for effectiveness and fairness. FICO has developed the following methodologies and initiatives for bias detection:

IBM

IBM has its own ethics board dedicated to issues surrounding artificial intelligence. The IBM AI Ethics Board is a central body that supports the creation of responsible and ethical AI throughout the company. Guidelines and resources IBM uses include the following:

Blockchain's role in responsible AI

Blockchain is a popular distributed ledger technology used for tracking cryptocurrency transactional data. It's also a valuable tool for creating a tamper-proof record that documents why an ML model made a particular prediction. That's why some companies are using blockchain technology to document their AI use.

With blockchain, each step in the development process -- including who made, tested and approved each decision -- is recorded in a human-readable format that can't be altered.
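
To make the idea concrete, here is a simplified, hypothetical sketch in Python (not any vendor's actual implementation) of a hash-chained log: each entry includes the hash of the previous one, so modifying an earlier record breaks verification of everything that follows it.

import hashlib
import json

def append_entry(chain, record):
    """Append a development decision to the log, chained to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    entry_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": entry_hash})

def verify(chain):
    """Recompute every hash; any altered entry breaks the chain from that point on."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_entry(chain, {"step": "training data approved", "by": "analyst_a"})  # hypothetical entries
append_entry(chain, {"step": "model v1 fairness test passed", "by": "reviewer_b"})
print(verify(chain))  # True until any earlier record is modified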

Responsible AI standardization

Top executives at large companies, such as IBM, have publicly called for AI regulations. In the U.S., no federal laws or standards have yet emerged, even with the recent boom in generative AI models such as ChatGPT. In the European Union, however, the EU AI Act of 2024 provides a framework for identifying high-risk AI systems and protecting sensitive data from misuse by such systems.

There are conflicting opinions on whether U.S. AI regulation is on the horizon. However, both NIST and the Biden administration have published broad guidelines for the use of AI: NIST has issued its Artificial Intelligence Risk Management Framework, and the Biden administration has published a Blueprint for an AI Bill of Rights, an AI risk management framework and a roadmap for creating a National AI Research Resource.

In March 2024, the European Parliament approved the EU AI Act, which includes a regulatory framework for responsible AI practices. It takes a risk-based approach, dividing AI applications into four categories: minimal risk, limited risk, high risk and unacceptable risk. Systems deemed an unacceptable risk are prohibited, while high-risk systems must meet strict requirements before and after deployment. The law applies to both EU and non-EU organizations that handle EU citizens' data.

The EU AI Act has clear implications for how businesses implement and use AI in the real world.

29 Oct 2024
