https://www.techtarget.com/searchenterpriseai/definition/responsible-AI
Responsible AI is an approach to developing and deploying artificial intelligence (AI) from both an ethical and legal point of view. The goal of responsible AI is to use AI in a safe, trustworthy and ethical fashion. Using AI responsibly should increase transparency and fairness, as well as reduce issues such as AI bias.
Proponents of responsible artificial intelligence believe a widely adopted governance framework of AI best practices makes it easier for organizations to ensure their AI programming is human-centered, interpretable and explainable. However, ensuring trustworthy AI is up to the data scientists and software developers who write and deploy an organization's AI models. This means the steps required to prevent discrimination and ensure transparency vary from company to company.
Implementation also differs among companies. For example, the chief analytics officer or other dedicated AI officers and teams might be responsible for developing, implementing and monitoring the organization's responsible AI framework. Organizations should also publish documentation of their AI framework on their website, explaining how accountability is handled and how the organization ensures its AI use isn't discriminatory.
Responsible AI is an emerging area of AI governance. The word responsible serves as an umbrella term covering both ethics and AI democratization.
Often, the data sets used to train the machine learning (ML) models behind AI systems introduce bias. Bias gets into these models in one of two ways: through incomplete or faulty data, or through the biases of the people training the model. When an AI program is biased, it can negatively affect or harm humans. For example, it can unfairly decline applications for financial loans or, in healthcare, misdiagnose a patient.
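To make the idea of bias detection more concrete, the following sketch computes a disparate impact ratio, one common fairness check, for a hypothetical loan-approval data set. The column names (`approved`, `group`), the sample data and the 0.8 rule of thumb are illustrative assumptions, not part of any framework described in this article.

```python
# Minimal sketch of one common bias check: the disparate impact ratio.
# Assumes a hypothetical pandas DataFrame of loan decisions with an
# `approved` column (0/1) and a protected-attribute column `group`.
import pandas as pd

def disparate_impact(df: pd.DataFrame, outcome: str, group_col: str,
                     privileged: str, unprivileged: str) -> float:
    """Ratio of approval rates: unprivileged group vs. privileged group.

    A value well below 1.0 (a common rule of thumb is < 0.8) suggests the
    model's outcomes may be biased against the unprivileged group.
    """
    rate_unpriv = df.loc[df[group_col] == unprivileged, outcome].mean()
    rate_priv = df.loc[df[group_col] == privileged, outcome].mean()
    return rate_unpriv / rate_priv

# Example with made-up data:
decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 0, 1, 0],
    "group":    ["A", "B", "A", "A", "B", "B", "A", "B"],
})
print(disparate_impact(decisions, "approved", "group",
                       privileged="A", unprivileged="B"))
```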
As software programs with AI features become more common, it's apparent there's a need for standards in AI beyond the three laws of robotics established by science fiction writer Isaac Asimov.
The implementation of responsible AI can help reduce bias, create more transparent AI systems and increase user trust in those systems.
AI and machine learning models should be developed according to a set of principles, which might differ from organization to organization.
For example, Microsoft and Google each follow their own list of principles. In addition, the National Institute of Standards and Technology (NIST) has published version 1.0 of its Artificial Intelligence Risk Management Framework, which reflects many of the same principles found in Microsoft's and Google's lists. NIST's seven principles include the following:
AI models should be created with concrete goals that focus on building a model in a safe, trustworthy and ethical way. Ongoing scrutiny is crucial to ensure an organization is committed to providing unbiased, trustworthy AI technology. To do this, an organization must follow a maturity model while designing and implementing an AI system.
At a base level, responsible AI is built around development standards that focus on the principles for responsible design. These company-wide AI development standards should include the following mandates:
An organization can implement responsible AI and demonstrate that it has created a responsible AI system in the following ways:
Responsible AI policies and frameworks aren't always easy to implement, because key challenges and concerns often slow down the process. These challenges include the following:
When designing responsible AI, development and deployment processes need to be systematic and repeatable. Some best practices include the following:
From an oversight perspective, organizations should have an AI governance policy that's reusable for each AI system they develop or implement. Governance policies for responsible AI should include the following best practices:
Among the companies pursuing responsible AI strategies and use cases are Microsoft, FICO and IBM.
Microsoft has created a responsible AI governance framework with help from its AI, Ethics and Effects in Engineering and Research Committee and Office of Responsible AI (ORA) groups. These two groups work together within Microsoft to spread and uphold responsible AI values.
ORA is responsible for setting company-wide rules for responsible AI through the implementation of governance and public policy work. Microsoft has implemented several responsible AI guidelines, checklists and templates, including the following:
Credit scoring organization FICO has created responsible AI governance policies to help its employees and customers understand how the ML models the company uses work, as well as those models' limitations. FICO's data scientists are tasked with considering the entire lifecycle of the company's ML models and with continually testing their effectiveness and fairness. FICO has developed the following methodologies and initiatives for bias detection:
IBM has its own ethics board dedicated to issues surrounding artificial intelligence. The IBM AI Ethics Board is a central body that supports the creation of responsible and ethical AI throughout the company. Guidelines and resources IBM uses include the following:
Blockchain is a popular distributed ledger technology used for tracking cryptocurrency transactional data. It's also a valuable tool for creating a tamper-proof record that documents why an ML model made a particular prediction. That's why some companies are using blockchain technology to document their AI use.
With blockchain, each step in the development process -- including who made, tested and approved each decision -- is recorded in a human-readable format that can't be altered.
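As a rough illustration of the underlying mechanism, the sketch below builds a simple hash chain in which each audit record commits to the hash of the previous one, so any later edit becomes detectable. This is a single-process stand-in for a real distributed ledger, not a production implementation; the class name, record fields and sample entries are hypothetical.

```python
# Minimal sketch of a tamper-evident AI audit trail: each record stores a
# hash of the previous record, so altering any earlier entry breaks the
# chain. A real deployment would use an actual distributed ledger.
import hashlib
import json
from datetime import datetime, timezone

class AuditChain:
    def __init__(self):
        self.records = []

    def append(self, actor: str, action: str, detail: str) -> dict:
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,    # who made, tested or approved the decision
            "action": action,  # e.g. "trained", "tested", "approved"
            "detail": detail,
            "prev_hash": prev_hash,
        }
        # Hash is computed over the record body, which includes prev_hash.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash; returns False if any record was altered."""
        prev_hash = "0" * 64
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["prev_hash"] != prev_hash:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != rec["hash"]:
                return False
            prev_hash = rec["hash"]
        return True

chain = AuditChain()
chain.append("data-scientist-1", "trained", "credit-risk model v3")
chain.append("reviewer-2", "approved", "fairness review passed")
print(chain.verify())   # True
chain.records[0]["detail"] = "edited later"
print(chain.verify())   # False -- tampering detected
```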
Top executives at large companies, such as IBM, have publicly called for AI regulations. In the U.S., no federal laws or standards have yet emerged, even with the recent boom in generative AI models such as ChatGPT. However, the EU AI Act of 2024 provides a framework to identify and restrict high-risk AI systems and to protect sensitive data from misuse by such systems.
The U.S. has yet to pass federal legislation governing AI, and there are conflicting opinions on whether AI regulation is on the horizon. However, both NIST and the Biden administration have published broad guidelines for the use of AI: NIST has issued its Artificial Intelligence Risk Management Framework, while the Biden administration has published a Blueprint for an AI Bill of Rights and a roadmap for creating a National AI Research Resource.
In March 2024, the European Parliament ratified the EU AI Act, which includes a regulatory framework for responsible AI practices. It takes a risk-based approach, dividing AI applications into four categories: minimal risk, limited risk, high risk and unacceptable risk. Unacceptable-risk systems are prohibited outright, while high-risk systems face strict obligations that phase in over time. The law applies to both EU and non-EU organizations that handle EU citizens' data.
The EU AI Act has clear implications for how businesses implement and use AI in the real world.