
AI ethics (AI code of ethics)

What are ethics in AI?

AI ethics is a system of moral principles and techniques intended to inform the development and responsible use of artificial intelligence technology. As AI has become integral to products and services, organizations are starting to develop AI codes of ethics.

An AI code of ethics, also called an AI value platform, is a policy statement that formally defines the role of artificial intelligence as it applies to the continued development of the human race. The purpose of an AI code of ethics is to provide stakeholders with guidance when faced with an ethical decision regarding the use of artificial intelligence.

Isaac Asimov, the science fiction writer, foresaw the potential dangers of autonomous AI agents long before their development and created the Three Laws of Robotics as a means of limiting those risks. In Asimov's code of ethics, the first law forbids robots from actively harming humans or, through inaction, allowing humans to come to harm. The second law orders robots to obey humans unless doing so conflicts with the first law. The third law orders robots to protect themselves insofar as doing so does not conflict with the first two laws.

The rapid advancement of AI in the past five to 10 years has spurred groups of experts to develop safeguards for protecting against the risk of AI to humans. One such group is the nonprofit Future of Life Institute, founded by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn and DeepMind research scientist Victoria Krakovna. The institute worked with AI researchers and developers, as well as scholars from many disciplines, to create the 23 guidelines now referred to as the Asilomar AI Principles.

Kelly Combs, director of KPMG's Digital Lighthouse, said that when developing an AI code of ethics, "it's imperative to include clear guidelines on how the technology will be deployed and continuously monitored." These policies should mandate measures that guard against unintended bias in machine learning algorithms, continuously detect drift in data and algorithms, and track both the provenance of data and the identity of those who train algorithms.
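The continuous drift detection Combs describes can be sketched as a periodic statistical check that compares a feature's live distribution against its training baseline. The sketch below uses a two-sample Kolmogorov-Smirnov statistic in plain Python; the 0.2 threshold and the age data are illustrative assumptions, not KPMG guidance:

```python
import bisect

def ecdf(sorted_vals, x):
    """Fraction of values <= x in a pre-sorted sample."""
    return bisect.bisect_right(sorted_vals, x) / len(sorted_vals)

def ks_statistic(baseline, live):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the two empirical CDFs."""
    b, l = sorted(baseline), sorted(live)
    return max(abs(ecdf(b, x) - ecdf(l, x)) for x in set(baseline) | set(live))

def drift_alert(baseline, live, threshold=0.2):
    """Flag a feature for human review when its live distribution
    has shifted away from the training baseline."""
    return ks_statistic(baseline, live) > threshold

training_ages = [22, 25, 31, 34, 40, 41, 47, 52, 58, 63]
live_ages     = [21, 24, 30, 35, 39, 42, 46, 51, 59, 62]  # similar population
shifted_ages  = [60, 62, 65, 66, 68, 70, 71, 73, 75, 78]  # population has drifted

print(drift_alert(training_ages, live_ages))     # -> False
print(drift_alert(training_ages, shifted_ages))  # -> True
```

In production this check would run on a schedule against each monitored feature, with alerts routed to the team that owns the model.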

Asilomar AI Principles
AI experts and scholars from many disciplines created 23 guidelines now referred to as the Asilomar AI Principles.

Why are AI ethics important?

AI is a technology designed by humans to replicate, augment or replace human intelligence. These tools typically rely on large volumes of various types of data to develop insights. Poorly designed projects built on data that is faulty, inadequate or biased can have unintended, potentially harmful, consequences. Moreover, the rapid advancement in algorithmic systems means that in some cases it is not clear to us how the AI reached its conclusions, so we are essentially relying on systems we can't explain to make decisions that could affect society.

An AI ethics framework is important because it shines a light on the risks and benefits of AI tools and establishes guidelines for their responsible use. Coming up with a system of moral tenets and techniques for using AI responsibly requires the industry and interested parties to examine major social issues and, ultimately, the question of what makes us human.

What are the ethical challenges of AI?

Enterprises face several ethical challenges in their use of AI technology.

  • Explainability. When AI systems go awry, teams need to be able to trace through a complex chain of algorithmic systems and data processes to find out why. Organizations using AI should be able to explain the source data, resulting data, what their algorithms do and why they are doing that. "AI needs to have a strong degree of traceability to ensure that if harms arise, they can be traced back to the cause," said Adam Wisniewski, CTO and co-founder of AI Clearing. 
  • Responsibility. Society is still sorting out responsibility when decisions made by AI systems have catastrophic consequences, including loss of capital, health or life. Responsibility for the consequences of AI-based decisions needs to be sorted out in a process that includes lawyers, regulators and citizens. One challenge is finding the appropriate balance in cases where an AI system may be safer than the human activity it is duplicating but still causes problems, such as weighing the merits of autonomous driving systems that cause fatalities but far fewer than people do.
  • Fairness. In data sets involving personally identifiable information, it is extremely important to ensure that there are no biases in terms of race, gender or ethnicity.
  • Misuse. AI algorithms may be used for purposes other than those for which they were created. Wisniewski said these scenarios should be analyzed at the design stage to minimize the risks and introduce safety measures to reduce the adverse effects in such cases.
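The traceability Wisniewski calls for is often realized as an audit trail that links each decision back to the model version, the input it saw and the provenance of its training data. A minimal sketch (the record fields, identifiers and hashing scheme here are illustrative assumptions, not AI Clearing's design):

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(obj):
    """Stable hash of a JSON-serializable object, for tamper-evident lineage."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()[:12]

def log_prediction(model_version, training_data_id, features, prediction, audit_log):
    """Append one audit record linking a decision to its model and data."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,           # which model decided
        "training_data_id": training_data_id,     # provenance of training data
        "input_hash": fingerprint(features),      # what the model saw
        "prediction": prediction,                 # what it decided
    })

audit_log = []
log_prediction("credit-v2.3", "loans-2022-q4",
               {"income": 52000, "zip": "60614"}, "approve", audit_log)
print(audit_log[0]["model_version"])  # -> credit-v2.3
```

If a harmful decision surfaces later, the record answers the first investigative questions: which model, trained on what, given which input.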

What are the benefits of ethical AI?

The rapid acceleration in AI adoption across businesses has coincided with -- and in many cases helped fuel -- two major trends: the rise of customer-centricity and the rise in social activism.

"Businesses are rewarded not only for providing personalized products and services but also for upholding customer values and doing good for the society in which they operate," said Sudhir Jha, senior vice president and head of the Brighterion unit at Mastercard.

AI plays a huge role in how consumers interact with and perceive a brand. Responsible use is necessary to ensure a positive impact. In addition to consumers, employees want to feel good about the businesses they work for. "Responsible AI can go a long way in retaining talent and ensuring smooth execution of a company's operations," Jha said.

AI code of ethics perspectives
As AI and machine learning are becoming central to IT systems, companies must ensure their use of AI is ethical.

What is an AI code of ethics?

A proactive approach to ensuring ethical AI requires addressing three key areas, according to Jason Shepherd, vice president of ecosystem at Zededa, an edge AI tools provider.

  • Policy. This includes developing the appropriate framework for driving standardization and establishing regulations. Efforts like the Asilomar AI Principles are essential to start the conversation, and there are several efforts spinning up around policy in Europe, the U.S. and elsewhere. Ethical AI policies also need to address how to deal with legal issues when something goes wrong. Companies may incorporate AI policies into their own code of conduct. But effectiveness will depend on employees following the rules, which may not always be realistic when money or prestige are on the line.
  • Education. Executives, data scientists, front-line employees and consumers all need to understand policies, key considerations and potential negative impacts of unethical AI and fake data. One big concern is the tradeoff between ease of use around data sharing and AI automation and the potential negative repercussions of oversharing or adverse automations. "Ultimately, consumers' willingness to proactively take control of their data and pay attention to potential threats enabled by AI is a complex equation based on a combination of instant gratification, value, perception and risk," Shepherd said.
  • Technology. Executives also need to architect AI systems to automatically detect fake data and unethical behavior. This requires not just looking at a company's own AI but vetting suppliers and partners for the malicious use of AI. Examples include the deployment of deepfake videos and text to undermine a competitor, or the use of AI to launch sophisticated cyberattacks. This will become more of an issue as AI tools become commoditized. To combat this potential snowball effect, organizations need to invest in defensive measures rooted in open, transparent and trusted AI infrastructure. Shepherd believes this will give rise to the adoption of trust fabrics that provide a system-level approach to automating privacy assurance, ensuring data confidence and detecting unethical use of AI.

Examples of AI codes of ethics

An AI code of ethics can spell out the principles and provide the motivation that drives appropriate behavior. For example, Mastercard's Jha said he is working with the following tenets to help develop the company's AI code of ethics:

  • An ethical AI system must be inclusive, explainable, have a positive purpose and use data responsibly.
  • An inclusive AI system is unbiased and works equally well across all spectra of society. Building such a system requires full knowledge of each data source used to train the AI models, to ensure there is no inherent bias in the data set. It also requires a careful audit of the trained model to filter out any problematic attributes learned in the process. And the models must be monitored closely to ensure they do not become corrupted later.
  • An explainable AI system supports the governance required of companies to ensure the ethical use of AI. It is hard to be confident in the actions of a system that cannot be explained. Attaining confidence might entail a tradeoff in which a small compromise in model performance is made in order to select an algorithm that can be explained.
  • An AI system endowed with a positive purpose aims, for example, to reduce fraud, eliminate waste, reward people, slow climate change or cure disease. Any technology can be used for doing harm, but it is imperative that we think of ways to safeguard AI from being exploited for bad purposes. This will be a tough challenge, but given the wide scope and scale of AI, the risk of not addressing it and allowing the technology to be misused is greater than ever before.
  • An AI system that uses data responsibly observes data privacy rights. Data is key to an AI system, and often more data results in better models. However, it is critical that in the race to collect more and more data, people's right to privacy and transparency isn't sacrificed. Responsible collection, management and use of data is essential to creating an AI system that can be trusted. In an ideal world, data should only be collected when needed, not continuously, and the granularity of data should be as narrow as possible. For example, if an application only needs zip code-level geolocation data to provide weather prediction, it shouldn't collect the exact location of the consumer. And the system should routinely delete data that is no longer required.
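The data-minimization idea in the last tenet -- collect only the granularity the task needs, and delete what is no longer required -- can be sketched directly. Here exact coordinates are coarsened before storage and records past a retention window are purged; the one-decimal precision and 30-day window are illustrative assumptions, not Mastercard policy:

```python
from datetime import datetime, timedelta, timezone

def coarsen_location(lat, lon, decimals=1):
    """Round coordinates to roughly neighborhood granularity -- enough
    for a weather forecast, not enough to identify a household."""
    return (round(lat, decimals), round(lon, decimals))

def purge_stale(records, max_age_days=30, now=None):
    """Routinely delete data that is no longer required."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [r for r in records if r["collected_at"] >= cutoff]

# Store only the coarsened location, never the exact fix.
print(coarsen_location(41.88325, -87.63240))  # -> (41.9, -87.6)

records = [
    {"loc": (41.9, -87.6), "collected_at": datetime.now(timezone.utc)},
    {"loc": (41.9, -87.6), "collected_at": datetime.now(timezone.utc) - timedelta(days=90)},
]
print(len(purge_stale(records)))  # -> 1 (the 90-day-old record is dropped)
```

The design choice is that minimization happens at ingestion: data that was never stored at fine granularity cannot later leak at fine granularity.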

The future of ethical AI

Some argue that an AI code of ethics can quickly become out of date and that a more proactive approach is required to adapt to a rapidly evolving field. Arijit Sengupta, founder and CEO of Aible, an AI development platform, said, "The fundamental problem with an AI code of ethics is that it's reactive, not proactive. We tend to define things like bias and go looking for bias and trying to eliminate it -- as if that's possible."

A reactive approach can have trouble dealing with bias embedded in the data. For example, if women have not historically received loans at the appropriate rate, that will get woven into the data in multiple ways. "If you remove variables related to gender, AI will just pick up other variables that serve as a proxy for gender," Sengupta said.

He believes the future of ethical AI must center on defining fairness and societal norms. So, for example, at a lending bank, management and AI teams would need to decide whether to aim for equal consideration (e.g., loans processed at an equal rate for all races), proportional results (the success rate for each race is relatively equal) or equal impact (a proportional amount of loans goes to each race). The focus needs to be on a guiding principle rather than on something to avoid, Sengupta argued.
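The three fairness targets Sengupta lists can each be made concrete as a measurable rate per group; which rate a bank chooses to equalize is a policy decision, not a technical one. A sketch with made-up loan data (the group labels and records are illustrative assumptions):

```python
def rates_by_group(applications):
    """Per group, compute the rates behind Sengupta's three targets:
    processing rate (equal consideration), approval rate among processed
    (proportional results) and share of all approvals (equal impact)."""
    stats = {}
    total_approved = sum(1 for a in applications if a["approved"])
    for g in {a["group"] for a in applications}:
        apps = [a for a in applications if a["group"] == g]
        processed = [a for a in apps if a["processed"]]
        approved = [a for a in processed if a["approved"]]
        stats[g] = {
            "consideration": len(processed) / len(apps),
            "success": len(approved) / len(processed) if processed else 0.0,
            "impact": len(approved) / total_approved if total_approved else 0.0,
        }
    return stats

apps = [
    {"group": "A", "processed": True,  "approved": True},
    {"group": "A", "processed": True,  "approved": False},
    {"group": "B", "processed": True,  "approved": True},
    {"group": "B", "processed": False, "approved": False},
]
stats = rates_by_group(apps)
print(stats["A"])  # -> {'consideration': 1.0, 'success': 0.5, 'impact': 0.5}
print(stats["B"])  # -> {'consideration': 0.5, 'success': 1.0, 'impact': 0.5}
```

Note that in this toy data the two groups already differ on consideration and success while matching on impact, which is exactly why the guiding principle must be chosen explicitly: the metrics generally cannot all be equalized at once.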

Most people would agree that it is easier and more effective to teach children what their guiding principles should be rather than to list out every possible decision they might encounter and tell them what to do and what not to do. "That's the approach we're taking with AI ethics," Sengupta said. "We are telling a child everything it can and cannot do instead of providing guiding principles and then allowing them to figure it out for themselves."

For now, we have to turn to humans to develop rules and technologies that promote responsible AI. Shepherd said this includes programming products and offers that protect human interests and are not biased against certain groups, such as minority groups, those with special needs and the poor. The latter is especially concerning as AI has the potential to spur massive social and economic warfare, furthering the divide between those who can afford technology (including human augmentation) and those who cannot.

Down the road, we also need to plan for the unethical use of AI by bad actors. Today's AI systems range from fancy rules engines to machine learning models that automate simple tasks. "It may be decades before more sentient AIs begin to emerge that can automate their own unethical behavior at a scale that humans wouldn't be able to keep up with," Shepherd said.

This was last updated in January 2023

