
AI ethics (AI code of ethics)

What are ethics in AI?

AI ethics is a system of moral principles and techniques intended to inform the development and responsible use of artificial intelligence technology. As AI has become integral to products and services, organizations are starting to develop AI codes of ethics.

An AI code of ethics, also sometimes called an AI value platform, is a policy statement that formally defines the role of artificial intelligence as it applies to the development and well-being of the human race. The purpose of an AI code of ethics is to provide stakeholders with guidance when faced with an ethical decision regarding the use of artificial intelligence.

Isaac Asimov, the science fiction writer, foresaw the potential dangers of autonomous AI agents long before their development and created the Three Laws of Robotics as a means of limiting those risks. In Asimov's code of ethics, the first law forbids robots from actively harming humans or from allowing harm to come to humans through inaction. The second law orders robots to obey humans unless an order conflicts with the first law. The third law orders robots to protect themselves insofar as doing so accords with the first two laws.

The rapid advancement of AI in the past five to 10 years has spurred groups of experts to develop safeguards against the risks AI poses to humans. One such group is the nonprofit Future of Life Institute, founded by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn and DeepMind research scientist Victoria Krakovna. The institute worked with AI researchers and developers, as well as scholars from many disciplines, to create the 23 guidelines now referred to as the Asilomar AI Principles.

Kelly Combs, managing director at KPMG US, said that when developing an AI code of ethics, "it's imperative to include clear guidelines on how the technology will be deployed and continuously monitored." These policies should mandate measures that guard against unintended bias in machine learning algorithms, continuously detect drift in data and algorithms, and track both the provenance of data and the identity of those who train the algorithms.
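The drift detection these policies call for can start with something as simple as comparing the distribution a model was trained on against what it sees in production. Below is a minimal, illustrative Python sketch using the population stability index, a common drift statistic; the bin count, the 1e-6 floor and the 0.2 alert threshold mentioned in the comment are conventional assumptions, not prescriptions from the article.

```python
from collections import Counter
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two numeric samples.

    Values above roughly 0.2 are commonly treated as significant drift.
    """
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0  # avoid zero width for constant data

    def distribution(sample):
        counts = Counter(
            min(int((x - lo) / width), bins - 1) for x in sample
        )
        n = len(sample)
        # A small floor avoids log/division problems for empty bins.
        return [max(counts.get(b, 0) / n, 1e-6) for b in range(bins)]

    p, q = distribution(baseline), distribution(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Feature values at training time vs. values seen in production.
train_scores = [x / 100 for x in range(100)]                   # uniform 0..0.99
live_scores = [min(x / 100 + 0.3, 0.99) for x in range(100)]   # shifted upward

print(round(psi(train_scores, live_scores), 3))
```

Running this check on a schedule against each model input, and alerting when the index crosses the chosen threshold, is one straightforward way to make "continuously detect drift" operational.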


Why are AI ethics important?

AI is a technology designed by humans to replicate, augment or replace human intelligence. These tools typically rely on large volumes of various types of data to develop insights. Poorly designed projects built on data that is faulty, inadequate or biased can have unintended, potentially harmful, consequences. Moreover, the rapid advancement in algorithmic systems means that in some cases it is not clear to us how the AI reached its conclusions, so we are essentially relying on systems we can't explain to make decisions that could affect society.

An AI ethics framework is important because it shines a light on the risks and benefits of AI tools and establishes guidelines for their responsible use. Coming up with a system of moral tenets and techniques for using AI responsibly requires the industry and interested parties to examine major social issues and ultimately the question of what makes us human.

What are the ethical challenges of AI?

Enterprises face several ethical challenges in their use of AI technologies.

  • Explainability. When AI systems go awry, teams need to be able to trace through a complex chain of algorithmic systems and data processes to find out why. Organizations using AI should be able to explain the source data, resulting data, what their algorithms do and why they are doing that. "AI needs to have a strong degree of traceability to ensure that if harms arise, they can be traced back to the cause," said Adam Wisniewski, CTO and co-founder of AI Clearing.
  • Responsibility. Society is still sorting out responsibility when decisions made by AI systems have catastrophic consequences, including loss of capital, health or life. The process of addressing accountability for the consequences of AI-based decisions should involve a range of stakeholders, including lawyers, regulators, AI developers, ethics bodies and citizens. One challenge is finding the appropriate balance in cases where an AI system may be safer than the human activity it is duplicating but still causes problems, such as autonomous driving systems that cause fatalities, but far fewer than human drivers do.
  • Fairness. In data sets involving personally identifiable information, it is extremely important to ensure that there are no biases in terms of race, gender or ethnicity.
  • Misuse. AI algorithms may be used for purposes other than those for which they were created. Wisniewski said these scenarios should be analyzed at the design stage to minimize the risks and introduce safety measures to reduce the adverse effects in such cases.
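Traceability of the kind Wisniewski describes is often implemented as an audit record written alongside every AI decision, capturing which model and which training data produced it. The sketch below is purely illustrative; the field names, schema and identifiers are assumptions for the example, not any standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version, training_data_uri, inputs, prediction):
    """Build a traceability record for one AI decision.

    Hypothetical schema: chosen for illustration so that a harmful
    outcome can later be traced back to the model and data behind it.
    """
    payload = json.dumps(inputs, sort_keys=True).encode()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "training_data_uri": training_data_uri,
        # Hash rather than store raw inputs, so the log itself holds no PII.
        "input_hash": hashlib.sha256(payload).hexdigest(),
        "prediction": prediction,
    }

record = audit_record(
    model_version="credit-risk-2.4.1",            # hypothetical model tag
    training_data_uri="s3://datasets/loans/2023-09",  # hypothetical dataset
    inputs={"income": 52000, "term_months": 36},
    prediction="approve",
)
print(record["model_version"], record["input_hash"][:12])
```

Because the inputs are hashed deterministically, the same decision can later be matched to its log entry without the log becoming a second copy of sensitive data.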

The public release and rapid adoption of generative AI applications such as ChatGPT and Dall-E, which are trained on existing content to generate new content, amplify the ethical issues surrounding AI and introduce new risks related to misinformation, plagiarism, copyright infringement and harmful content.


What are the benefits of ethical AI?

The rapid acceleration in AI adoption across businesses has coincided with -- and in many cases helped fuel -- two major trends: the rise of customer-centricity and the rise in social activism.

"Businesses are rewarded not only for providing personalized products and services but also for upholding customer values and doing good for the society in which they operate," said Sudhir Jha, executive vice president and head of Mastercard's Brighterion unit.

AI plays a huge role in how consumers interact with and perceive a brand. Responsible use is necessary to ensure a positive impact. In addition to consumers, employees want to feel good about the businesses they work for. "Responsible AI can go a long way in retaining talent and ensuring smooth execution of a company's operations," Jha said.


What is an AI code of ethics?

A proactive approach to ensuring ethical AI requires addressing three key areas, according to Jason Shepherd, CEO at Nubix.

  • Policy. This includes developing the appropriate framework for driving standardization and establishing regulations. Documents like the Asilomar AI Principles can be useful to start the conversation. Government agencies in the United States, Europe and elsewhere have launched efforts to ensure ethical AI, and a raft of standards, tools and techniques from research bodies, vendors and academic institutions are available to help organizations craft AI policy. See "Resources for developing ethical AI" (below). Ethical AI policies will need to address how to deal with legal issues when something goes wrong. Companies should consider incorporating AI policies into their own codes of conduct. But effectiveness will depend on employees following the rules, which may not always be realistic when money or prestige are on the line.
  • Education. Executives, data scientists, front-line employees and consumers all need to understand policies, key considerations and potential negative impacts of unethical AI and fake data. One big concern is the tradeoff between ease of use around data sharing and AI automation and the potential negative repercussions of oversharing or adverse automations. "Ultimately, consumers' willingness to proactively take control of their data and pay attention to potential threats enabled by AI is a complex equation based on a combination of instant gratification, value, perception and risk," Shepherd said.
  • Technology. Executives also need to architect AI systems to automatically detect fake data and unethical behavior. This requires not just looking at a company's own AI but vetting suppliers and partners for the malicious use of AI. Examples include the deployment of deep fake videos and text to undermine a competitor, or the use of AI to launch sophisticated cyberattacks. This will become more of an issue as AI tools become commoditized. To combat this potential snowball effect, organizations need to invest in defensive measures rooted in open, transparent and trusted AI infrastructure. Shepherd believes this will give rise to the adoption of trust fabrics that provide a system-level approach to automating privacy assurance, ensuring data confidence and detecting unethical use of AI.

Examples of AI codes of ethics

An AI code of ethics can spell out the principles and provide the motivation that drives appropriate behavior. For example, Mastercard's Jha said he is working with the following tenets to help develop the company's AI code of ethics:

  • An ethical AI system must be inclusive, explainable, have a positive purpose and use data responsibly.
  • An inclusive AI system is unbiased and works equally well across all spectra of society. This requires full knowledge of each data source used to train the AI models in order to ensure no inherent bias in the data set. It also requires a careful audit of the trained model to filter out any problematic attributes learned in the process. And the models need to be closely monitored after deployment to ensure they don't become corrupted over time.
  • An explainable AI system supports the governance required of companies to ensure the ethical use of AI. It is hard to be confident in the actions of a system that cannot be explained. Attaining confidence might entail a tradeoff in which a small compromise in model performance is made in order to select an algorithm that can be explained.
  • An AI system endowed with a positive purpose aims to, for example, reduce fraud, eliminate waste, reward people, slow climate change, cure disease, etc. Any technology can be used for doing harm, but it is imperative that we think of ways to safeguard AI from being exploited for bad purposes. This will be a tough challenge, but given the wide scope and scale of AI, the risk of not addressing this challenge and misusing this technology is far greater than ever before.
  • An AI system that uses data responsibly observes data privacy rights. Data is key to an AI system, and often more data results in better models. However, it is critical that in the race to collect more and more data, people's right to privacy and transparency isn't sacrificed. Responsible collection, management and use of data is essential to creating an AI system that can be trusted. In an ideal world, data should only be collected when needed, not continuously, and the granularity of data should be as narrow as possible. For example, if an application only needs zip code-level geolocation data to provide weather prediction, it shouldn't collect the exact location of the consumer. And the system should routinely delete data that is no longer required.
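The zip code example in the last tenet can be made concrete: coarsen location data before storing it, and routinely purge whatever has aged past the retention window. A minimal Python sketch under assumed precision and retention values; the one-decimal grid and 30-day window are illustrative choices, not recommendations from the article.

```python
from datetime import datetime, timedelta

def coarsen_location(lat, lon, decimals=1):
    """Round coordinates so only an area-level location is stored.

    One decimal degree is roughly an 11 km grid: enough for a weather
    lookup, far too coarse to identify a household.
    """
    return round(lat, decimals), round(lon, decimals)

def purge_expired(records, now, max_age_days=30):
    """Drop records older than the retention window."""
    return [r for r in records if (now - r["collected"]).days <= max_age_days]

now = datetime(2023, 10, 1)
records = [
    {"collected": now - timedelta(days=5), "query": "weather"},
    {"collected": now - timedelta(days=90), "query": "weather"},  # stale
]
print(coarsen_location(40.7128, -74.0060))  # precise point becomes area-level
print(len(purge_expired(records, now)))     # stale record is dropped
```

Applying both steps at collection time, rather than cleaning up later, keeps the system aligned with the "collect only what is needed" principle above.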

Resources for developing ethical AI

Listed alphabetically, the following are a sampling of the growing number of organizations, policymakers and regulatory standards focused on developing ethical AI practices:

  1. AI Now Institute. Focuses on the social implications of AI and policy research in responsible AI. Research areas include algorithmic accountability, antitrust concerns, biometrics, worker data rights, large-scale AI models and privacy. The report "AI Now 2023 Landscape: Confronting Tech Power" provides a deep dive into many ethical issues that can be helpful in developing ethical AI policies.
  2. Berkman Klein Center for Internet & Society at Harvard University. Fosters research into the big questions related to the ethics and governance of AI. Research supported by the Berkman Klein Center has tackled topics that include information quality, algorithms in criminal justice, development of AI governance frameworks and algorithmic accountability.
  3. CEN-CENELEC Joint Technical Committee on Artificial Intelligence (JTC 21). An ongoing EU initiative for various responsible AI standards. The group plans to produce standards for the European market and inform EU legislation, policies and values. It also plans to specify technical requirements for characterizing transparency, robustness and accuracy in AI systems.
  4. Institute for Technology, Ethics and Culture (ITEC) Handbook. A collaborative effort between Santa Clara University's Markkula Center for Applied Ethics and the Vatican to develop a practical, incremental roadmap for technology ethics. The handbook includes a five-stage maturity model, with specific measurable steps that enterprises can take at each level of maturity. It also promotes an operational approach for implementing ethics as an ongoing practice, akin to DevSecOps for ethics. The core idea is to bring legal, technical and business teams together during ethical AI's early stages to root out the bugs at a time when they're much cheaper to fix than after responsible AI deployment.
  5. ISO/IEC 23894:2023, Information technology -- Artificial intelligence -- Guidance on risk management. The standard describes how an organization can manage risks specifically related to AI. It can help standardize the technical language characterizing underlying principles and how these principles apply to developing, provisioning or offering AI systems. It also covers policies, procedures and practices for assessing, treating, monitoring, reviewing and recording risk. It's highly technical and oriented toward engineers rather than business experts.
  6. NIST AI Risk Management Framework (AI RMF 1.0). A guide for government agencies and the private sector on managing new AI risks and promoting responsible AI. Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute, pointed to the depth of the NIST framework, especially its specificity in implementing controls and policies to better govern AI systems within different organizational contexts.
  7. Nvidia NeMo Guardrails. Provides a flexible interface for defining specific behavioral rails that bots need to follow. It supports the Colang modeling language. One chief data scientist said his company uses the open source toolkit to prevent a support chatbot on a lawyer's website from providing answers that might be construed as legal advice.
  8. Stanford Institute for Human-Centered Artificial Intelligence (HAI). Provides ongoing research and guidance into best practices for human-centered AI. One early initiative in collaboration with Stanford Medicine is Responsible AI for Safe and Equitable Health, which addresses ethical and safety issues surrounding AI in health and medicine.
  9. "Towards Unified Objectives for Self-Reflective AI." Authored by Matthias Samwald, Robert Praas and Konstantin Hebenstreit, the paper takes a Socratic approach to identify underlying assumptions, contradictions and errors through dialogue and questioning about truthfulness, transparency, robustness and alignment of ethical principles. One goal is to develop AI meta-systems in which two or more component AI models complement, critique and improve their mutual performance.
  10. World Economic Forum's "The Presidio Recommendations on Responsible Generative AI." Provides 30 action-oriented recommendations to navigate AI complexities and harness its potential ethically. This white paper also includes sections on responsible development and release of generative AI, open innovation and international collaboration, and social progress.

The future of ethical AI

Some argue that an AI code of ethics can quickly become out of date and that a more proactive approach is required to adapt to a rapidly evolving field. Arijit Sengupta, founder and CEO of Aible, an AI development platform, said, "The fundamental problem with an AI code of ethics is that it's reactive, not proactive. We tend to define things like bias and go looking for bias and trying to eliminate it -- as if that's possible."

A reactive approach can have trouble dealing with bias embedded in the data. For example, if women have not historically received loans at the appropriate rate, that will get woven into the data in multiple ways. "If you remove variables related to gender, AI will just pick up other variables that serve as a proxy for gender," Sengupta said.
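Proxy variables of the kind Sengupta describes can often be surfaced with a simple correlation screen against the protected attribute before training. An illustrative Python sketch on synthetic data; the feature names and numeric codings are invented for the example.

```python
import math

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic applicants: gender coded 0/1, plus two candidate features.
gender =        [0, 0, 0, 0, 1, 1, 1, 1]
years_in_job =  [2, 5, 3, 4, 4, 2, 5, 3]   # unrelated to gender
occupation_id = [1, 1, 2, 1, 7, 8, 7, 8]   # strongly tracks gender

print(round(correlation(gender, years_in_job), 2))
print(round(correlation(gender, occupation_id), 2))
```

Here the second feature correlates almost perfectly with gender, so dropping the gender column alone would not remove the signal: the model could reconstruct it from occupation. That is exactly the failure mode Sengupta points to.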

He believes the future of ethical AI must grapple with defining fairness and societal norms. At a lending bank, for example, management and AI teams would need to decide whether they want to aim for equal consideration (e.g., loans processed at an equal rate for all races), proportional results (the success rate for each race is relatively equal) or equal impact (a proportional amount of loans goes to each race). The focus needs to be on a guiding principle rather than on something to avoid, Sengupta argued.
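The three candidate definitions can all be computed directly from decision records, which makes the management choice Sengupta describes explicit rather than implicit. A toy Python sketch; the record schema and group labels are assumptions for illustration.

```python
def fairness_report(applications):
    """Per-group rates for three competing fairness criteria.

    Each application is an assumed toy record:
    {"group": str, "processed": bool, "approved": bool}.
    """
    groups = {}
    for app in applications:
        g = groups.setdefault(app["group"], {"total": 0, "processed": 0, "approved": 0})
        g["total"] += 1
        g["processed"] += app["processed"]
        g["approved"] += app["approved"]

    overall_approved = sum(g["approved"] for g in groups.values()) or 1

    report = {}
    for name, g in groups.items():
        report[name] = {
            # Equal consideration: share of this group's applications processed.
            "processing_rate": g["processed"] / g["total"],
            # Proportional results: approval rate within this group.
            "approval_rate": g["approved"] / g["total"],
            # Equal impact: this group's share of all approved loans.
            "share_of_loans": g["approved"] / overall_approved,
        }
    return report

applications = (
    [{"group": "A", "processed": True, "approved": True}] * 2
    + [{"group": "A", "processed": True, "approved": False}] * 2
    + [{"group": "B", "processed": True, "approved": True}] * 1
    + [{"group": "B", "processed": True, "approved": False}] * 1
    + [{"group": "B", "processed": False, "approved": False}] * 2
)
report = fairness_report(applications)
print(report["A"]["approval_rate"], report["B"]["approval_rate"])
```

Note that the three criteria generally cannot all be satisfied at once, which is why Sengupta frames the decision as one management must make deliberately.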

Most people would agree that it is easier and more effective to teach children what their guiding principles should be rather than to list out every possible decision they might encounter and tell them what to do and what not to do. "That's the approach we're taking with AI ethics," Sengupta said. "We are telling a child everything it can and cannot do instead of providing guiding principles and then allowing them to figure it out for themselves."

For now, we have to turn to humans to develop rules and technologies that promote responsible AI. Shepherd said this includes programming products and offers that protect human interests and are not biased against certain groups, such as minority groups, those with special needs and the poor. The latter is especially concerning, as AI has the potential to fuel massive social and economic conflict, furthering the divide between those who can afford technology (including human augmentation) and those who cannot.

Society also urgently needs to plan for the unethical use of AI by bad actors. Today's AI systems range from fancy rules engines to machine learning models that automate simple tasks to generative AI systems that mimic human intelligence. "It may be decades before more sentient AIs begin to emerge that can automate their own unethical behavior at a scale that humans wouldn't be able to keep up with," Shepherd said. But given the rapid evolution of AI, now is the time to develop the guardrails needed to prevent this scenario.

This was last updated in October 2023
