
What is artificial intelligence (AI) governance?

By Alexander S. Gillis

Artificial intelligence governance is the legal framework for ensuring that AI and machine learning technologies are researched and developed with the goal of helping humanity adopt and use these systems ethically and responsibly. AI governance aims to close the gap between accountability and ethics in technological advancement.

AI use is rapidly increasing across nearly all industries, including healthcare, transportation, retail, financial services, education and public safety. As a result, governance has taken on a significant role and is getting increased attention.

AI governance focuses on how AI relates to justice, data quality and autonomy. Overall, it determines how much of daily life algorithms can shape and who monitors how AI functions. Some key areas governance addresses include the following:

Why is AI governance needed?

AI governance is necessary when machine learning algorithms are used to make decisions, especially decisions that can negatively affect humans. For example, AI governance determines how best to handle scenarios where AI-based decisions could be unjust or violate human rights. Machine learning bias, particularly in racial profiling, can misidentify basic information about individuals. This can result in people being unfairly denied access to healthcare or loans, or in law enforcement being misled when identifying criminal suspects.
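As one illustration of the kind of check a governance process might run on such decisions, the following is a minimal sketch of a group fairness audit. The data, group labels and the 0.8 threshold (the commonly cited "four-fifths" rule of thumb) are illustrative assumptions, not requirements of any specific regulation.

```python
# Minimal sketch of a fairness audit a governance process might apply
# to automated approval decisions. All data here is hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group approval rate to the highest."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical loan decisions as (demographic group, approved?) pairs.
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(sample)
    ratio = disparate_impact_ratio(rates)
    print("approval rates:", rates)
    print("disparate impact ratio:", round(ratio, 2))
    if ratio < 0.8:  # illustrative threshold based on the four-fifths rule
        print("Potential disparate impact -- escalate for human review")
```

A check like this does not by itself prove a model is fair or unfair; in a governance framework it typically serves as a trigger for human review and documentation.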

The rapid adoption of AI tools, systems and technologies across industries has also raised concerns about AI ethics, transparency and compliance with regulations such as the General Data Protection Regulation (GDPR). Without proper governance, AI systems could pose risks such as biased decision-making, privacy violations and misuse of data. AI governance seeks to enable the constructive use of AI technologies while protecting user rights and preventing harm.

AI governance pillars

The White House Office of Science and Technology Policy has published a Blueprint for an AI Bill of Rights, which outlines five guiding principles and associated practices for the design, deployment and use of AI systems. The goal of the blueprint is to help protect the rights of the American public regarding the use of AI. The five principles are safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback.

Some other components of a strong AI governance framework include the following:

How organizations should approach AI governance

There are many actions an organization can take to implement effective and sustainable AI governance practices. They include the following:

What is AI model governance?

AI model governance is a subset of AI governance that specifically addresses how organizations develop and use AI and machine learning models safely and responsibly. Organizations that develop and use these models must keep the following considerations in mind:
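In practice, many organizations record these kinds of considerations in a model registry or model card. The following is a minimal sketch of such a record; the field names and example values are illustrative assumptions rather than any published standard.

```python
# Minimal sketch of one entry in a hypothetical model registry
# maintained as part of AI model governance.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    """Governance metadata tracked for a single deployed model."""
    name: str
    version: str
    owner: str                    # team accountable for the model
    intended_use: str             # approved use cases and known limits
    training_data_summary: str    # provenance of the training data
    evaluation_metrics: dict      # e.g. accuracy and fairness metrics
    risk_level: str               # e.g. "low", "medium" or "high"
    approved_for_production: bool = False
    last_reviewed: date = field(default_factory=date.today)

# Example record; every value here is made up for illustration.
record = ModelRecord(
    name="loan-approval-classifier",
    version="2.1.0",
    owner="credit-risk-ml-team",
    intended_use="Assist underwriters; not for fully automated denials",
    training_data_summary="2019-2023 loan applications, de-identified",
    evaluation_metrics={"accuracy": 0.91, "disparate_impact_ratio": 0.84},
    risk_level="high",
)
print(record)
```

Keeping records like this gives reviewers, auditors and regulators a single place to see who owns a model, what it is approved for and when it was last evaluated.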

The future of AI governance

The future of AI governance depends on collaboration among governments, organizations and stakeholders. Its success hinges on developing comprehensive AI policies and regulations that protect the public while fostering innovation. Complying with data governance rules and privacy regulations and prioritizing safety, trustworthiness and transparency are also important to the future of AI governance.

Various companies are focused on the future of AI governance. For instance, in 2022, Microsoft released version 2 of its "Responsible AI Standard," a guide for organizations managing AI risks and incorporating ethical AI governance into their strategies. Other companies that have committed to implementing governance standards and guardrails include Amazon, Anthropic, Google, IBM and Inflection.

U.S. government organizations working in this area include the White House Office of Science and Technology Policy's National Artificial Intelligence Initiative Office, which launched in 2021. The National Artificial Intelligence Advisory Committee was created in 2022 as part of the National AI Initiative to advise the president on AI-related issues. Also, in collaboration with both the public and private sectors, the National Institute of Standards and Technology (NIST) has developed the AI Risk Management Framework, which recommends risk management approaches for those working with AI.

Some AI experts still insist that a gap exists in the legal framework for AI accountability and integrity, however. In March 2023, technology leaders and AI experts such as Elon Musk and Steve Wozniak signed an open letter urging a temporary pause on advanced AI development and the codification of legal regulations. In May 2023, OpenAI CEO Sam Altman testified before Congress urging AI regulation. More recently, in 2024, Musk and OpenAI have both drawn controversy: Musk's Grok generative AI over its operating location, environmental impact and looser ethical guardrails, and OpenAI over reports that it has put profit over safety.

Other companies have also pushed the boundaries of AI governance. Adobe, for example, updated its terms of service in 2024 in a way that allowed the company to access user-generated content to train its machine learning software. After significant backlash, Adobe backtracked and updated its terms of service, pledging not to train its AI on user content.

As AI adoption continues to increase and the technology improves, companies are likely to keep pushing ethical boundaries in their AI product offerings, and proper implementation of AI governance will become increasingly important. The field will likely see more public calls for regulatory oversight. The White House's Blueprint for an AI Bill of Rights is a step in this direction, but it lacks concrete details on how each principle should be implemented and doesn't require organizations to follow it.


03 Jan 2025
