
What is AI governance and why do you need it?

AI governance is a relatively new discipline that has emerged with the rapid expansion of AI. It differs from standard IT governance practices in that it's concerned with the responsible use of AI.

AI governance is an overarching framework that manages an organization's use of AI with a large set of processes, methodologies and tools.

The goal of AI governance is not limited to ensuring effective use of AI. Its scope is much broader, encompassing risk management, regulatory compliance and the ethical use of AI.

It's important to note the distinction between AI governance and AI regulation. AI regulation refers to the laws and rules made by a government or regulator regarding AI that apply to all organizations that fall under their purview. AI governance instead refers to how AI is managed in an organizational context. 

Pros and cons of deep learning

Organizations already have mature IT governance practices. So, why do they need AI governance? AI governance may share some practices with IT governance, but it's a distinct discipline, particularly at this early stage of AI adoption and maturity.

In popular parlance, AI usually refers to deep learning, the family of machine learning approaches built on artificial neural networks, because those techniques now dominate the field. The central idea of deep learning is that decision-making rules are derived from data rather than hardcoded by humans, which is the norm in traditional IT systems. Deep learning delivers dramatic improvements in accuracy and near human-like performance on narrowly defined tasks in areas such as language processing, image recognition and speech recognition.
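To make that contrast concrete, here is a minimal sketch comparing a hardcoded rule with a rule learned from data. It uses a small scikit-learn decision tree rather than a deep neural network purely to keep the example short, and the feature names, threshold and synthetic data are illustrative assumptions, not drawn from any real system.

```python
# Sketch: a hardcoded decision rule vs. a rule learned from data.
# Assumptions: feature names, thresholds and data are synthetic and illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Traditional IT system: the rule is written by a human and is easy to read.
def approve_loan_hardcoded(income: float, debt: float) -> bool:
    return income > 50_000 and debt / income < 0.4

# Machine learning system: the rule is derived from historical data instead.
rng = np.random.default_rng(0)
X = rng.normal(loc=[60_000, 20_000], scale=[15_000, 8_000], size=(500, 2))  # income, debt
y = (X[:, 0] > 50_000) & (X[:, 1] / X[:, 0] < 0.4)  # synthetic labels for illustration

model = DecisionTreeClassifier(max_depth=3).fit(X, y)

applicant = np.array([[55_000, 18_000]])
print("Hardcoded rule:", approve_loan_hardcoded(55_000, 18_000))
print("Learned rule:  ", bool(model.predict(applicant)[0]))
```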

Automated decision-making systems that use such AI capabilities are becoming nearly ubiquitous. Algorithms decide people's shopping suggestions, news feeds, job applications, credit decisions, healthcare recommendations and more. AI and the automation it enables have great benefits from a business point of view, but there are downsides to consider as well. Unlike hardcoded rules, the "why" behind a deep learning decision is neither intuitive nor easily understood. Hence the common description of AI as a black box.
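Because a trained model does not come with a human-readable explanation, practitioners often fall back on post-hoc probes. The sketch below uses synthetic data and scikit-learn's permutation importance as one such probe; it is an assumed, simplified example of how one might estimate which inputs drive a black-box model's predictions, not a complete explainability solution.

```python
# Sketch: probing a black-box model with permutation importance.
# Assumptions: the network, features and labels below are synthetic and illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))                   # three anonymous features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # feature 2 is irrelevant by construction

# A small neural network: its individual weights do not explain its decisions.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1).fit(X, y)

# Permutation importance estimates how much each feature drives the predictions,
# one common post-hoc transparency technique among several.
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance ~ {score:.3f}")
```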

There are other limitations besides a lack of transparency:

  • Things change in the real world all the time, and the patterns or relationships an AI system learned may no longer be applicable.
  • Real-world data is often different from the data used to train AI models.
  • AI models can work well for some groups of people but not others. This is referred to as AI bias or algorithmic bias.

In all of these scenarios, automated decisions are likely to be incorrect, yet organizations often continue to rely on them without detecting or fixing the underlying problem.
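Catching these failures is exactly the kind of guardrail AI governance calls for. The sketch below shows one simple form of monitoring: comparing the distribution of a feature seen in production against the training data to flag drift. The data, feature and alerting threshold are illustrative assumptions.

```python
# Sketch: flagging data drift between training data and production traffic.
# Assumptions: the income figures and the 0.01 alerting threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
train_income = rng.normal(60_000, 15_000, size=5_000)   # data the model was trained on
prod_income = rng.normal(48_000, 15_000, size=1_000)    # what the model sees today

stat, p_value = ks_2samp(train_income, prod_income)      # two-sample Kolmogorov-Smirnov test
if p_value < 0.01:                                        # assumed alerting threshold
    print(f"Drift detected (KS statistic={stat:.3f}); review or retrain the model.")
else:
    print("No significant drift detected.")
```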


The need for AI governance

As AI adoption increases, there is growing recognition of both its strengths and limitations. Governments are introducing new regulations and guidelines to prevent the harms caused by both intentional and unintentional misuse of AI. Incorrect use of AI can expose an organization to operational, financial, regulatory and reputational risks, and it is unlikely to align with the organization's core values. The unique nature of AI requires guardrails to ensure that AI works as intended. That's the key mandate for AI governance.

After a few years of experience implementing and scaling deep learning in the enterprise, AI governance playbooks and best practices are beginning to emerge. Some prominent examples include the following:

  • Pharmaceutical company Novartis, which has engaged a multidisciplinary team of experts to examine its use of AI systems throughout the pharmaceutical value chain and craft its position on using AI responsibly and ethically, in a way that is aligned with its overall company code of ethics;
  • IEEE, the world's largest technical professional organization for the advancement of technology, which created the Ethically Aligned Design business standards, covering the gamut from the need for AI ethics in business to the required skills and staffing;
  • The Montreal AI Ethics Institute, a nonprofit organization, which regularly produces "State of AI Ethics" reports and helps democratize access to AI ethics knowledge; and
  • The Singapore government, which has been a pioneer in this area and released the Model AI Governance Framework to provide actionable guidance for the private sector on how to address ethical and governance issues in AI deployments.

AI governance is not the job of software engineers or machine learning experts alone. It is multidisciplinary, involving both technical and non-technical stakeholders.

AI governance is relevant to end users in both the public and private sectors, as well as to AI software vendors. A few progressive organizations are even making AI governance an integral part of their corporate governance and environmental, social and governance strategies, because it defines how an organization implements AI ethics principles and ensures the responsible use of AI.
