How AI ethics is the cornerstone of governance

AI ethics helps ensure that AI systems are accurate and reliable. Businesses stand to benefit from adopting AI ethics strategies of their own.

Ethical discourse has not traditionally intersected with software development, which tends to be a more mundane affair. AI systems, however, present new challenges that force ethics into the conversation.

AI provides high levels of accuracy when businesses use the technology appropriately, but users can encounter low accuracy and unreliability in some contexts. AI bias, which refers to a consistently higher error rate for certain groups, exemplifies this. For instance, the error rate of a facial recognition system can be higher for people with darker skin tones. AI ethics focuses on understanding and mitigating these types of failure points in AI systems.
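
To make the idea concrete, here is a minimal sketch of how a team might measure error rates per group on a labeled evaluation set. The pandas column names and the toy data are illustrative assumptions, not part of any standard benchmark.

import pandas as pd

def error_rate_by_group(df, group_col="group", label_col="label", pred_col="prediction"):
    """Return the fraction of incorrect predictions for each group."""
    errors = df[label_col] != df[pred_col]
    return errors.groupby(df[group_col]).mean()

# Toy evaluation data for a face-matching model; the groups and values are made up.
eval_df = pd.DataFrame({
    "group":      ["lighter", "lighter", "lighter", "darker", "darker", "darker"],
    "label":      [1, 0, 1, 1, 0, 1],
    "prediction": [1, 0, 1, 0, 1, 1],
})

print(error_rate_by_group(eval_df))

Seeing the per-group rates side by side makes a disparity, and the need to mitigate it, immediately visible.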

Core principles of AI ethics

The last few years have seen the proliferation of AI ethics principles and guidelines. Public sector agencies, AI vendors, research bodies, think tanks, academic institutions and consultancies have all come up with their own versions. They can all be distilled into four core principles: fairness, accountability, transparency and safety.

AI ethics falls within the scope of AI governance, a broader concept, as the following four principles show:

  • Fairness ensures that an AI system is not "biased" but works equally well for all user segments.
  • Accountability relates to identifying who is responsible during the different stages of the AI lifecycle and ensuring human oversight and controls.
  • Transparency builds trust in AI, supports its adoption and ultimately drives the success of AI projects, because humans are able to understand, interpret and explain the "why" behind AI decisions (see the sketch after this list).
  • Safety ensures that adequate controls exist to secure AI systems.
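
As an illustration of the transparency principle, the following sketch breaks a single decision of a simple linear model into per-feature contributions so a reviewer can see which factors drove the outcome. The feature names, weights and applicant values are hypothetical.

import numpy as np

feature_names = ["income", "debt_ratio", "years_employed"]
weights = np.array([0.8, -1.5, 0.4])    # learned model coefficients (assumed)
bias = -0.2
applicant = np.array([0.6, 0.9, 0.3])   # one applicant's standardized features (assumed)

contributions = weights * applicant      # each feature's contribution to the score
score = contributions.sum() + bias
decision = "approve" if score > 0 else "decline"

print(f"Decision: {decision} (score = {score:.2f})")
for name, c in sorted(zip(feature_names, contributions), key=lambda x: -abs(x[1])):
    print(f"  {name:>15}: {c:+.2f}")

Ranking the contributions by magnitude gives a human reviewer a plain-language starting point for explaining why the system decided as it did.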

AI ethics provides valuable inputs for an organization's AI strategy. It gives an organization a handle on the acceptable use of AI and helps determine whether an AI system is fit for a specific purpose. AI ethics principles also provide clarity on design, data, documentation, testing and monitoring requirements, and they are relevant throughout the entire AI lifecycle.
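
As one example of what a monitoring requirement can translate into, the sketch below tracks accuracy per user segment from logged predictions and flags any segment that falls below an agreed floor. The segment names and the 0.90 floor are assumptions made for illustration.

from collections import defaultdict

class SegmentAccuracyMonitor:
    """Tracks prediction accuracy per user segment and flags segments below a floor."""

    def __init__(self, accuracy_floor=0.90):
        self.accuracy_floor = accuracy_floor
        self.correct = defaultdict(int)
        self.total = defaultdict(int)

    def record(self, segment, prediction, outcome):
        # Called whenever a prediction's real-world outcome becomes known.
        self.total[segment] += 1
        self.correct[segment] += int(prediction == outcome)

    def alerts(self):
        # Segments whose observed accuracy has dropped below the agreed floor.
        return [
            f"{seg}: accuracy {self.correct[seg] / self.total[seg]:.2f} "
            f"is below the {self.accuracy_floor:.2f} floor"
            for seg in self.total
            if self.correct[seg] / self.total[seg] < self.accuracy_floor
        ]

# Feed logged results in over time, then review alerts as part of a regular governance check.
monitor = SegmentAccuracyMonitor()
monitor.record("segment_a", prediction=1, outcome=1)
monitor.record("segment_b", prediction=1, outcome=0)
print(monitor.alerts())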

When adopting a broad AI governance strategy, it's important to prioritize AI ethics and allocate sufficient budget and resources. Organizations are typically adept at certain routines and processes such as budget appropriations, technology procurement and hiring, but so far they are not proficient in translating AI ethics principles into action items. Ensuring that this happens is an important part of AI governance.

Goals of implementing AI ethics standards

Implementing AI ethics standards is also referred to as responsible AI. The bare minimum goal of responsible AI is to do just enough to comply with any applicable regulations.

The basic premise behind responsible AI is that it's the right thing to do. Responsible AI aligns with the corporate mission of being a force for good that many organizations share. These organizations are also holding themselves to higher standards through environmental, social and governance (ESG) initiatives, and responsible AI squarely aligns with ESG. In this time of the "Great Resignation," it can help an organization attract talent.

Responsible AI is also about not doing the wrong thing, such as running afoul of regulatory requirements or unwittingly adding to or amplifying existing inequities in society.

A business case for AI ethics

There is a sentiment that the business case for AI ethics resembles the business case for antivirus and information security programs: essential costs of doing business that do not generate financial or business returns.

However, beyond the "cost of doing AI" argument, there is also a tangible business case to be made for responsible AI:

  • Many AI projects never make it to production because their limitations are discovered too late in the lifecycle; responsible AI practices surface those limitations earlier.
  • Responsible AI helps an organization better serve its customers.
  • Responsible AI is an integral part of good risk management practices as potential AI risks are well understood and mitigation plans are put in place.
  • Responsible AI minimizes the odds of "AI gone wrong" scenarios that damage an organization's reputation.
  • AI vendors can use responsible AI as a differentiator for their technology, as AI procurement guidelines increasingly include it as a criterion.

In the United States alone, experts estimate that spending on AI technology will reach $120 billion by 2025. For the reasons outlined above, responsible AI helps ensure that an organization's AI investments deliver the expected ROI.

Thus, any strategy for AI governance must include not just business and technical components but also ethical considerations and careful analyses of the impacts of AI.
