Ethical discourse has not traditionally intersected with the comparatively mundane work of building software, but AI systems present new challenges that force ethics into the conversation.
AI provides high levels of accuracy when businesses use the technology appropriately, but users can encounter low accuracy and unreliability in some contexts. AI bias, a consistently higher error rate for certain groups, exemplifies this: for instance, a facial recognition system's error rate can be higher for people with darker skin tones. AI ethics focuses on understanding and mitigating these types of failure points in AI systems.
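To make "a consistently higher error rate for certain groups" concrete, the sketch below computes per-group error rates from a classifier's predictions. The group names, records and figures are invented for illustration and are not from the article; a real audit would use the system's actual evaluation data and task-appropriate metrics.

```python
# Minimal sketch (hypothetical data): quantifying AI bias as a gap in
# per-group error rates, e.g. for a face recognition classifier.
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Invented example: predictions for two demographic groups.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]
print(error_rate_by_group(records))
# {'group_a': 0.0, 'group_b': 0.5} -- a large disparity like this signals bias
```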
The last few years have seen the proliferation of AI ethics principles and guidelines. Public sector agencies, AI vendors, research bodies, think tanks, academic institutions and consultancies have all come up with their own versions. They can all be distilled into four core principles: fairness, accountability, transparency and safety.
AI ethics falls within the scope of AI governance, a broader concept, for the following reasons:
AI ethics provides valuable inputs for an organization's AI strategy. It gives an organization a handle on the acceptable use of AI and even determines whether an AI system is fit for specific purposes. AI ethics principles also provide clarity on design, data, documentation, testing and monitoring requirements. These principles are relevant throughout the entire AI lifecycle.
When adopting a broad AI governance strategy, it's important to prioritize AI ethics and allocate sufficient budget and resources. Organizations are typically adept at routines and processes such as budget appropriations, technology procurement and hiring, but they are not yet proficient at translating AI ethics principles into action items. Ensuring that this translation happens is an important part of AI governance.
Implementing AI ethics standards is also referred to as responsible AI. The bare minimum goal of responsible AI is to do just enough to comply with any applicable regulations.
The basic premise behind responsible AI is that it's the right thing to do. Responsible AI aligns with the corporate mission of being a force for good that many organizations share. These organizations are also holding themselves to higher standards through environmental, social and governance (ESG) initiatives, and responsible AI squarely aligns with ESG. In this time of the "Great Resignation," it can help an organization attract talent.
Responsible AI is also about not doing the wrong thing, such as running afoul of regulatory requirements or unwittingly introducing or amplifying existing inequities in our society.
There is a sentiment that the business case for AI ethics is a bit like the business case for antivirus and information security programs. They are essential costs of doing business but do not generate financial or business returns.
However, beyond the 'cost of doing AI' argument, there is also a tangible business case to be made for responsible AI.
In the United States alone, experts estimate that spending on AI technology will reach $120 billion by 2025. For the reasons outlined above, responsible AI helps ensure that an organization's AI investments deliver the expected ROI.
Thus, any strategy for AI governance must include not just business and technical components but also ethical considerations and careful analyses of the impacts of AI.