It's 2020. Modern-day AI has been around for years now, and enterprises are continuing to automate and augment their business processes with AI technology.
As enterprises become more comfortable using AI technologies, they are turning their efforts from adopting them to making them reliable and safe to use. To do so, enterprises are beginning to focus on AI ethics.
Ethical guidelines for data and privacy aren't new to AI, noted Ed McLaughlin, president of operations and technology at Mastercard. While the practices and capabilities of AI are relatively recent, the principles of ethics, security and responsibility have long been established.
"Ethics aren't situational," McLaughlin said. "You shouldn't have to think about what you believe just because you have new capabilities."
Enterprises should build explainability, privacy and security into their models from the start. Companies need to ensure they are benefiting consumers who entrust them with their data and lay out easy-to-read, understandable data privacy policies, McLaughlin said.
Enterprises that don't consider ethics can open themselves up to legal, public relations and even model accuracy risks. AI ethics, then, can help mitigate risks, said Beena Ammanath, executive director of the Deloitte AI Institute.
Ammanath helped produce Deloitte's 2020 "State of AI in the Enterprise" report, which surveyed 2,737 IT and line-of-business executives in nine countries. Most respondents ranked managing AI-related risks as a top challenge for their AI initiatives, with many reporting "major" or "extreme" concerns regarding potential strategic, operational and ethical risks in AI.
About 56% of respondents said their organization is slowing AI adoption due to emerging risks in AI, including project failures, misuse of personal data, ethical problems and regulatory uncertainty. The same percentage of respondents said they believe public perceptions will either slow or stop the adoption of some AI technologies.
Business executives and policymakers also have a firmer grasp of AI technologies than they did even a few years ago, and enterprises have significantly increased their rate of AI adoption over the past few years.
Policymakers and enterprises have a trust gap, according to a survey of 71 policymakers and more than 280 global organizations conducted by global professional services network EY in collaboration with The Future Society.
Policymakers vs. enterprises
As for regulation of AI, policymakers mostly don't trust the intentions of enterprises.
While most enterprises think self-regulation of AI by industry is better than government regulation, most government policymakers disagree. Enterprises also primarily see themselves as investing in ethical AI, even as it reduces profits. Lawmakers, however, appear to be increasingly unwilling to let business regulate AI by itself.
Both enterprises and lawmakers need to work together to bridge this trust gap, said Nigel Duffy, EY global AI leader. They should look ahead to emerging AI ethical risks, such as privacy risks from facial recognition, human emotion analysis and home assistants.
Still, Duffy noted, among both enterprises and policymakers, "there is an increasing realization that a framework is needed."
The groups will likely increase their focus on emerging AI ethical risks over the next two years, he said.
Enterprises should also work more closely with policymakers and regulators to create better AI ethical standards.
Mastercard, for its part, formed a group of regulators, business executives and technology professionals to help guide the credit card giant's AI practices.
"That was very, very helpful for us," McLaughlin said.