Data privacy concerns and unease over closed, unexplainable AI models plague major tech companies near the mid-point of a year that has seen spectacular growth of AI in enterprises.
News headlines in 2019 regularly relay the latest fine or lawsuit against organizations that misuse consumer data. Experts, meanwhile, blast machine learning models placed in unique positions of power for allegedly containing inherent biases in their data, and thus giving skewed results.
As enterprises move from experimenting with AI to deploying it, these dynamics serve as a warning, and a suggestion – to start establishing a comprehensive AI ethics framework as soon as this year.
Establish a framework for ethics
"Organizations are facing challenges today with governance of AI," said Irfan Saif, principal at Deloitte.
The software industry currently lacks governance standards for AI, he said. In a way, that's reminiscent of the confusion enterprises experienced when cloud technology first emerged, he continued.
Part of the problem lies with a lack of training and education.
"Education, and overall increasing the level of understanding of what AI is, is important," Saif said.
Organizations may have employees with uneven degrees of AI expertise, leading to situations in which relatively inexperienced employees begin deploying AI across different departments. Business users in 2019 may lack the necessary understanding of what these technologies can, and can't, achieve, even as they spend money from IT budgets to attempt to automate workflows.
Some technology vendors take advantage of that lack of education to upsell their products or misstate their capabilities, according to analyst Alan Pelz-Sharpe. Other vendors provide free, if sometimes rudimentary, training courses in which they explain the basics of AI while still peddling their products.
DataRobot, a Boston-based analytics and machine learning vendor, offers several courses on AI ethics, for example.
One, which the company calls AI Interpreter, explains to business users the basics of AI technologies, and how these technologies can fit into a business environment.
"It's not just getting the math right, you have to make sure what you build is consistent with the business rules, the regulatory environment and the organization's values," said Colin Priest, senior director of product marketing at DataRobot.
The course, available to the general public, claims to offer that.
DataRobot also maintains a website that helps organizations create an AI ethics framework statement. Users answer a series of roughly 20 questions, after which the site crafts a statement laying out general data privacy and AI ethics guidelines.
AI ethics is "a really hot topic at the moment," Priest said, and customers appear to be interested in creating AI guidelines.
Importance of education
Once organizations reach a certain level of understanding, they can start to develop AI ethics frameworks to shape their overall policies for data and explainable AI.
Enterprises looking to develop AI ethics frameworks in 2019 should begin investing in training and education now, and then focus on building out an AI governance team, Saif said.
Forming an AI governance team
"The first thing to do is to put together a team of leaders that represent the breadth of the organization," Saif said. Team members should come from each part of the organization, and the team should examine how the organization uses AI and opportunities for deploying more AI.
The team should create a strategy that covers what AI techniques and approaches will be most impactful to an organization and in which areas. With a plan in place, an organization can then determine how to effectively monitor, use and protect data used to fuel AI algorithms.
Enterprises should create a policy of "least privileged access," giving access to data only to the people and algorithms that need it, Saif said.
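The idea of least-privileged access can be sketched in a few lines of code. The following is a minimal, hypothetical illustration, not any particular vendor's implementation: a deny-by-default registry in which each principal, whether a person or an algorithm, can read only the datasets it has been explicitly granted. The class and dataset names are invented for the example.

```python
class DataAccessPolicy:
    """Least-privileged access: each principal (user or algorithm)
    may touch only the datasets explicitly granted to it."""

    def __init__(self):
        self._grants = {}  # principal name -> set of dataset names

    def grant(self, principal, dataset):
        self._grants.setdefault(principal, set()).add(dataset)

    def can_access(self, principal, dataset):
        # Deny by default: access exists only if explicitly granted.
        return dataset in self._grants.get(principal, set())


policy = DataAccessPolicy()
policy.grant("churn-model", "customer_usage")

print(policy.can_access("churn-model", "customer_usage"))  # True
print(policy.can_access("churn-model", "customer_pii"))    # False
```

Real deployments would back this with an identity provider and audit logging, but the core rule is the same: no grant, no access.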
Enterprises, especially those that build and deploy their own AI technologies, can also work on ensuring their models are explainable.
"Machine learning will always be a bit of a black box," Edson Tirelli, a principal software engineer at open source software vendor Red Hat, said. It can be hard to show why an AI model made a specific decision, but, by adhering to certain standards, those decisions can become easier to understand.
Looking to standards
Red Hat, along with many other technology vendors, adheres to a number of open standards for developing explainable AI, including Decision Model and Notation (DMN), Business Process Model and Notation (BPMN) and Predictive Model Markup Language (PMML).
The Object Management Group, an international nonprofit technology standards consortium, maintains DMN and BPMN, while the Data Mining Group, a data mining standards consortium, maintains PMML. The standards, developed over the last several years, are updated regularly.
Together, the standards can help define different analytics and AI models and make them explainable across organizations.
"Standards do not require you to have a tool that really uses the standards per se," said Tirelli.
In the case of PMML, for example, organizations can save and export an existing model in PMML, rather than having to build the model with PMML in mind. Exporting a model to PMML makes it interchangeable with other organizations or departments that adhere to the standard.
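To give a sense of why this aids interoperability and explainability: PMML is an XML format that spells out a model's fields and parameters in plain text, so any PMML-aware tool can read and score it. Below is a trimmed, illustrative fragment for a simple linear regression; the field names and coefficients are invented for the example.

```xml
<PMML xmlns="http://www.dmg.org/PMML-4_4" version="4.4">
  <DataDictionary numberOfFields="2">
    <DataField name="monthly_usage" optype="continuous" dataType="double"/>
    <DataField name="churn_score" optype="continuous" dataType="double"/>
  </DataDictionary>
  <RegressionModel functionName="regression">
    <MiningSchema>
      <MiningField name="monthly_usage"/>
      <MiningField name="churn_score" usageType="target"/>
    </MiningSchema>
    <!-- The model's behavior is fully visible: one coefficient, one intercept -->
    <RegressionTable intercept="0.1">
      <NumericPredictor name="monthly_usage" coefficient="0.02"/>
    </RegressionTable>
  </RegressionModel>
</PMML>
```

Because every predictor and coefficient is declared explicitly, a business user or auditor can inspect exactly how an input becomes a prediction, which is harder with an opaque binary model file.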
Tirelli noted that the standards can help create reports on how an AI model works, and what results it generates, that business users can understand.
Enterprises want to justify their actions, and with a black box model, that can be difficult, Tirelli said. The standards help explain those decisions, making them easier to understand for different types of users.
"You have to show users why a decision is made," Tirelli said. An AI ethics framework and team, as well as the use of AI standards, can help with that.
With an effective AI ethics framework in place, companies will have a better idea of how AI can fit into their different business processes, and they can help gain or maintain the trust of consumers.
Even now, despite widespread cases of mishandled personal data, many consumers still aren't fully aware of which bits of personal data are collected, or of the implications of giving up their information.
Currently, "when you go and buy a service that's related to consumer electronics, there's a tendency to quickly brush through any contract you see," said Darren Mann, VP of global operations at automotive telematics vendor Airbiquity.
Part of the problem lies with enterprises and technology companies, which have historically been purposefully vague about which data they collect. As more regulations are put in place, or as existing ones, such as the GDPR, are more strictly enforced, organizations will be forced to provide detailed information to consumers on which data is collected and how it is used.
Enterprises that adhere strictly to these regulations and use consumer data responsibly will ultimately benefit, Mann said.
Mann sees taking data privacy regulations seriously, and working with a responsible set of AI and data governance standards, as "a good thing."
While it may not explicitly benefit enterprises now, in the near future it will help enterprises increase consumer trust.
So, Mann said, "treat your data with respect." Enterprises that do may find consumers trust their brands more.