
How to keep your implementation of AI free from algorithm bias

When implementing AI, it's important to focus on the quality of training data and model transparency in order to avoid potentially damaging bias in models.

Implementation of AI software in your enterprise is more akin to raising a child than installing a new business application.

"Once you start talking about learning technology, you have to realize that you're teaching it to become a good member of your company. You have to realize that it's going to be out in the world interacting with people," said Michael Biltz, managing director of Accenture Technology Vision at the consulting firm.

Biltz and his team recently put out a report on technology trends for 2018, and the challenges of raising AI effectively led the list. Biltz said this is because AI is coming to enterprises whether they seek it or not. Some will go out of their way to develop and implement cutting-edge AI software, but others will see it arrive through prepackaged applications from vendors like Salesforce, which is pushing machine learning throughout its tools.

Training algorithms require thought

Given this, enterprises need to think about how they train algorithms and how they deploy them. There's too much potential for AI tools to be trained on biased data -- thereby producing biased bots -- and to function in ways that are opaque.
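What that kind of training data trouble can look like is easy to demonstrate. The sketch below is a minimal illustration, not anyone's production audit: it assumes a hypothetical labeled data set with a protected "group" attribute and uses pandas to compare outcome rates across groups -- exactly the kind of skew a model trained on the data would silently absorb.

```python
import pandas as pd

# Hypothetical toy data set standing in for real training data:
# "label" is the decision the model will learn to make, and "group"
# is a protected attribute of the people the decisions affect.
df = pd.DataFrame({
    "label": ["approve", "approve", "deny", "approve", "deny", "approve"],
    "group": ["A", "A", "B", "A", "B", "A"],
})

# Compare outcome rates across groups. In this toy data, group A is
# approved 100% of the time and group B 0% of the time -- a skew a
# model would learn and reproduce as its own decision rule.
print(df.groupby("group")["label"].value_counts(normalize=True))
```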

Biltz referred to the Google image classification system that tagged pictures of black people as gorillas and the Microsoft chatbot that learned sexism and anti-Semitism from Twitter users. It's not always clear exactly how algorithm bias happens, but in cases like these, the training data must have included examples of those biases.

This kind of thing is bad enough when we're talking about machine learning algorithms that deliver movie recommendations. Algorithm bias can be particularly damaging, though, as the implementation of AI moves into more sensitive areas of life, like disease diagnosis, credit decisions and hiring.

"We don't necessarily realize these [biases] are embedded in the data and information we're using to help AI learn," Biltz said.

Transparency needed for AI models

To address this problem, enterprises need to work toward better transparency and explainability in AI models, the Accenture report says.

Biltz said companies can correct biases in AI decisions if they know the role certain pieces of information play in the algorithm's decision. However, today, most advanced models operate as black boxes. They scan massive data sets, often picking up on subtle correlations that aren't obvious to humans.
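Permutation importance is one common, model-agnostic way to get at the role a given piece of information plays in a model's decisions. The sketch below is an illustrative example using scikit-learn on a public data set -- not a technique the Accenture report prescribes: it shuffles each input feature in turn and measures how far the model's accuracy falls, exposing which inputs the black box leans on most.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model on a public data set.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time on held-out
# data and record how much accuracy drops. Features whose shuffling
# hurts most are the ones actually driving the model's decisions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```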

[Image: Examples of algorithm bias. Bias in image recognition systems can negatively affect the value of data derived by AI tools.]

The other way to avoid algorithm bias or inaccurate models is to focus on data quality. Biltz said this should form the foundation of any AI initiative.

"If you don't have a robust system for verifying data as you start to pump that into AI, suddenly, you end up with the potential to have AI projects failing or being flawed and biased," he said.

Ultimately, a successful implementation of AI may require a new executive role. Biltz said traditional technology roles, like the CIO and chief data officer, are geared toward standard technology implementations, which AI is not. A chief AI officer may be needed.

"We're just starting to see AI now move from applications that have a lot of leeway that don't hurt a lot of people with a wrong recommendation, to things that are going to be much more pertinent," he said. "We are talking about a mindshift that says 'This is a change in the way you do software.'"
