
A look at AI trends and bias in AI algorithms

In the past few years, more and more organizations have focused on AI. But just as the use of AI and machine learning has expanded, so has concern about AI bias.

In recent years, AI has made dramatic inroads among enterprises. More and more organizations are focusing on how to use AI efficiently, and many are now automating the AI process in new ways. This year alone, AI trends seeing heightened adoption include automated machine learning, robotic process automation and AI in the service industry.

The burgeoning interest in enterprise applications of AI comes with challenges. For one, AI has traditionally been a heavily manual process, and many enterprises may not have the talent required to fully implement the technology. Another challenge is overseeing AI, including how organizations can deal with problems related to AI bias -- algorithms that produce prejudiced results based on faulty, poor-quality or incomplete data -- and whether governments will take a more active role in regulating the technology. Earlier this year, the European Commission released its first proposed legislation for regulating AI, which includes fines for organizations that fail to comply.

The reality is that AI bias exists, even as more enterprises recognize the problem and try to do something about it.

In this Q&A, Kashyap Kompella, CEO and chief analyst at RPA2AI Research, discusses the AI trends he's seeing and what organizations can do to combat bias within their machine learning systems.

What are some of the key AI trends you are noticing lately?

Kashyap Kompella: There has been major interest in AI from companies in the last few years -- not in technical terms, but in terms of general overall perception. Because of the heightened expectations, there is a bit of an AI winter coming up.

Three or four years ago, when companies were making an annual plan, they would have 10 strategic priorities at the CEO level, and four of them would have involved AI. But this year, there is no mention of AI at all. That's because people are recognizing the difficulty of commercializing AI.

A deep learning pioneer helped form a company in Canada called Element AI. The Canadian government went all out, saying, 'This is a showcase of the innovation Canada is capable of.' All the biggies -- Microsoft, everybody -- invested in them, so there was no shortage of talent, no shortage of big-name support and no shortage of visibility or branding. They could do anything they wanted, but they really couldn't pull it off. They really struggled, and the company was sold off for less than the $250 million they raised. That shows the difficulty of monetizing AI.

On the other hand, very simple technologies like robotic process automation are gaining a lot of traction. The task of building AI, by contrast, is heavily manual, heavily complex, heavily human.

There is a huge opportunity for service companies in AI. Take self-driving cars. You drive a car and capture all the information. You take that video, and you need to annotate it: 'This is a road; this is a traffic signal.' That annotation used to take about 800 hours of human effort. Imagine the kind of money required to do that. There is a booming subsegment of AI for this labeling of data.
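To make the labeling work concrete, here is a minimal, hypothetical sketch of what one human-drawn annotation on one video frame might look like. The schema and field names are illustrative assumptions, not any real annotation standard.

```python
from dataclasses import dataclass

@dataclass
class BoxAnnotation:
    """One human-drawn bounding box on one video frame (illustrative schema)."""
    frame_id: int       # which frame of the captured drive video
    label: str          # e.g., "road", "traffic_signal", "pedestrian"
    x: float            # top-left corner of the box, in pixels
    y: float
    width: float
    height: float
    annotator_id: str   # who drew the box, for quality review

# An annotator produces many of these per frame, frame after frame --
# which is where the hours of human effort go.
annotations = [
    BoxAnnotation(frame_id=0, label="traffic_signal", x=412, y=88,
                  width=24, height=60, annotator_id="a17"),
    BoxAnnotation(frame_id=0, label="road", x=0, y=300,
                  width=1280, height=420, annotator_id="a17"),
]
```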

And you need to store and use all this data, so there is a boom in hardware. A lot of the growth of companies like Nvidia has been because of GPUs, which are used to train AI models. GPUs were traditionally used to play video games, and Nvidia was very good at that before the AI revolution.

How hard is it to commercialize AI?

Kompella: Commercialization is very tough. Google has an AI company called DeepMind, but it had losses of $650 million last year. The same with Boston Dynamics. You see all these cool robot dogs doing dances in cool videos, but no profits.

For the kind of innovation that is possible, we need to put a lot of tools in place to make it happen. We also need a lot of ethical guardrails, which are not being built at the pace they should be. Once these two are in place, we'll see a lot of these applications, but that will probably take another five years. The danger is that we are rushing forward with bigger implementations without the guardrails.

One big AI trend is automating AI and machine learning. How important are the new tools being developed to do this?

Kompella: A lot of the focus is disproportionately on how we build a machine learning model. But once you build one, it must integrate with your existing technology systems. It needs to be part of the larger workflow and the business, so that once you have built the model, you can deploy it into production and eventually make use of it. That field is MLOps, which is analogous to DevOps. It's a huge area of investment and a huge area of innovation. Right now, the tools we have are not standardized enough compared to other fields.
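As a rough illustration of the gap between building a model and operationalizing it, here is a minimal sketch of the train-package-serve steps that MLOps tooling aims to standardize, assuming scikit-learn and joblib are available. Real pipelines layer versioning, monitoring and rollback on top of steps like these.

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Step 1: the step that gets most of the attention -- training a model.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Step 2: the step MLOps tooling standardizes -- packaging the trained
# artifact so a separate serving process can load it.
joblib.dump(model, "model-v1.joblib")

# Step 3: in production, the serving side loads the artifact and exposes
# predictions to the larger business workflow.
served_model = joblib.load("model-v1.joblib")
print(served_model.predict([[5.1, 3.5, 1.4, 0.2]]))  # predicted class index
```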

Another key AI trend is AI ethics. How can engineers account for bias in AI algorithms? 

Kompella: They don't, and that's the cause of a lot of the failures in AI systems. That's a very [significant] omission, and it's an important question. There is the case of the Uber self-driving car in Arizona. There was a fatal accident because the model knew how to recognize a pedestrian and how to recognize somebody riding a bike, but it couldn't identify a person walking their bike, so there was a crash.

What do we do when the machine does not understand? This is where a human in the loop comes in. You want to make sure that when exceptions happen, the case gets thrown to a human.

In the machine learning context, it is not happening as much as it should be because the machine doesn't know when it doesn't know.

If you're taking an exam and you're guessing, you know whether you're guessing or whether you know the answer. The machine doesn't. This is an active area of research in which people are trying to say: If we are 95% confident in the prediction, we will act on it; otherwise, we defer to a human as an exception workflow. But that's not very common.
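That deferral logic can be sketched in a few lines. Below is a hypothetical Python example; the 0.95 threshold and the route_to_human function are assumptions for illustration, not a standard API, and a model's raw confidence scores are often poorly calibrated -- which is exactly the 'machine doesn't know when it doesn't know' problem Kompella describes.

```python
def classify_with_deferral(model, x, threshold=0.95):
    """Act on the model's answer only when it is confident enough;
    otherwise hand the case to a human as an exception workflow."""
    probabilities = model.predict_proba([x])[0]  # scikit-learn-style model
    confidence = probabilities.max()
    if confidence >= threshold:
        return int(probabilities.argmax())   # machine handles the case
    return route_to_human(x, confidence)     # human in the loop

def route_to_human(x, confidence):
    # Placeholder: a real system would enqueue the case for manual
    # review via a ticketing system or review UI.
    print(f"Deferred to human review (confidence={confidence:.2f})")
    return None
```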

And within the trend of AI ethics is the question of bias. What responsibilities do companies have to prevent AI bias?

Kompella: In AI ethics, there are four or five core principles. One is that these models must be safe. They need to be accountable. They need to be transparent. They need to be trustworthy. At the current stage in the industry, these are self-regulatory. There is no binding regulation except in specific situations, like the Fair Credit Reporting Act [which contains language regulating the use of AI]. The only instances where I've seen companies doing these kinds of checks is when there's regulation. In the absence of regulation, it's not happening.

A lot of the bias arises because the data being collected by companies is not representative of the real world. So, if companies pay attention to the data they are collecting, many of these problems will be solved. Then come questions of what kinds of algorithms we use and how the data is being used.
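One simple way to 'pay attention to the data' is to compare each group's share of the training set with its share of the population the model will serve. The sketch below uses made-up groups and numbers purely for illustration.

```python
from collections import Counter

def representation_gaps(sample_groups, population_shares):
    """Compare each group's share of the training data with its
    expected share of the real-world population."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    return {
        group: counts.get(group, 0) / total - expected  # negative => underrepresented
        for group, expected in population_shares.items()
    }

# Hypothetical data: group B appears in 10% of the training set but
# makes up 20% of the population the model will be used on.
training_groups = ["A"] * 800 + ["B"] * 100 + ["C"] * 100
population = {"A": 0.6, "B": 0.2, "C": 0.2}
print(representation_gaps(training_groups, population))
# {'A': 0.2, 'B': -0.1, 'C': -0.1}
```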

What's problematic about humans placing as much trust in AI as we do?

Kompella: When we talk about humans and AI, there are all these notions that AI can process so much more data than we do because it has unlimited computing power compared to any brain, and that it is also very objective compared to humans, who can be biased. That is our perception. But the actual accuracy of AI systems is lower than that perception suggests.

The accuracy for certain groups, if they're not represented well in the data, is even worse. That gap is the cause for concern about AI bias: You have deployed a system thinking it is going to be accurate to a certain level, but it is underperforming, or will underperform, for those groups.
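That underperformance is easy to surface once predictions are broken out by group. Here is a minimal sketch with invented numbers showing how a healthy-looking aggregate accuracy can hide a much worse error rate for an underrepresented group.

```python
def accuracy_by_group(y_true, y_pred, groups):
    """Overall accuracy plus a per-group breakdown."""
    per_group = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        per_group[g] = sum(y_true[i] == y_pred[i] for i in idx) / len(idx)
    overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return overall, per_group

# Invented labels: 93% accurate overall, but only 60% accurate on the
# underrepresented group "B" -- the aggregate number hides the gap.
y_true = [1] * 100
y_pred = [1] * 87 + [0] * 3 + [1] * 6 + [0] * 4
groups = ["A"] * 90 + ["B"] * 10
print(accuracy_by_group(y_true, y_pred, groups))
# (0.93, {'A': 0.966..., 'B': 0.6})
```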

Editor's note: This interview has been edited for clarity and conciseness.
