
AI accountability: Who's responsible when AI goes wrong?

Who should be held accountable when AI misbehaves? The users, the creators, the vendors? It's not clear, but experts have some ideas.

AI systems sometimes run amok.

One chatbot, designed by Microsoft to mimic a teenager, began spewing racist hate speech within hours of its release online. Microsoft immediately took the bot down.

Another system, which Amazon designed to help its recruiting efforts but ultimately didn't release, inadvertently discriminated against female applicants.

Other so-called "smart" systems have led to false arrests, biased bail amounts for criminal defendants, and even fatal car crashes.

Experts expect to see more cases of problematic AI as organizations increasingly implement intelligent technology, sometimes without proper governance in place. A governance framework, along with extra attention to AI ethics concerns, can help prevent unintended biases and algorithmic drift in AI models.

"There will be large systemic disruptions and therefore large benefits to those who lead with AI. But along with that comes responsibility to set up guidelines and transparency in the work you do," said Sanjay Srivastava, chief digital officer at professional services firm Genpact.

"If you use AI, you cannot separate yourself from the liability or the consequences of those uses," he said.

Most organizations don't have the proper guidelines in place, however.

The lack of standardized guidance around AI governance, along with the complexity of deep learning and machine learning models, has made it difficult for experts to decide whom to blame when an AI system goes wrong.

[Figure: Components of digital ethics]

Creating AI accountability

Creating ethical AI requires attention to three broad and interrelated areas, according to experts: the system's functional performance, the data it uses and how the system itself is used.

Take the first bucket -- functional performance. Organizations generally use AI technologies, which include machine learning and natural language processing, to analyze data and make predictions. But all the components within an AI system need to function properly for it to produce accurate, fair and legal decisions.

For example, a credit card fraud detection system must use finely tuned algorithms to protect customers against unauthorized use of their credit cards without declining unusual yet legitimate purchases. A system that lets unauthorized use slip by, or that generates too many false positives, could destroy customers' trust in it.
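To make that tradeoff concrete, here is a minimal sketch -- using a synthetic dataset and scikit-learn, not any real fraud system -- of how the decision threshold a team chooses shifts the balance between missed fraud and falsely declined purchases:

```python
# Hypothetical sketch: a toy fraud classifier on synthetic data,
# showing how the decision threshold trades missed fraud (false
# negatives) against declined legitimate purchases (false positives).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced "transactions": roughly 2% fraud.
X, y = make_classification(n_samples=20_000, n_features=10,
                           weights=[0.98, 0.02], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # estimated fraud probability

# Lower thresholds catch more fraud but decline more legitimate
# purchases; higher thresholds do the reverse.
for threshold in (0.1, 0.3, 0.5, 0.7):
    preds = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()
    print(f"threshold={threshold:.1f}  missed fraud={fn}  "
          f"declined legitimate={fp}")
```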

The second area deals with data -- more specifically, ensuring that a high volume of high-quality data is used to train and run AI. Systems work best when they have clean, unbiased data to work with, and plenty of it.

Organizations also should safeguard that data against unauthorized access and use.

The third bucket deals with preventing unintended biases -- that is, ensuring that the data fed into the algorithms, as well as the algorithms themselves, do not produce results that discriminate.

In the case of Amazon's biased hiring model, the company used resumes it had accrued over the previous decade to train the model. Most of those resumes had been submitted by men, so the model learned to favor male applicants over female ones. That setup could have become a legal problem if Amazon hadn't caught the biased decisions.
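One common way teams catch that kind of skew before release is a selection-rate audit across demographic groups. The sketch below is purely illustrative -- the data, column names and the four-fifths-rule cutoff are assumptions, not Amazon's actual process:

```python
# Hypothetical sketch: auditing a hiring model's recommendations for
# gender skew by comparing selection rates across groups.
import pandas as pd

# Imagined model outputs for a small batch of applicants.
results = pd.DataFrame({
    "gender":   ["M", "M", "M", "F", "F", "F", "M", "F"],
    "selected": [1,   1,   0,   0,   1,   0,   1,   0],
})

# Selection rate per group.
rates = results.groupby("gender")["selected"].mean()
print(rates)

# Disparate impact ratio: the common "four-fifths rule" flags a ratio
# below 0.8 as potential adverse impact worth investigating.
ratio = rates.min() / rates.max()
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: possible adverse impact; examine the training data.")
```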

Similarly, creating ethical AI and AI accountability means safeguarding against unintended uses. Most AI systems are designed for a specific use case; applying them to a different one can produce incorrect results. For example, an AI-based system designed to optimize crop yields for wheat farmers can't simply be repurposed to advise rice farmers looking to maximize their yields.
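One practical safeguard is to check whether new inputs even resemble the data the model was trained on. Here is a minimal sketch of such an out-of-domain check; the rainfall figures and the 0.05 cutoff are illustrative assumptions:

```python
# Illustrative sketch: flag when incoming data looks nothing like the
# training data, suggesting the model is being used outside its scope.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Hypothetical feature distributions, e.g., seasonal rainfall in mm.
train_rainfall = rng.normal(loc=600, scale=80, size=1_000)  # wheat regions
new_rainfall = rng.normal(loc=1_400, scale=200, size=200)   # rice regions

stat, p_value = ks_2samp(train_rainfall, new_rainfall)
if p_value < 0.05:
    print("Input distribution differs from training data; "
          "predictions may not be reliable for this use case.")
```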

Organizations attentive to all three of those areas, and the nuances within each, are more likely to have AI-based systems that produce accurate and fair results, Srivastava said. They are also more likely to notice early when their systems aren't working well, giving them a level of accountability that less attentive organizations lack.


No guarantees of AI success, responsibility

Even if an AI creator follows those guidelines, there is no guarantee that problems won't arise from the system's use -- or that the creating organization would be held accountable if they do.

"The potential for something to go wrong can come out of left field," said Mike Loukides, vice president of content strategy at O'Reilly Media, a business and technology training company.

So, what happens when AI systems drift away from producing accurate and fair results? Or what happens if they never produced them in the first place? Who is accountable for the consequences that ensue?

It's not clear.

"The idea of accountability is that if something goes wrong then someone's job is on the line, but that's not the case here in most companies. I'd like to think, though, it's at least someone's job to fix it," Loukides said.

Other experts shared that observation, noting several reasons for the lack of clear accountability with problematic AI.

Experts pointed to the relative newness of AI and its use within businesses, explaining that many corporate leaders aren't yet attuned to all that can go wrong when using this technology.

Organizations don't always have the expertise to create and maintain fair and accurate AI-based systems, said Brandon Purcell, vice president and principal analyst at the research firm Forrester. Moreover, they may not even know where their responsibilities end and other contributors' begin within an intelligent technology ecosystem.

"For most companies, the supply chain for AI is very long and complicated," Purcell said.

He pointed to self-driving vehicles: Many companies contribute to the intelligent systems that power them. One company might supply the data, another label it and a third build the computer vision models. Each could blame another when something goes wrong, Purcell said.

The legal system will likely help sort out AI accountability, with many lawsuits expected around these issues, he said.

But by implementing stringent AI governance policies, companies can get ahead of such lawsuits.

"This is an opportunity to codify your ethical standards," Purcell said.

Take IBM, for example, where AI accountability and ethics are made up of several pieces, according to Seth Dobrin, the company's chief AI officer.

IBM approaches AI accountability by deeming that the company alone is accountable for its AI, while everyone involved in scoping, designing, deploying and managing that AI at IBM is responsible for it, he said.

To Dobrin, AI accountability and ethics mean that the AI is transparent and explainable. It's fair, meaning free of unintended and unacceptable biases. It's robust, in that it holds up to external pressures such as cybersecurity attacks. In addition, the AI system respects privacy, for example, by preserving the anonymity of the people it affects. And it respects and adheres to relevant regulations, he said.
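As a small illustration of the explainability piece, the sketch below uses scikit-learn's permutation importance to show which features a model actually relies on. The data and model are synthetic placeholders, not IBM's tooling:

```python
# Hypothetical sketch: permutation importance as one basic
# explainability check -- how much does shuffling each feature
# degrade the model's accuracy?
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2_000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance={importance:.3f}")
```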

Regulations on accountable AI on their way

Organizations may see governments taking action on AI accountability by passing laws that establish certain requirements for companies using AI systems. Lawmakers could also enact policies to hold organizations accountable when AI systems do harm.

Indeed, some government regulations are already in the works, with the European Union leading the charge.

It's not clear, however, if such actions will effectively ensure ethical, trustworthy AI and hold organizations responsible when they fall short of AI guidelines.

Policymakers and technology experts continue to debate how much regulation to enact. Some experts believe too much regulation could stifle technological creativity and advancement, while too little could let harmful AI systems run rampant. Where to draw the line remains an open question.
