
AI vendors may have to prove systems don't discriminate

Washington state is considering a bill that would require vendors to prove their AI algorithms aren't biased. If enacted, the AI regulation could have far-reaching implications.

Washington state legislators are tackling AI regulation with a proposed bill that would require transparency into how AI algorithms are trained, as well as proof that they don't discriminate -- some of the toughest AI legislation seen to date.

Senate Bill 5116, which was filed Jan. 8 by four Democratic senators, focuses on creating guidelines for the state government's purchase and use of automated decision systems. If a state agency wants to purchase an AI system and use it to help make decisions around employment, housing, insurance or credit, the AI vendor would first have to prove that its algorithm is non-discriminatory.

The bill's sponsors said a step like this would help "to protect consumers, improve transparency and create more market predictability," but it could have wide-ranging implications for AI companies as well as organizations building their own AI models in-house.

Regulation vs. innovation

Senate Bill 5116 is "one of the strongest bills we've seen at the state level" for AI regulation and algorithm transparency, according to Caitriona Fitzgerald, interim associate director and policy director at the Electronic Privacy Information Center (EPIC).


EPIC is a nonprofit public interest research center focused on protecting citizens' data privacy and civil liberties. The organization, based in Washington, D.C., regularly speaks before government officials on issues such as AI regulation, and submitted a letter in support of Senate Bill 5116, noting it's "exactly the kind of legislation that should be enacted nationwide."

Fitzgerald said requiring the analysis of AI models and making the review process of the analysis public are critical steps in ensuring that AI algorithms are used fairly and that state agencies are more informed in their buying decisions.

"We have seen these risk assessment systems and other AI systems being used in the criminal justice system nationwide and that is a really detrimental use, it's a system where bias and discrimination are already there," she said.

She also pointed to language in the bill that states AI algorithms cannot be used to make decisions that would impact the constitutional or legal rights of Washington citizens -- language EPIC hasn't seen in other state legislation.

For their part, technology vendors and enterprise users both want and fear government regulation of AI.

They believe strong regulation can provide guidance on what technology vendors can build and sell without having to worry about lawsuits and takedown demands. But they also fear that regulation will stifle innovation.

Deloitte's "State of AI in the Enterprise" report, released in 2020, highlights this dichotomy.

The report, which contained survey responses from 2,737 IT and line-of-business executives, found that 62% of the respondents believe that governments should heavily regulate AI. At the same time, 57% of enterprise AI adopters have "major" or "extreme" worries that new AI regulations could impact their AI initiatives. And another 62% believe government regulation will hamper companies' ability to innovate in the future.

While the report didn't gauge the thoughts of technology vendors directly, enterprise users are the main clients of many AI vendors, and hold sway over their actions.


"There are banks and credit unions and healthcare providers who are, in some cases, building their own AI with their own internal data science teams or they're leveraging tools from the tech players, so eventually everybody who adopts and uses AI is going to be subject to a bill like this," said Forrester Research principal analyst Brandon Purcell.

The effect on vendors

Providing proof that AI models are non-discriminatory means AI vendors would have to become much more transparent about how AI models were trained and developed, according to Purcell.

"In the bill, it talks about the necessity of understanding what the training data was that went into creating the model," he said. "That's a big deal because today, a lot of AI vendors can just build a model kind of in secret or in the shadows and then put it on the market. Unless the model is being used for a highly regulated use case like credit determination or something like that, very few people ask questions."

That could be easier for the biggest AI vendors, including Google and Microsoft, which have invested heavily in explainable AI for years. Purcell said that investment in transparency serves as a differentiator for them now.

In general, bias in an AI system largely results from the data the system is trained on.

The model itself "does not come with built-in discrimination, it comes as a blank canvas of sorts that learns from and with you," said Alan Pelz-Sharpe, founder and principal analyst at Deep Analysis.

Yet many vendors sell pre-trained models to spare their clients the time and expertise it normally takes to train a model. That's ordinarily uncontroversial if the model is used to, say, tell the difference between an invoice and a purchase order, Pelz-Sharpe continued.

A model pre-trained on constituent data could, however, pose a problem: a model trained on data from one government agency but deployed by another could introduce bias.
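To make that concrete, here is a minimal, hypothetical Python sketch -- not any vendor's actual product -- of how a system that simply learns approval rates from past decisions carries the skew in that history straight into new decisions. The group labels, records and threshold are invented for illustration.

```python
from collections import defaultdict

# Hypothetical historical decisions: (group, approved). The skew is deliberate.
historical = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def train(records):
    """Learn per-group approval rates from past decisions."""
    counts, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        counts[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / counts[g] for g in counts}

model = train(historical)          # {'group_a': 0.75, 'group_b': 0.25}

def predict(group, threshold=0.5):
    """Approve whenever the learned rate for the group clears the threshold."""
    return model.get(group, 0.0) >= threshold

print(predict("group_a"))          # True  -- the historical skew carries forward
print(predict("group_b"))          # False
```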

While a technology vendor can implement a human-in-the-loop approach to oversee results and flag bias and discrimination in an AI model, in the end, the vendor is constrained by the data the model is trained on and the data the model runs on.

"Ultimately, it's down to the operations rather than the technology vendors" to limit bias, Pelz-Sharpe noted.

But eliminating bias from data is difficult. Most of the time, technology vendors and users don't know the bias exists until the model starts producing noticeably skewed results, which could take quite a while.
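One hedged illustration of what such human oversight might look like in practice: a monitoring hook that compares live per-group approval rates against the rates observed when the model was validated, and escalates to a human reviewer when they drift apart. The baseline rates, group names and drift threshold below are assumptions made for the example.

```python
# Approval rates measured when the model was validated (hypothetical values).
BASELINE_RATES = {"group_a": 0.60, "group_b": 0.58}

def approval_rates(outcomes):
    """outcomes: list of (group, approved) pairs logged from the live model."""
    totals, approved = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flag_for_review(outcomes, max_drift=0.15):
    """Return groups whose live approval rate has drifted from the baseline."""
    live = approval_rates(outcomes)
    return {g: (BASELINE_RATES[g], r) for g, r in live.items()
            if g in BASELINE_RATES and abs(r - BASELINE_RATES[g]) > max_drift}

live_batch = [("group_a", True), ("group_a", True), ("group_b", False), ("group_b", False)]
drifted = flag_for_review(live_batch)
if drifted:
    print("Escalate to human review:", drifted)  # both groups drift in this toy batch
```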

Forrester's Purcell said an additional challenge could lie with defining what constitutes bias and discrimination. He noted there are roughly 22 different mathematical definitions of fairness, and which one is applied could change how an algorithm is evaluated for equal representation in a given application.

"Obviously a bill like this can't prescribe what the right measure of fairness is and it's going to probably differ by vertical and use case," he said. "That's going to be particularly thorny."

Many advanced deep learning models are so complex that even with a human-in-the-loop element, it's difficult, if not impossible, to understand why the model is making the recommendations it's making.

The bill suggests these unexplainable models won't be acceptable.

"That is a challenge in and of itself, though, as a large amount of newer AI products coming to the market rely on complex neural networks and deep learning," Pelz-Sharpe said. "On the other hand, more straightforward, explainable machine learning and AI systems may find inroads."

Still, high-quality, balanced data, along with substantial human supervision throughout the lifetime of an AI model, can help reduce data bias, he indicated.

"For a technology vendor, it will be critical that the consulting team that implements the system works closely with the vendor and that staff within the department are adequately trained to use the new system," Pelz-Sharpe said.

Impact on business with public agencies

While it's unclear how the bill would work in practice, it could affect how these technology vendors do business with public agencies in Washington, Pelz-Sharpe said.


The bill poses particular problems for vendors already working with public agencies, as it would require them to eliminate discrimination in their AI models over the next year.

According to Pelz-Sharpe, that's a good thing.

"Some AI systems that are in use in governments around the world are not very good and often make terrible and discriminatory decisions," he said. "However, once deployed, they have gone largely unchallenged, and once you are an official government supplier, it is relatively easy to sell to another government department."

Indeed, EPIC's Fitzgerald said the dynamic could resemble the California Consumer Privacy Act, under which companies doing business in that state have to ensure they're meeting data privacy requirements for California residents; Washington's bill could work in a similar way. Making a product meet one state's requirements could broadly impact how AI is designed and built, she said.

"To get contracts in Washington state, a product [would have] to be provably non-discriminatory," she said. "You would hope a company's not going to make a non-discriminatory version for Washington state and a version that discriminates for Massachusetts. They're going to make one version. So individual states can have a big impact on policies when they act."
