A California lawmaker recently introduced a bill, the Automated Decision System Accountability Act of 2021, which would require testing algorithms for bias, including those used in hiring. In New York City, the overwhelmingly Democratic city council is considering a similar AI bias testing requirement.
Congress may be next -- and soon. In 2019, two Democratic senators, Ron Wyden of Oregon and Cory Booker of New Jersey, introduced the Algorithmic Accountability Act. The bill would require testing for AI bias in any automated decision system (ADS).
The bill didn't advance in the then Republican-controlled Senate, but with Democrats now controlling both chambers, the outlook for passage may change in the new legislative session. The bill is expected to be reintroduced, according to two Senate sources.
Collectively, the bills in California, New York City and Congress represent some of the earliest efforts by lawmakers to regulate AI and require proof that these systems aren't biased. The legislation broadly affects any AI-enabled ADS, but hiring systems are considered a top area of concern for AI bias.
"You can't do anything about a problem until you measure it, and this [Wyden/Booker bill] says go measure it," said Mark MacCarthy, a senior fellow in the Institute for Technology Law and Policy at Georgetown Law and a senior fellow at the Brookings Institution.
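Measuring bias often starts with a selection-rate comparison across groups. As an illustrative sketch -- the bills discussed here do not prescribe any particular metric -- the "four-fifths rule" from the EEOC's Uniform Guidelines flags a hiring process for review when a protected group's selection rate falls below 80% of the most-selected group's rate:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the most-selected group's rate."""
    return group_rate / reference_rate

# Hypothetical numbers for illustration: 50 of 200 applicants selected
# in group A, 30 of 200 in group B.
rate_a = selection_rate(50, 200)              # 0.25
rate_b = selection_rate(30, 200)              # 0.15
ratio = adverse_impact_ratio(rate_b, rate_a)  # 0.6

# Under the four-fifths rule, a ratio below 0.8 signals potential
# adverse impact that merits further investigation.
flagged = ratio < 0.8
print(f"ratio={ratio:.2f}, flagged={flagged}")  # ratio=0.60, flagged=True
```

A ratio below the threshold is a screening signal, not legal proof of discrimination; a full audit, such as the one described later in this article, goes well beyond a single metric.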
AI may be better than humans
The decision-making of AI systems has to be validated, and the recommendations they make to employers checked for fairness, according to MacCarthy. Are they discriminating against people who are protected by law?
But if done right, MacCarthy believes AI can remove some biases from hiring, especially when compared to human decision-making.
"If there is any part of decision-making that has got all these weird, subjective, tainted discriminatory biases that operate in human decision-making, it's employment," MacCarthy said.
Ben Eubanks, principal analyst at Lighthouse Research & Advisory, said disclosure laws like those being proposed by New York City and California may help employers.
Noting the New York City bill in particular, Eubanks said, "the good thing about this is that it would force employers to really understand how their systems are making decisions, what factors are being evaluated, and it should offer them some peace of mind that the tools are not biased against any specific populations."
One firm that may be anticipating future laws is HireVue Inc., an automated interviewing vendor based in South Jordan, Utah. This week, it released an algorithmic audit by O'Neil Risk Consulting & Algorithmic Auditing (ORCAA) of its video interview platform. The audit did not find any AI bias.
The majority of customers use the platform just for video interviewing, but the remainder also use HireVue's natural language processing analysis, a branch of AI.
For instance, candidates may be asked about working in a team environment and to describe their contributions. Natural language processing then analyzes the response. Someone who uses the word "we" consistently "tends to correlate more toward team orientation than a candidate who might use the word 'I,'" said Kevin Parker, chairman and CEO of HireVue.
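The kind of signal Parker describes can be sketched with a simple pronoun count over a transcribed answer. This is purely an illustration of the idea, not HireVue's actual model, and the word lists here are assumptions:

```python
import re

def pronoun_counts(transcript: str) -> dict:
    """Count first-person plural vs. singular pronouns in a transcript.

    A toy proxy for the 'we' vs. 'I' signal described above; real NLP
    models weigh far more than raw word frequencies.
    """
    words = re.findall(r"[a-z']+", transcript.lower())
    we_forms = {"we", "our", "ours", "us"}
    i_forms = {"i", "my", "mine", "me"}
    return {
        "we": sum(w in we_forms for w in words),
        "i": sum(w in i_forms for w in words),
    }

answer = ("We divided the work, and I handled testing "
          "while we reviewed each other's code.")
print(pronoun_counts(answer))  # {'we': 2, 'i': 1}
```

Even a toy feature like this shows why audits matter: pronoun use varies across languages and cultures, so a model leaning on it could encode bias that only testing would surface.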
Last year, HireVue decided to stop using facial expression analysis, a type of AI used to capture and categorize small facial movements, because concerns about it had been growing, Parker said. Natural language processing has improved so much over the years that the firm determined it didn't need facial expression analysis to provide a substantive evaluation of an interview.
The facial analysis did generate some value, Parker said, "but it was disproportionate to the concerns it was raising."
Regarding the legislation to regulate AI, Parker said the firm is "generally supportive of things that improve transparency."