The Equal Employment Opportunity Commission has been increasingly warning employers to be careful about the risk of bias when applying AI to hiring systems.
In November, Charlotte Burrows, EEOC chair, questioned the effectiveness of some AI HR tools and described some vendors as selling "snake oil." The next step may be legal action.
On Thursday, the EEOC and the U.S. Department of Justice published a new guidance document for employers that underscores "how algorithms and artificial intelligence can lead to disability discrimination in hiring." They included a separate technical paper on AI bias in hiring that discusses the need for employment screening safeguards.
"Algorithmic tools should not stand as a barrier for people with disabilities seeking access to jobs," said Kristen Clarke, assistant attorney general of the Justice Department's Civil Rights Division, in a statement.
Paul Starkman, a labor and employment attorney for Clark Hill PLC in its Chicago office, said the joint EEOC-DOJ statement signals that investigations and "enforcement activity will also follow."
Employers need to understand how they use AI in their hiring and employment processes and "make sure that you have adequate safeguards" so that the AI does not perpetuate discrimination and bias, Starkman said.
HR's next step
One immediate step employers can take is to see whether their HR vendors will indemnify the company. If there is an investigation, the question is whether the vendor will "step in and defend the AI and cooperate in the investigation to show that the AI is not perpetuating discrimination and bias," Starkman said.
Even as it warns about AI bias in hiring, the EEOC has also argued that AI hiring tools used to sort candidates can do just the opposite and eliminate human discrimination.
"AI can help eliminate bias from the earliest stages of the hiring process," said Keith Sonderling, an EEOC commissioner and attorney, at HR.com's Empower HR Tech conference this week.
An AI-enabled resume screening program "can be taught to disregard variables that have no bearing on job performance," such as the applicant's name, sex, national origin, race and other facts, Sonderling said.
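Sonderling's point can be illustrated with a minimal sketch of that safeguard: a screening pipeline that strips protected attributes from each applicant record before any scoring model sees it. The field names and record structure here are hypothetical, not drawn from any specific vendor's product.

```python
# Minimal sketch (hypothetical field names): remove attributes that have
# no bearing on job performance before a screening model scores the record.
PROTECTED_FIELDS = {"name", "sex", "national_origin", "race"}

def redact(applicant: dict) -> dict:
    """Return a copy of the applicant record with protected fields removed."""
    return {k: v for k, v in applicant.items() if k not in PROTECTED_FIELDS}

applicant = {
    "name": "A. Smith",
    "sex": "F",
    "national_origin": "US",
    "race": "not disclosed",
    "years_experience": 7,
    "skills": ["python", "sql"],
}

screened = redact(applicant)
print(screened)  # only job-relevant fields remain
```

Note that, as Sonderling's later warning implies, redacting fields is not sufficient on its own: a model trained on skewed historical data can still learn proxies for the removed attributes.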
But he also warned that these tools can introduce technological bias due to "blind trust that robots will always get it right."
Sonderling added that "inaccurate, incomplete or unrepresentative data will only amplify rather than minimize bias."
"Using AI to make decisions ordinarily made by HR professionals can have significant legal ramifications," he said.
Patrick Thibodeau covers HCM and ERP technologies for TechTarget. He's worked for more than two decades as an enterprise IT reporter.