
Feds look for AI fingerprints in HR job rejections

The U.S. Equal Employment Opportunity Commission is working to learn all it can to detect AI-based discrimination in screening, recruiting and hiring.

In the last year, the U.S. Equal Employment Opportunity Commission has filed more than 140 lawsuits alleging employment discrimination, a 50% increase over the previous year. The majority of those filings target companies accused of discrimination, bias and harassment, not misuse of automation and AI systems.

But the dearth of enforcement against HR's misuse of AI-enabled recruiting systems doesn't mean the agency isn't looking for evidence.

Over the past two years, the Equal Employment Opportunity Commission (EEOC) has drawn attention to the risk of AI bias in hiring. It's also improving its expertise and investigative ability to identify problems. The agency said it is working with HR software vendors and employers to educate them on identifying potential discrimination in AI systems.

But in a recent wide-ranging Brookings Institution forum, EEOC Chair Charlotte Burrows acknowledged some difficulties in identifying AI discrimination. Among them: individuals denied jobs may "have no idea" automation was responsible, she said.

Burrows said the EEOC is training its investigators to understand what questions to ask and what clues to look for. If candidates received job rejections at 2 a.m. or five minutes after applying, for example, that was probably not the work of a human, she explained.
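The timing heuristic Burrows described can be sketched in a few lines. This is a minimal illustration, not agency methodology: the thresholds (a five-minute turnaround, rejections sent between midnight and 6 a.m.) are assumptions chosen to match her examples.

```python
from datetime import datetime, timedelta

# Illustrative assumptions, not EEOC guidance:
ODD_HOURS = range(0, 6)                  # midnight through 5:59 a.m.
MIN_HUMAN_REVIEW = timedelta(minutes=5)  # faster than a person could plausibly review

def looks_automated(applied_at: datetime, rejected_at: datetime) -> bool:
    """Return True if rejection timing suggests an automated decision."""
    too_fast = (rejected_at - applied_at) <= MIN_HUMAN_REVIEW
    odd_hour = rejected_at.hour in ODD_HOURS
    return too_fast or odd_hour

# A 2 a.m. rejection, hours after applying:
print(looks_automated(datetime(2023, 9, 1, 17, 0), datetime(2023, 9, 2, 2, 0)))    # True
# A next-morning, business-hours rejection:
print(looks_automated(datetime(2023, 9, 1, 17, 0), datetime(2023, 9, 2, 10, 30)))  # False
```

Timing alone proves nothing, of course; in practice such flags would only prompt the follow-up questions investigators are being trained to ask.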

The agency is also examining algorithm-based employment screenings, such as gamified assessments, for signs of bias. The EEOC's concerns extend to recruiting. Some companies proactively send job ads to specific individuals "based on an algorithm about what they believe individual characteristics are," Burrows said.

[AI development and civil rights law] are two different worlds.
Charlotte Burrows, Chair, U.S. Equal Employment Opportunity Commission

One of the most significant challenges in HR's use of AI is the complexity and opacity of these systems. Burrows said that understanding how AI algorithms work requires specialized expertise, making it difficult for civil rights experts to bridge the gap between the law and AI technology. "They're two different worlds," she said.

Another problem is the lack of diversity in AI development, which Burrows said may inadvertently discriminate against or disadvantage certain groups.

The EEOC filed its first automation-related lawsuit in 2022 against iTutorGroup Inc., an English language tutoring service, for using software that automatically rejected applicants based on age. A settlement in August provided $365,000 to rejected applicants.

The settlement prompted employment attorneys to warn HR departments about the risks involved with automated decision systems.

"Vendors do not have responsibility for the decisions their tools make," wrote Jennifer Dunlap, an employment attorney at Baker Donelson, in a blog post following the settlement. "Thus, employers cannot rely solely on representations from vendors regarding their software's compliance with discrimination laws." She said employers must ensure that their employees are properly trained and do not inadvertently create bias.

Patrick Thibodeau covers HCM and ERP technologies for TechTarget Editorial. He's worked for more than two decades as an enterprise IT reporter.
