
As HR adopts AI in hiring, the risks are mounting

Starting July 5, New York City's anti-bias law requires independent audits of AI hiring tools and public posting of the findings, creating new risk that could discourage use of these tools.

Concerns about using AI in hiring are rising. Emerging regulation and litigation could prompt some in HR to conclude that AI's risks are too great. The technology faces a new test on July 5, when New York City's anti-bias law takes effect.

The law requires employers -- not vendors -- to have AI hiring tools evaluated through independent audits, even if the tools are not used to make final hiring decisions. Employers must also post a detailed summary of the audit's findings online, which could expose them to liability if the findings point to bias.

"This law is going to discourage the use of these tools in New York City," said John Rood, founder and CEO of Proceptual in Highland Park, Ill., an AI compliance firm that conducts independent audits. He said the public posting of audit reports is a "huge issue" and potentially opens employers up to new risks.


Employers are used to running bias tests to ensure they aren't discriminating and are meeting the requirements of federal law. If they find a problem, they can go back and fix it, he said. But NYC's AI bias law creates a new challenge.
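The bias tests referenced here typically follow the EEOC's "four-fifths rule": if any group's selection rate falls below 80% of the highest group's rate, that is treated as evidence of adverse impact. A minimal sketch of such a check, with hypothetical group names and counts:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical applicant pools: (selected, total applicants)
pools = {"group_a": (48, 100), "group_b": (30, 100)}
ratios = impact_ratios(pools)

# Groups below the four-fifths (0.8) threshold indicate potential adverse impact.
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(flagged)  # {'group_b': 0.625}
```

Under NYC's law, a finding like the flagged group above would no longer be a private fix-it item but part of a published audit summary.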

If the audit indicates bias, and that finding is published online, employers have "created a new liability for themselves," Rood said. A bias finding is something that the U.S. Equal Employment Opportunity Commission (EEOC) "is going to be very interested in," he said.

New York state lawmakers might take NYC's law statewide. California lawmakers are considering a similar bill. Other jurisdictions weighing audit laws include the District of Columbia, Connecticut and New Hampshire.


AI risks still too great

AI hiring tools that can rank and sort candidates have been around for about five years. Vendors argue the technology is less biased than humans. But for some HR managers, the AI risks are enough to make them cautious.

Among them is Nathaalie Carey, HR chief of staff at Prologis, a San Francisco-based logistics real estate company with 2,500 employees.

Carey compares the risk of AI in hiring to self-driving cars: She's not ready to take her hands off the steering wheel and let the AI drive the car. In time, that might change, but for now she is using caution.

"How do we ensure that the results we are looking for are presented in an unbiased and socially responsible way?" Carey said.


Carey has been collecting data on AI-enabled HR tools. Some vendors won't say how the algorithms operate -- information that would help Carey's HR team have confidence in the tool's results. "We are told it's proprietary," she said.

"The biggest thing that builds confidence is truly time," Carey said. And HR users of these systems "are still early adopters."

Increasing regulation is one challenge facing HR's use of AI hiring tools. The other shoe to drop is litigation. The EEOC has warned about the AI risk of bias. Federal regulators have yet to file a lawsuit, but hiring-related lawsuits seeking class action status are emerging.

In April, CVS Health Corp. was sued in a Massachusetts Superior Court by Brendan Baker, who had applied for a job in 2021. CVS used HireVue's video interviewing system and a tool by Affectiva "to understand human emotions, cognitive states, and activities by analyzing facial and vocal expressions," the lawsuit claimed. It argued that the technology amounted to a lie detector test, which is illegal for employers to use under state law.

HireVue stopped using visual analysis in 2021 after concluding that "visual analysis has far less correlation to job performance" than language data.

In a statement to TechTarget Editorial, Lindsey Zuloaga, HireVue's chief data scientist, said, "HireVue assessments use machine learning to analyze a candidate's transcribed answers to interview questions -- these algorithms are also locked, meaning they don't change when interacting with candidates, and they do not look at anything visual or analyze tone, pauses or other forms of inflection." Zuloaga continued by stating, "Our assessments are not, and have never been, designed to assess the truthfulness of a candidate's response."

In February, a lawsuit filed in federal court in Oakland, Calif., alleged Workday's AI software enables discrimination against Black, disabled and older workers. The plaintiff, Derek Mobley, claimed the AI screening tools "rely on algorithms and inputs created by humans who often have built-in motivations, conscious and unconscious, to discriminate." Workday said the lawsuit was without merit.

More human involvement

Independent audits can show whether AI tools discriminate based on gender, race, ethnicity, disability and other protected classes. Audits are useful tools, according to Rania Stewart, an analyst in the Gartner HR practice.

The auditors are staking their businesses on their ability to conduct good audits, Stewart said. But if employers that conduct third-party audits still face lawsuits, the value of the audits could be questioned. "There is a lot of cautious optimism with those third-party audits at this point," she said.

Stewart said AI recruiting tools are shifting from selecting candidates to finding or sourcing those who have the best skills. That requires more human involvement from HR. It is an approach that is changing how HR uses AI. "It's not if we use AI; it's how we use it and how we use it to augment versus just automate," she said.

David Lewis, president and CEO of OperationsInc, an HR consulting firm in Norwalk, Conn., said employers should have a "healthy amount of skepticism" about using AI hiring tools and consider the AI risks.

While AI in applicant tracking systems has potential, he said, "you're still at the point where it's just like the first flat-panel TV," which lacked high definition and other technologies.

The AI risks are such that even if employers buy this technology through a reliable and proven source, they can't assume a vendor will "have your back in a worst-case scenario," Lewis said.

Patrick Thibodeau covers HCM and ERP technologies for TechTarget Editorial. He's worked for more than two decades as an enterprise IT reporter.

