
The human problem with generative AI in HR

Generative AI's integration into HR software and processes presents both internal and external challenges, with concerns ranging from technology errors to human complacency.



Much can go wrong with generative AI in HR, ranging from errors in the technology to mistakes by humans operating these systems. The problems can come from two different directions: internal and external.

Generative AI requires human oversight, but experts say that attention might fall away as workers become too complacent and trusting of the technology. Generative AI's ability to deliver wrong answers convincingly and its potential for bias also add to the risks of AI in HR.

Generative AI might also deliver external threats to HR. Job seekers can use large language models (LLMs), such as those that underpin ChatGPT, to craft resumes and cover letters with the right experience, words and phrases to be selected by an AI-enabled applicant screening system. That can make more work for recruiters in verifying various claims.

Nonetheless, HR managers are feeling pressure to adopt generative AI. In Gartner surveys, 76% of HR leaders have reported that "if they don't adopt [generative AI] within the next one or two years, they'll be lagging," said Eser Rizaoglu, a Gartner analyst, speaking Monday at the company's ReimagineHR Conference in Orlando.

Using Gartner's hype cycle methodology, which the firm contends represents the typical path of new technologies, Rizaoglu said interest in generative AI has reached the "peak of inflated expectations." Next comes the "trough of disillusionment," a stage in which some users may give up on the technology.

Rizaoglu believes it will take two to three years for generative AI to start to demonstrate maturity -- the final stage of Gartner's five-phase hype cycle.

But there will be pitfalls to HR's adoption of AI, including the risk of putting too much trust in AI recommendations, said Balaji Padmanabhan, professor of decision, operations and information technologies at the University of Maryland's Robert H. Smith School of Business.

The complacency risk "will never go away," Padmanabhan said. "And once the comfort level increases, that risk may actually increase in time."

The risk of too much trust

One of the first generative AI applications in HR is writing job descriptions, and here Padmanabhan can see how complacency might take over. Once the AI system correctly constructs the first 10 job descriptions, he said, employees might stop checking its subsequent outputs.

Padmanabhan said other generative AI risks include the "huge problem" of incorrect answers. LLMs don't understand the underlying knowledge in their data; instead, they learn how to connect words. "They're learning the structure of language, which is what they're meant to do," he said.

Padmanabhan said generative AI needs verification systems to double-check any outputs. One model might be verification as a service, where humans with expertise in benefits, for instance, review an LLM's responses.
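In practice, such a verification layer can be as simple as a workflow that refuses to release any LLM output until a named expert signs off. The short Python sketch below is a hypothetical illustration of that idea, assuming a simple in-memory review queue; the class and function names are illustrative, not drawn from any vendor's product.

from dataclasses import dataclass, field
from enum import Enum


class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class GeneratedContent:
    """An LLM output, such as a benefits answer, awaiting expert verification."""
    prompt: str
    draft: str
    status: ReviewStatus = ReviewStatus.PENDING
    reviewer_notes: list[str] = field(default_factory=list)


def submit_for_review(draft: GeneratedContent, queue: list[GeneratedContent]) -> None:
    """Route every LLM draft to a human review queue instead of publishing it directly."""
    queue.append(draft)


def record_review(draft: GeneratedContent, approved: bool, note: str) -> None:
    """A subject-matter expert approves or rejects the draft and documents why."""
    draft.status = ReviewStatus.APPROVED if approved else ReviewStatus.REJECTED
    draft.reviewer_notes.append(note)


def publishable(draft: GeneratedContent) -> bool:
    """Only human-approved content leaves the pipeline."""
    return draft.status is ReviewStatus.APPROVED


# Example: a benefits answer drafted by an LLM is held until an expert signs off.
review_queue: list[GeneratedContent] = []
answer = GeneratedContent(
    prompt="Summarize the dental plan coverage for new hires.",
    draft="New hires are covered for two cleanings per year...",  # hypothetical LLM output
)
submit_for_review(answer, review_queue)
record_review(answer, approved=True, note="Checked against the current plan document.")
print(publishable(answer))  # True only after explicit human approval

The point of the gate is that nothing the model produces is publishable by default; approval is an explicit, documented human action.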

John Farley, managing director of the cyber liability practice at Gallagher, a risk management and HR and benefits consulting firm in Chicago, believes enthusiasm about new technologies pushes aside discussions of risk.

"All too often as a society, we tend not to think about the things that can go wrong," Farley said. "We gravitate toward efficiencies and get excited about the potential that new technology brings to an organization."

To protect themselves from such risks, employers will need to take several steps, such as having contracts with vendors that address liability on issues such as intellectual property protection, according to Farley. They'll also need strong data governance programs to ensure that AI systems are used correctly by employees, he said.

Figure: Interest in blockchain and the metaverse from 2018 to 2023. Gartner says new technologies start off with inflated expectations, illustrated by the path of blockchain and metaverse technologies, which are seeing declining interest. Right now, interest in generative AI is high.

Good governance key to AI in HR

Job applicants who use generative AI also complicate HR's work, said Trevor Bogan, regional director for the Americas at Top Employers Institute, a global HR certification provider based in Amsterdam.

Bogan said candidates use generative AI to enhance their resumes, which can lead to an overreliance on superlatives and produce inaccurate or misleading information.

Recruiters will "have to do a little bit more research to make sure that this person is who they say they are," Bogan said.

As "much as AI can go wrong, AI can do right," said Christopher Hojnowski, vice president of technology and cyber product head at Hiscox, a global business insurance provider.

But Hojnowski recommended that employers have strong policies governing how employees use generative AI systems. For instance, they might establish that employees post an AI-generated job description only after human review.

"It's going to be up to the companies to write strong guidelines around how AI can be used," Hojnowski said. That includes setting expectations that employees "can't be complacent."

But employees could still become too trusting of AI systems, said Flavio Villanustre, vice president of technology and global chief information security officer at LexisNexis Risk Solutions.

There is a risk of misuse because some employees may naturally place too much trust in the model, leading to problems later, he said. They might look for human characteristics in AI systems or believe the system has a personality or even human-like reasoning, also known as artificial general intelligence.

"As humans, we try to find signs of consciousness in almost anything," Villanustre said.

Patrick Thibodeau covers HCM and ERP technologies for TechTarget Editorial. He's worked for more than two decades as an enterprise IT reporter.
