How ChatGPT might change HR's job

HR tech companies are racing to find ways to use LLMs, which can speed up tasks such as writing communications. But the technology comes with risks and a need for human oversight.

As a reporter, I think about how technologies such as ChatGPT could put me out of work. That concern helps explain why this technology feels like the biggest thing since Tim Berners-Lee invented the World Wide Web. It's going to change a lot of things, especially in HR.

Like reporters, HR professionals have jobs requiring writing, communication and analytical ability -- skills at which ChatGPT excels. For this reason, HR is emerging as one of the leading business applications for large language models (LLMs).

The ChatGPT API was made available in early March, and HR vendors have already released beta implementations that use the technology to write job descriptions. But humans will still need to check machine-generated text -- good news for people whose jobs require writing -- because accuracy remains a significant problem with LLM output.
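To make that workflow concrete, here is a minimal, hypothetical sketch of how a vendor might call the ChatGPT API to produce a first draft of a job description. The function name, prompt wording and role details are illustrative assumptions, not any vendor's actual implementation, and the draft would still go to a human recruiter for review.

```python
# Hypothetical sketch: drafting a job description with the ChatGPT API
# (openai Python library, ChatCompletion interface from the March 2023 release).
# The role details and prompt wording are illustrative, not any vendor's product.
import openai

openai.api_key = "YOUR_API_KEY"  # in practice, loaded from a secure environment variable

def draft_job_description(title: str, location: str, requirements: list[str]) -> str:
    """Ask the model for a first draft; a human recruiter reviews and edits it before posting."""
    prompt = (
        f"Write a concise job description for a {title} role in {location}. "
        f"Key requirements: {', '.join(requirements)}. "
        "Use an inclusive, plain-language tone."
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are an HR assistant that drafts job postings."},
            {"role": "user", "content": prompt},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = draft_job_description(
        "Service Technician", "Chicago", ["2+ years of experience", "valid driver's license"]
    )
    print(draft)  # always reviewed by a human before it is published
```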

This technology is transformational

Beyond the technological capability of LLMs, what struck me most was the sheer enthusiasm and interest in this technology. That came through when I interviewed Adam Robinson, CEO and co-founder of Chicago-based Hireology, an HR and recruiting systems vendor, and Sultan Saidov, co-founder and president of London-based Beamery, a talent lifecycle management vendor. Both companies moved quickly to release betas that use LLMs to prepare job descriptions.

But what did Robinson and Saidov think about LLMs generally -- and about their importance?

Robinson described the introduction of this technology as "a before-and-after moment," and Saidov called it "transformational."

Vendors tend to oversell new technological developments, but in this case, the HR vendors aren't hyping their own technology; they are racing to find broad ways to apply LLMs to their products.

The last time I saw this level of interest or enthusiasm in a technology change was when the world moved from the text-based internet to the web's graphical interface. After Netscape released its first browser in 1994, many businesses realized the web's transformative potential, and there was a rush to adoption. The same thing is happening today.

LLMs can speed up many HR tasks, including writing emails, memos and even performance reviews; analyzing and translating dashboards into text; customizing training materials; and offering much-improved humanlike chatbots that provide onboarding and benefits information. HR vendors will work to figure out how LLMs can be married to internal HR processes and how they can use in-house and external data -- because to work effectively, LLMs need access to a vast amount of data.
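As a rough illustration of what marrying an LLM to internal HR data could look like, here is a hypothetical sketch of a benefits chatbot that answers only from a supplied policy excerpt. The policy text, prompt and function name are assumptions made for illustration; a real product would first retrieve the relevant documents from internal systems.

```python
# Hypothetical sketch: grounding an HR benefits chatbot in internal policy text.
# The policy excerpt, prompt and function name are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"

POLICY_EXCERPT = (
    "New hires are eligible for health benefits on the first day of the month "
    "following 30 days of employment. Open enrollment runs each November."
)

def answer_benefits_question(question: str) -> str:
    """Answer only from the supplied policy text; decline if the policy doesn't cover it."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer employee benefits questions using ONLY the policy text provided. "
                    "If the policy does not cover the question, say you don't know."
                ),
            },
            {"role": "user", "content": f"Policy:\n{POLICY_EXCERPT}\n\nQuestion: {question}"},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

print(answer_benefits_question("When do my health benefits start?"))
```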

No upper bounds to advancement

Undoubtedly, some HR practitioners are already using ChatGPT and Bard ad hoc to summarize documents, write emails or reports, and conduct research. Exposure to a technology that's accessible with a few keystrokes and button clicks naturally fuels interest in HR applications that use LLMs, and vendors will need good answers to HR pros' questions.

But the true potential of LLMs has yet to be realized. These pattern-recognition systems can predict the next word and run analytics, and they might even be able to generate original hypotheses. For now, it's the technology's critics who are doing the most to define that potential.


The recent open letter, signed by tech CEOs such as Elon Musk and a long list of academics, called for a six-month pause on training models more powerful than GPT-4. Beyond job losses, the signatories worry about bad actors using LLMs to mislead people, and about how "a model meant to generate therapeutic drugs could be used to generate novel biochemical weapons instead" -- something researchers proved possible in 2022, according to the letter's FAQ.

LLMs might improve significantly faster than Moore's law, which holds that the number of transistors on a chip doubles roughly every two years. But Moore's law faces physical limits because transistors can shrink only so much, while no comparable ceiling is known for AI. "AI systems are growing ever more powerful -- and we don't know their upper bound," the letter's FAQ notes.

In HR, trust in LLMs will not be high. AI researchers and regulators have long warned about the risk of bias and the possibility of lawsuits over AI uses. Despite the risks, HR will adopt LLM technology because it will be a timesaver for some processes, such as producing first drafts of job descriptions or letters and scheduling interviews with job candidates. These are functions that still require human oversight -- for now.

Patrick Thibodeau covers HCM and ERP technologies for TechTarget Editorial. He's worked for more than two decades as an enterprise IT reporter.
