
New skills in demand as generative AI reshapes tech roles

With generative AI adoption on the rise, employers are prioritizing creativity and problem-solving alongside technical skills for roles in software development and data science.

Interest in generative AI has skyrocketed across job functions this year, and technical roles are no exception.

In recent research by Stack Overflow, 70% of more than 67,000 professional developers surveyed were already using or planning to use AI tools. But as the initial hype begins to fade, the need for realistic evaluation of generative AI's capabilities is becoming clear: In the same group, less than 3% of respondents reported a high level of trust in AI output.

Integrating AI into enterprise tech workflows is complex, requiring careful evaluation of the potential risks and benefits. And as use of generative AI becomes more common, organizations' technical hiring processes and internal policies will need to evolve accordingly.

Employers are seeking out expertise with generative AI tools

Familiarity with generative AI is emerging as a valuable skill for technical roles such as software development, engineering and data science. This includes both conceptual understanding of how the technology works and hands-on experience with tools such as ChatGPT and GitHub Copilot.

As companies' digital transformations progress, the demand for tech talent continues to exceed supply. Used judiciously, generative AI could enhance technical employees' productivity by reducing time spent on repetitive tasks such as writing boilerplate code, helping to mitigate that skills gap.

"The demand for people who can use large language models on the job effectively will shoot up," said Dan Finnigan, CEO of technical hiring platform Filtered.

However, using AI effectively requires recognizing its limitations. Tools for developers and engineers are still more experimental than production ready, necessitating close human oversight.

"It's kind of like having a junior engineer sitting right next to you who has a lot of knowledge, [but] might not necessarily know how to deploy it," said Preeti Kaur, head of engineering at Honor, a home care network and technology platform.

Users of generative AI need sufficient knowledge to recognize when a model's output is incorrect. If engineers can't reliably identify and fix a tool's inevitable errors, or if a model produces low-quality output too often, organizations could lose any potential productivity gains to debugging flawed AI-written code.
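In practice, that oversight often looks like ordinary code review and testing. The sketch below is purely hypothetical (the function and its bug are invented for illustration, not drawn from any company quoted here): an AI-suggested helper that reads plausibly but fails a basic edge-case test a human reviewer should catch.

```python
# Hypothetical illustration: an AI-suggested helper that looks correct at a glance
# but mishandles an edge case a reviewer should catch before it ships.

def average_latency_ms(samples: list[float]) -> float:
    """AI-suggested implementation: compute the mean request latency."""
    return sum(samples) / len(samples)  # Bug: raises ZeroDivisionError on an empty list


def test_average_latency_handles_no_samples():
    # A basic edge-case test surfaces the flaw before it reaches production.
    assert average_latency_ms([]) == 0.0


if __name__ == "__main__":
    try:
        test_average_latency_handles_no_samples()
        print("edge case handled")
    except ZeroDivisionError:
        print("AI-suggested code fails on empty input; human review and a fix are needed")
```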

"We're trying to solve for productivity," said Bhawna Singh, CTO of Customer Identity Cloud at Okta, an identity and access management company. "We don't want to turn the cycle back."

The growing demand for AI, data science and infrastructure skills

Beyond proficiency with generative AI tools themselves, the AI boom is likely to drive demand for skills related to building and managing those systems.

Generative AI-related listings on job portal Indeed have more than doubled since 2021. And the broader job market for AI, machine learning (ML) and data science roles is also expanding: The U.S. Bureau of Labor Statistics projects that employment of data scientists will grow by 36% between 2021 and 2031.

Recent months have also seen growing enterprise interest in private, customized large language models (LLMs) trained on industry-specific and proprietary data sets. Skills related to tailoring models to organizational architectures and data are therefore likely to become particularly valuable moving forward.

An internal model would be an appealing use case for a company like Okta, Singh said -- for example, an LLM trained on Okta's extensive data on authentication, authorization and threat actors' attack techniques. "That will be certainly one skill set that we would look into," she said.

Similarly, if widespread AI adoption continues, companies will need IT teams that can manage the underlying infrastructure for AI systems. Large generative models are often resource-intensive and computationally expensive.

To build and maintain the IT architecture required to run AI at scale, organizations will need employees who can plan resource use efficiently and who bring skills such as cloud cost management, efficient systems design and hardware optimization.

New skills and even job roles have also emerged as a result of generative AI. At some organizations, prompt engineering -- the practice of crafting instructions for AI systems that elicit the best possible results -- is a new position in its own right. But it's also a valuable skill outside of specialized AI roles.

"Getting comfortable with the prompts ... and making [generative AI] useful for your use cases, I think that's general know-how," Singh said. "It's certainly something that everyone will have to embrace, whether you are in engineering or not, or whether you are in the ML space or not."

Understanding how to prompt generative models well is key to getting the most out of such tools, and it also demonstrates skills like creativity and problem-solving. "I feel like every engineer has to have some kind of a prompt engineering mindset now," Kaur said.
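As a rough illustration of what that mindset looks like in code, the sketch below contrasts a vague prompt with an iteratively refined one. It assumes the OpenAI Python client and an API key are available; the model name and prompts are illustrative choices, not recommendations from the people quoted here.

```python
# A minimal prompt-engineering sketch, assuming the OpenAI Python client
# (pip install openai) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# A vague first attempt often yields unfocused output.
vague_prompt = "Write a function that validates input."

# An iterated prompt pins down language, behavior and constraints.
refined_prompt = (
    "Write a Python function validate_email(address: str) -> bool that rejects empty "
    "strings, checks for a single '@' and a dotted domain, includes a docstring and "
    "type hints, and uses no external libraries."
)

for label, prompt in [("vague", vague_prompt), ("refined", refined_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```

Comparing the two responses side by side makes it easier to see how added constraints change the quality and relevance of the output, which is the kind of iteration the assessments described below try to observe.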

The Filtered platform is introducing job simulations designed to test candidates' prompt engineering abilities, Finnigan said. Employers can evaluate candidates' creativity and persistence by observing how they approach the task of writing effective prompts, including how they iteratively revise their input in response to mistakes or unexpected output.

Generative AI's effects on technical hiring

While generative AI skills are increasingly sought after, the technology itself poses unique challenges for technical hiring.

It remains difficult to reliably identify generative AI output. Consistently accurate tools for detecting AI-generated code don't yet exist, and attempts to create AI text detectors have so far been disappointing. OpenAI, for example, recently took down a classifier intended to differentiate between human- and AI-generated text, citing its "low rate of accuracy."

This creates a new challenge in technical hiring: knowing whether a candidate wrote code themselves or instead relied on ChatGPT or a similar tool. Overdependence on generative AI could cause problems down the line if a candidate misrepresents their skills and can't keep up with the complexity of real-world enterprise environments.

One solution could be to reduce emphasis on unmonitored coding challenges in favor of in-person whiteboarding or supervised technical assessments. And asking nuanced follow-up questions can help employers gauge a candidate's analytical skills and knowledge of underlying concepts.

At Okta, for example, interviewees might be asked to explain their reasoning or compare alternative approaches to a problem, such as optimizing for memory versus performance, Singh said. If a candidate can confidently answer those questions, she doesn't view using generative AI as a problem; in fact, it might even be a positive, signaling creativity and openness to experimentation.
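As a purely illustrative example of that kind of trade-off (not an actual Okta interview question), the sketch below solves the same membership-check problem two ways: one implementation leans toward lower memory use, the other toward faster lookups.

```python
# Illustrative only: two valid answers to the same interview-style problem,
# one optimized for memory and one for lookup speed.
from bisect import bisect_left


def build_index_compact(user_ids: list[int]) -> list[int]:
    """Memory-lean: a sorted list supports O(log n) membership checks via bisect."""
    return sorted(user_ids)


def has_user_compact(index: list[int], user_id: int) -> bool:
    pos = bisect_left(index, user_id)
    return pos < len(index) and index[pos] == user_id


def build_index_fast(user_ids: list[int]) -> set[int]:
    """Faster: a set gives O(1) average lookups at the cost of extra memory per entry."""
    return set(user_ids)


if __name__ == "__main__":
    ids = [42, 7, 19, 7, 3]
    compact = build_index_compact(ids)
    fast = build_index_fast(ids)
    assert has_user_compact(compact, 19) and 19 in fast
    assert not has_user_compact(compact, 100) and 100 not in fast
```

Explaining when each version is appropriate, rather than just producing one of them, is the sort of reasoning such follow-up questions are meant to surface.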

If a candidate can't competently explain or modify their AI-generated code, "then yes, that's not the engineer we need," she said. "Because now you have someone who's just using it without understanding, and that's scary and that's concerning. But if you're understanding it ... I would personally say it's OK."

Finnigan shared a similar sentiment. Filtered's platform now signals to employers when a job candidate uses ChatGPT during a technical assessment. (Currently, the software only tracks ChatGPT use, though Finnigan anticipates that Filtered will eventually need to provide information on use of alternative tools such as GitHub Copilot or Tabnine.)

But if a candidate does use ChatGPT during the hiring process, that shouldn't automatically be a red flag, Finnigan said. What matters is how a candidate incorporates generative AI into their thinking and coding process as a problem-solving tool.

"The purpose of the recruiting process is to determine whether [a candidate] can code and whether they write good code -- and, if they can, whether they can solve problems," he said. "And if they can, then they'll probably be really good at using generative AI in a productive way."

Responsibly integrating generative AI into technical workflows

Generative AI holds substantial promise for boosting tech employees' productivity if it's implemented carefully. For organizations, the key is to minimize risk without discouraging innovation.

Singh encourages leaders to be realistic: Developers and engineers typically want to experiment with new tools and platforms, which means that outright banning use of generative AI might not be desirable or even achievable.

"That's not the space we live in in tech," she said. "Accepting the hype, looking into it and giving guidance -- that's the goal."

And that's not necessarily a bad thing. Attempting to ignore generative AI entirely might be riskier than allowing use within agreed-upon parameters. If no guidance is provided, employees could end up using a range of different AI tools without their employer's knowledge, creating security vulnerabilities and fragmented IT environments.

Leadership can get ahead of the problem by proactively setting expectations for appropriate AI use at the organizational level and developing internal policies accordingly. Generative AI tools are still in their early days, and security, ethics and copyright concerns loom over corporate deployments. Together, these considerations mean that open communication and clear guidance are essential.

"I think it is on leadership to now take that hype ... and create guidance and guardrails," Singh said. "To say, 'This is what is OK, this is what [we're] still figuring out, and this is not OK.'"

Developing ways to handle incorrect AI output -- a phenomenon known as hallucination -- and rigorous evaluation frameworks will also be key to successful enterprise adoption of generative AI. Currently, Kaur advocates small-scale, incremental implementation only after extensive human evaluation, including thoroughly testing edge cases.
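A minimal sketch of what such an evaluation gate might look like appears below. It assumes a generate callable that wraps whichever model or tool is being trialed; the test cases, exact-match scoring and pass-rate threshold are all illustrative choices, not Honor's actual framework.

```python
# A minimal pre-deployment evaluation harness sketch. Real evaluations would use a
# richer scoring function than exact string match; the threshold here is arbitrary.
from typing import Callable


def evaluate(generate: Callable[[str], str], cases: list[tuple[str, str]],
             min_pass_rate: float = 0.95) -> bool:
    """Run the model over known prompts (including edge cases) and gate on accuracy."""
    passed = sum(1 for prompt, expected in cases if generate(prompt).strip() == expected)
    rate = passed / len(cases)
    print(f"{passed}/{len(cases)} cases passed ({rate:.0%})")
    return rate >= min_pass_rate


if __name__ == "__main__":
    # Hypothetical stand-in for a real model call.
    def fake_model(prompt: str) -> str:
        return "4" if prompt == "2 + 2" else "unknown"

    cases = [("2 + 2", "4"), ("", "unknown"), ("2 + two", "4")]  # include edge cases
    ready = evaluate(fake_model, cases)
    print("proceed with small-scale rollout" if ready else "hold back from production")
```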

"Having guardrails and having some idea of evaluation before you start using any of these tools is very important, because I personally don't think much is enterprise ready out there," she said.

At Honor, Kaur said that engineering teams have experimented with generative AI tools including GitHub Copilot and Sourcegraph's Cody. And although she's optimistic about their potential, especially Cody, she emphasized that extensive research and experimentation are needed before production deployments. Ongoing monitoring capabilities and consistently accurate results are especially critical in healthcare contexts like Honor's, where mistakes can directly affect patients' lives.

"It's not just about how fast you can process something, it's about how reliably you can process something," Kaur said. "And then, how do you keep checking that it's still doing what you wanted it to do?"
