
Q&A: Prioritize AI adoption while safeguarding human skills

In this Q&A, Gartner analyst Arun Chandrasekaran unpacks some of generative AI's invisible undercurrents and what to do about them.

ORLANDO, Fla. -- These questions are on many business leaders' minds: What am I missing when it comes to AI? Are there hidden risks I should know about? And what can I do to protect my organization?

At Gartner's Data and Analytics Summit this week, identifying and mitigating those risks were top of mind. After all, what is the use of investing in costly, time-consuming AI initiatives if there are hidden issues that could wreak havoc later?

In his session, "Generative AI's Invisible Undercurrents: 10 Blind Spots CDAOs Aren't Watching but Should," Arun Chandrasekaran, distinguished VP analyst at Gartner, identified 10 prominent concerns that should be on everyone's radar. Undercurrents ranged from shadow AI and ethical liability to technical debt and ecosystem lock-in.

The following interview took place at Gartner's Data and Analytics Summit. In it, Chandrasekaran explores two of the 10 invisible undercurrents of AI that put businesses at risk: equating deployment with true adoption and the erosion of human skills.

Editor's note: The following interview was edited for length and clarity.

Why is it essential to consider deployment and adoption as two separate entities when we're talking about value drivers for AI?

Arun Chandrasekaran: Deployment is how IT measures what they're delivering within the organization, but it doesn't equate to adoption.

It's a very common pattern we see. For example, you deploy an internal AI chatbot, meaning everybody within the organization has access to it, and usage of the chatbot is perhaps hovering at 95% at the end of month one. At the end of month three, adoption has gone down to 70%. At the end of month six, adoption has gone below 50%.

Some of that attrition is natural because people are going to make their own decisions on whether they want to use AI or not. But the fact that adoption continues to go down also signifies a different problem within the organization.

That problem can be manifold. One cause could be not stress testing the application before deploying it -- thinking the application is going to be inherently useful when the perceived value from users is far less. A second could be not adequately integrating the application into users' workflows. A third could be that users don't trust the AI very much.

What are your main recommendations for an organization looking to move past deployment and garner value through adoption?

Chandrasekaran: First, evaluate and test applications very rigorously, simulating how real-world interactions are going to look. The second piece of advice is to conduct user testing. Build a prototype and test it with users for both its usefulness and how well it integrates into existing workflows.

I would also argue that you must constantly update what you've released to make sure it's current. You must have a team that's looking at new techniques and trends that are emerging. And I don't mean just the AI models themselves -- a large part of this is also making sure you're bringing the right data into these workflows.

Also, establish feedback loops with users on the product's usefulness and what can be done to make it better. These could be very simple things, such as thumbs up versus thumbs down or open-ended responses. Make sure you collect those responses and bring that feedback into the product.

At Gartner's D&A Summit 2026, Chandrasekaran highlighted recommendations to mitigate GenAI risks, including tips for ensuring successful AI adoption.

You mentioned in your session that you brief AI champions or AI coaches who are appointed by their organization. What's the purpose of an AI champion? And what are the main aspects of their role they should know?

Chandrasekaran: The broad goal is democratization of AI. Organizations want employees to use AI because they believe it's going to make them productive and make the organization more competitive.

But adoption of AI, like any new technology, can be very challenging because employees have their own views on AI and the ways in which they want to work.

The whole idea behind an AI champion is to have evangelists within individual teams. Organizations are starting to create what they call a 'community of practice,' which is people who are reasonably well-trained and skilled in AI.

These people undergo additional training to understand their organization's AI strategy and policy, with the goal that they'll instill that knowledge within their microcosm, which is within their product teams and business units, where they become mini thought leaders on all things AI.

They're also focused on finding opportunities where AI can be effectively used within those respective teams and making sure any employee concerns or questions around AI are handled. The whole idea is to populate these people across teams to democratize AI.

You mentioned that skills erosion is one of the most consequential risks of AI. Why is skills erosion so dangerous? What are your recommendations for striking a balance between adopting AI and safeguarding human skills?

Chandrasekaran: Of the 10 things I spoke about, skills erosion is perhaps the most significant one -- we often don't understand the long-term impact of AI adoption.

I'm truly an optimist when it comes to AI, but we have to think through the long-term consequences of over-reliance on AI and not practicing the skills that are extremely critical for an organization to be successful and competitive in the future.


Skills erosion happens when we don't practice those skills or use them as frequently as we did before because we are becoming over-reliant on AI.

I'm sure a lot of us are already facing this in our personal lives, where it almost seems like we don't want to think a lot these days. If a problem is just hard enough, we're going to go to AI and ask it, 'Hey, how do I solve this? What's the answer?'

This is particularly visible in the student population. I have two young kids, and I always tell them, don't ask AI for an answer. Say to it, 'Here is where I'm stuck. How do I go from point A to point B?' That's how you should be using AI. I'm not going to ask you not to use AI because understanding AI and interacting with it is a very critical skill in our personal and professional lives. But we cannot stop exercising the things that make us what we are.

Organizations need to think about the ways they can get employees to practice and hone those core skills. This is also about educating employees on what AI is good at. What is the intention of using AI in a business function or a process? It means imagining the entire workflow and deciding: this is where AI is going to help us, and this is where we want human judgment to complement what AI is giving us.

Organizations need to define that clearly moving forward. It's not easy, because AI tools are so pervasive. It's almost to the point where it's addictive, I would argue. But we need to figure out ways to preserve the skills that are critical for us.

Olivia Wisbey is a site editor for Informa TechTarget's AI & Emerging Tech group. She has experience covering AI, machine learning and software quality topics.
