Why AI backlash is a leadership problem -- not a tech one
AI backlash is shaping up to be more of a trust issue than a tech one. Open communication and accountability can help overcome resistance and lead to successful AI integrations.
AI has dominated headlines, boardroom conversations and media attention as it evolved from a larger-than-life technology into an everyday tool with complex regulations and considerations.
The spotlight on AI has bombarded many workers and customers with both good and bad stories about its capabilities. When leaders launch new AI initiatives or integrations, workers may hesitate or resist new tools or have unrealistic expectations about their power. In fact, 44% of respondents in an Edelman Trust Barometer report claimed to be skeptical of businesses' use of AI.
Leaders who dismiss AI backlash as a misunderstanding or a lack of knowledge about the technology itself, rather than treating it as a leadership problem, risk alienating employees and customers and losing their trust. Leaders who tackle the backlash head-on can help employees feel heard and address their concerns, leading to more successful and effective AI integrations.
What leaders are missing about AI backlash
AI backlash is often dismissed as fear or resistance to new technology, so many leaders view it as an IT problem rather than a leadership concern.
As AI becomes a global, mainstream news topic, opinions on what it can and can't do are a hot-button issue. Persistent public conversations and media coverage about both AI successes and failures mean workers and customers have a preconceived notion of AI before it's introduced into workflows or operations.
"When leaders defend an AI initiative by saying, 'The technology works,' they are usually pointing to accuracy scores or benchmark performance," said Vishal Sharma, chief technology officer (CTO) at SearchUnify. "That response often misses the real issue. People are not reacting to the math behind the model. They are reacting to how it changes their work and who is accountable when something goes wrong."
This can lead to both public and internal resistance to the use of AI in operations or workflows. Employees may mistrust how tools are being used and how they will affect their jobs, and customers may be skeptical of how the business uses and stores their data.
"The backlash isn't about the technology failing; it's about the human trust failing," said Karlo Zatylny, CTO at Portnox. "When AI produces a confident but completely incorrect risk assessment, it doesn't just reveal a software bug. It exposes a fundamental lack of data integrity and a leadership team that values speed over verification."
When leaders fail to explain intent, impact and guardrails, they can instill fear and mistrust in workers and customers. Treating AI backlash as a technological misunderstanding, rather than as a leadership gap in responsible AI use, trust and transparency, lets fear and skepticism grow and makes them harder to dispel.
Leadership gaps that fuel AI backlash
Although workers may already negatively view AI, several leadership gaps and organizational challenges can amplify or ignite AI backlash and resistance, including the following:
Lack of ownership. Without clear ownership, transparency and accountability, miscommunication can erode trust and lead workers to resist AI initiatives. "One of the most dangerous leadership gaps I see is the 'rollout without responsibility' -- implementing tools without clear human ownership of the output," Zatylny said.
Rushed integration. When organizations rush the rollout of new AI tools, integration often precedes AI governance frameworks and boundaries for areas such as data use and storage, bias testing and incident response. Prioritizing speed over safety can erode trust between workers and leadership and raise security concerns.
Poor communication. The way leadership and managers present and discuss AI initiatives can affect how workers view them. "Leaders should continuously explain how the system is evolving, what feedback has influenced changes and where human judgment remains central," Sharma said. "AI does not create mistrust on its own. It accelerates whatever level of clarity or ambiguity already exists within the organization."
Unaddressed concerns. Many organizations sweep concerns under the rug or minimize them to avoid drawing attention. However, ignoring workforce fears of displacement and surveillance only causes those fears to grow and spread. Addressing them head-on can help quell fear and build trust.
Narrow mindset. When leaders view AI integration as an IT project rather than a business transformation, resources and strategy get siloed within the IT department, even though AI touches all areas of the business.
How to build trust in AI
Even if organizations fill common AI leadership gaps that cause mistrust and backlash, resistance to AI can still be strong enough that initiatives and tool adoption fail if left unaddressed.
"Leadership fails when they treat AI as a 'set it and forget it' efficiency tool rather than a transformation that requires new governance, new sandbox testing and constant human-in-the-loop validation," said Zatylny.
Over 40% of organizations cite concerns around trust, ethics and legal considerations as top barriers to AI implementation, according to a TEKsystems survey.
To give AI initiatives and integration the best chance of success, leaders should proactively build trust in ethical AI and help workers understand the initiative, its use and its effects.
"When people see benefits like gathering data for better decisions, faster insights and less manual burden, adoption will accelerate organically," said Ha Hoang, CIO of Commvault. "Ultimately, trust isn't built by declaring the system enterprise-ready. It's built by demonstrating that leadership is accountable for its outcomes."
To build -- or rebuild -- trust in AI initiatives, leaders can do the following:
Frame the conversation. Tell workers why AI is being used and what its real business use is. Frame the initiative around what it's doing for the organization, such as optimized operations or improved productivity.
Track valuable metrics. Identify and track success metrics that go beyond operational efficiency and cost savings to reinforce leadership's commitment to helping employees, not just business margins. CIOs should track employee- and customer-focused metrics, such as sentiment scores, time saved and customer satisfaction rates.
Keep communication open. Create feedback loops for employees and customers. Cultivate a culture that actively invites and accepts feedback, so leadership can proactively address concerns and issues before they become widespread. However, leaders must go beyond just receiving feedback. Actively listening, responding and following up demonstrates that concerns are taken seriously, which can build trust and credibility.
Demonstrate restraint. Being deliberate and disciplined about what to use and not use AI for signals mature leadership and builds trust that the organization thinks intentionally and ethically about AI integration, especially in cases involving high-risk or sensitive data.
Establish human oversight and accountability. "To rebuild trust, you need to change the management strategy from, 'The AI did it,' to, 'The human expert validated it,'" Zatylny said. "We must restructure our workflows so that AI facilitates the data gathering, but a person is 100% accountable for the decision. You can't automate accountability."
Make AI transparency a core business practice. "We build trust through radical transparency -- specifically, requiring AI to show its work through source citations and mandatory feedback loops," Zatylny said. "We actually track AI truthfulness as a metric. By categorizing AI responses as correct, mostly correct or incorrect, we show our teams that we aren't blindly following an algorithm; we are actively auditing it."
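A truthfulness metric like the one Zatylny describes can be sketched in a few lines. This is a hypothetical illustration, not Portnox's actual system; the category names and weights are assumptions:

```python
from collections import Counter

# Hypothetical category weights: full credit for "correct",
# partial credit for "mostly correct", none for "incorrect".
WEIGHTS = {"correct": 1.0, "mostly_correct": 0.5, "incorrect": 0.0}

def truthfulness_score(labels):
    """Aggregate reviewer labels into a single 0-1 truthfulness metric."""
    if not labels:
        raise ValueError("no labeled responses to score")
    counts = Counter(labels)
    unknown = set(counts) - set(WEIGHTS)
    if unknown:
        raise ValueError(f"unrecognized labels: {unknown}")
    total = sum(counts.values())
    return sum(WEIGHTS[label] * n for label, n in counts.items()) / total

# Example: 7 correct, 2 mostly correct, 1 incorrect -> (7 + 1) / 10 = 0.8
labels = ["correct"] * 7 + ["mostly_correct"] * 2 + ["incorrect"]
print(round(truthfulness_score(labels), 2))  # 0.8
```

Tracked over time, a score like this makes the audit visible: teams can see whether the system is getting more or less reliable, rather than taking its answers on faith.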
Treat AI as a change initiative instead of a tech deployment. Only 22% of organizations prioritize change management strategies as part of their transformation agenda, according to TEKsystems' survey. "CIOs can reposition AI as a governance and operating-model discipline, not just a technology capability," Hoang said. "AI mistrust isn't a signal to slow innovation. Rather, [it's] a signal that leadership maturity must keep pace with technological capability."
Prioritize observability. "Stand up AI observability at the workflow level, across copilots, agents and internal tools, before rewriting policy," said Rajesh Raman, CTO at Lanai. "You can't credibly talk about good and bad AI use without seeing the full portfolio."
Establish proactive ownership. "Governance should be defined before forward movement," Sharma said. "That includes setting clear expectations for oversight, review processes and acceptable use. When these mechanisms are introduced after an incident, they feel reactive rather than intentional."
"The companies that win here won't be the ones that kept AI bottled up the longest," Raman said. "They'll be the ones that saw clearly, concentrated AI on the highest‑yield workflows, and used that proof to scale with confidence rather than hope."
Alison Roller is a freelance writer with experience in tech, HR and marketing.