
Overcome roadblocks to GenAI adoption and unlock ROI

GenAI deployments can fall prey to unrealistic goals, misguided pilots, job loss fears, hidden costs and lack of trust. Governance and workforce readiness are keys to success.

Businesses often encounter a gap between their expectations of what generative AI initiatives can achieve and the reality of moving pilot projects into production deployments. Successful implementation of projects -- ranging from AI-powered chatbots with limited customization to advanced agents designed to autonomously handle complex tasks -- depends on more than IT teams evaluating cost-effective foundation models and infrastructure.

Before signing off on these AI initiatives, leaders must first identify the core business challenge and determine whether AI presents a real opportunity to address it. "It's basically classic business-case discipline. And a lot of companies don't do that," said Mark Beccue, principal analyst for AI at Omdia, a division of Informa TechTarget.

When AI pilots are scaled into production, they require investments in data preparation procedures; integration with legacy systems; security, governance and compliance processes; model optimization frameworks; and ongoing maintenance and performance monitoring to assess business and operational impact. Problems can arise in unexpected ways when a proof-of-concept system, tested on a subset of data and users, is deployed in a production environment.

AI initiatives that fail to deliver measurable outcomes are often undermined by a lack of organizational readiness and undocumented or poorly defined business processes, rather than purely technical issues. A structured approach to change management and workforce enablement is critical to the successful adoption of AI across teams and departments. Questions to be answered include the following: Who can use the AI tool? What tasks are allowed or banned? What jobs will be affected?

While AI tools can increase staff efficiency and free up capacity, business executives need to be realistic about GenAI's capabilities and limitations. GenAI is most effective when applied to clearly defined use cases where the tool's purpose matches the operational requirements of the task. The focus should be on reactive, prompt-driven systems designed for single-interaction content generation.

In contrast to GenAI, agentic AI can take actions or next steps on its own without being prompted, plan and orchestrate autonomous actions, and use AI agents to execute multistage tasks. These systems often overlap with GenAI because they are built on top of the same underlying generative models.

"At the bare minimum, if you embrace agentic in AI, assume that you've now hired a thousand new college graduates," said Kyle McNabb, vice president and principal at consultancy Hackett Group. "They're all smart -- some may even have advanced degrees -- but they don't know your business. What would you do with new hires? You'd train them, set policies, oversee their work and manage them. And the same thing is true with AI."

AI can reshape how work is performed, but it can't guarantee accuracy or understand business context without appropriate training data and human oversight. Production deployments must include human-in-the-loop validation, prompt and data governance, and controls for data leakage, bias and hallucinations.
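The human-in-the-loop validation mentioned above can be made concrete in code. The following is a minimal, hypothetical sketch -- the `ReviewQueue`, `validate_output`, confidence threshold and banned-term list are all illustrative assumptions, not part of any real product -- showing how a production system might auto-approve only high-confidence outputs that pass basic policy checks and escalate everything else to a human reviewer:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Holds model outputs awaiting human sign-off (illustrative)."""
    pending: list = field(default_factory=list)

    def submit(self, output: str, reason: str) -> None:
        self.pending.append((output, reason))

def validate_output(output: str, confidence: float, queue: ReviewQueue,
                    threshold: float = 0.8,
                    banned_terms: tuple = ("password", "ssn")) -> bool:
    """Auto-approve only outputs that pass a policy check and meet a
    confidence threshold; route everything else to human review."""
    lowered = output.lower()
    if any(term in lowered for term in banned_terms):
        queue.submit(output, "policy: sensitive term detected")
        return False
    if confidence < threshold:
        queue.submit(output, f"low confidence ({confidence:.2f})")
        return False
    return True
```

In practice, the confidence signal and policy rules would come from the organization's own governance framework; the point is that the escalation path is explicit code, not an afterthought.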

Graphic showing the benefits of GenAI deployments.
Successful GenAI deployments should address a specific business function.

Barriers to GenAI adoption

Several roadblocks can slow AI adoption, including job loss fears, a lack of trust in AI outputs and pilots driven by technology rather than business needs. As companies transition from test-scale pilots to production environments, AI-enabled technologies are integrated into workflows and core business processes. At this stage, businesses face a distinct set of challenges: data quality and management gaps, misaligned use cases, workforce readiness issues, governance and compliance complexities, and the inherent unpredictability of AI systems.

Lack of strategic vision and clear use cases

In many industries, GenAI adoption represents a business transformation that requires clear strategies for people, process and risk -- without overpromising results.

Some businesses get entangled in a "use case trap" by focusing on isolated deployments without the strategic vision for a scalable, reusable architecture, warned Frederic Giron, vice president and senior research director at Forrester. Giron raised the issue in a blog post on barriers to AI adoption published after the company's APAC Summit in September 2025.

AI initiatives must address core business problems; enhance customer, employee and partner experiences; and improve the workflows and functionality businesses rely on for information-oriented tasks.

Weak data foundations and poor data management

Fragmented legacy data, poor data management and a shortage of high-quality data can stop AI initiatives in their tracks. Without reliable, high-quality data, businesses can't accurately measure the ROI of AI initiatives, making investment decisions riskier and adoption slower.

In its "C-Suite Outlook 2026: Uncertainty and Opportunity" survey, nonpartisan think tank The Conference Board reported that global leaders identified strengthening data foundations and improving the quality and quantity of data to measure AI ROI as the top priority in 2026. Among respondents, 48% of CEOs in North America concurred, compared with 25% each in Europe and Asia. CTOs (45%) and chief HR officers (88%) similarly emphasized the importance of ensuring sufficient, reliable data.

AI-ready data platforms and mature data management practices are key factors in successfully moving AI initiatives beyond the pilot stage.

GenAI fears and skepticism

While leadership tends to issue vague communications about how GenAI will affect human capital and alter existing roles, employees are on high alert regarding what AI really means for their jobs. AI systems require ongoing human oversight, which often increases employee workloads because outputs, such as code and content, must be continuously monitored for accuracy, hallucinations and potential bias. Disengagement can arise from change fatigue as processes and expectations continue to evolve without clear benefits.

Neglecting change management

Without organizational readiness, AI initiatives often stall. Businesses must prioritize change management from the start by capturing and redesigning workflows during ideation and proof-of-concept (PoC), assigning accountability for AI-enabled technologies, ensuring cross-team alignment and preparing employees for adoption. Successful implementation goes beyond adopting AI-driven technologies; it requires early attention to employee readiness, reinforced by clear leadership communication about process changes and tangible benefits.

A common problem is that the PoC will succeed technically, but it doesn't capture the essence of the work and how it's supposed to get done, Hackett Group's McNabb noted. "In some cases, it's a bit of an organ rejection -- the workforce is rejecting the solution," he explained. "That's usually a change management exercise. Maybe the right subject matter experts or domain experts weren't engaged with it."

Gaps in workforce readiness

Leaders need to ensure they have strategies in place to assess talent, redesign roles, and promote skills development and AI literacy.

Some AI initiatives get derailed by what Forrester's Giron called the "middle management bottleneck." Managers' concerns about accountability, job security and ROI often slow projects or bring them to a halt. Several organizations have made progress by directly addressing those fears. At consultancy KPMG, certification programs have equipped managers with AI skills, while telecommunications company Telstra has launched an internal AI academy with personalized learning paths -- efforts that, Giron said, have empowered managers and accelerated adoption.

Graphic listing GenAI deployment cost factors.
Numerous cost factors need to be accounted for in GenAI deployments.

Hidden costs of scaling AI

Some AI pilots focus on technical feasibility while overlooking the organizational realities of production deployments. What will this AI implementation cost over the long term? What if usage exceeds initial budgets? Businesses must consider contract obligations and plan for financial commitments they might not have accounted for during the pilot.

Hidden costs can include AI talent requirements, workforce retraining, and more data engineering and cleanup as businesses move from the small data sets of pilots to continuous data pipelines in production. Scalable compute (often GPUs), storage for training data and logs, and backup requirements can also drive up costs, particularly on cloud platforms.

Blind spots in AI data privacy and security controls

A key concern for many CIOs is how the data strategy intersects with regulatory and privacy requirements. How is sensitive data masked and filtered? Could confidential information be exposed in prompts or outputs?

The risk profile changes once AI pilots connect to live enterprise systems, such as HR, finance and CRM, and unresolved gaps in data masking, access controls and consent management can stall deployment. Without production-ready security controls and governance, pilots often fail to scale. As AI becomes more deeply embedded in core infrastructure, businesses need clear policies and technical safeguards covering bias mitigation, data privacy and responsible use.
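Data masking of the kind described above can start as something quite simple. The sketch below is purely illustrative -- the regex patterns and `mask_pii` helper are assumptions for demonstration, and production systems typically rely on dedicated PII-detection services rather than hand-rolled regexes -- but it shows the basic idea of replacing sensitive values with typed placeholders before text reaches an external model in a prompt:

```python
import re

# Illustrative patterns only; real deployments use vetted PII-detection
# tooling with far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace matched PII with typed placeholders such as [EMAIL]
    before the text is sent to an external model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

The typed placeholders preserve enough context for the model to produce useful output while keeping the underlying values out of prompts and logs.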

Lack of clear governance and oversight

Weak governance, not weak technology, often blocks the transition from AI pilots to production. Business leaders need to ensure safeguards are in place so that AI systems' behavior and decision-making remain responsible. Policies and operational controls must define how AI-enabled technologies should be used, who is accountable for their behavior, and how risk, misuse and bias are managed -- especially as AI expands beyond content generation into more autonomous actions.

This oversight includes guidelines for approved tools, acceptable and prohibited use cases, human-in-the-loop rules and escalation paths, model risk management and approval processes, and mechanisms for accountability and auditability if failures occur. Agentic AI can introduce broader operational risks beyond GenAI, so governance frameworks increasingly separate the two technologies.
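Guidelines like these can be encoded as policy-as-code so they are enforced automatically rather than documented and forgotten. The following is a hypothetical sketch -- the `POLICY` structure, tool names and use-case labels are invented for illustration -- of routing each request to deny, human review or allow based on approved tools and acceptable-use rules:

```python
# Hypothetical policy-as-code sketch: the tool names and use-case labels
# are illustrative, not drawn from any real governance framework.
POLICY = {
    "approved_tools": {"chat-assistant", "code-assistant"},
    "prohibited_uses": {"legal-advice", "medical-diagnosis"},
    "requires_human_review": {"customer-communication", "financial-report"},
}

def route_request(tool: str, use_case: str) -> str:
    """Return 'deny', 'review' or 'allow' for a tool/use-case pair,
    checking approval, prohibitions and human-in-the-loop rules in order."""
    if tool not in POLICY["approved_tools"]:
        return "deny"
    if use_case in POLICY["prohibited_uses"]:
        return "deny"
    if use_case in POLICY["requires_human_review"]:
        return "review"
    return "allow"
```

Because the policy is data, it can be versioned, audited and updated as the organization's governance framework evolves -- which supports the accountability and auditability mechanisms described above.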

Treating AI like normal software

Many businesses encounter problems when they fail to recognize that AI implementations differ from the traditional software lifecycle of build-test-deploy. GenAI requires different skills for technology and security teams because, unlike most software, AI is nondeterministic: The same input can produce different outputs each time, particularly as AI models are updated and retrained over time. This variability changes validation and quality assurance practices.

Technology teams must design testing, monitoring and governance frameworks that account for this unpredictability and ensure they have expertise in data engineering, model training and prompt engineering. Security teams need to manage risks such as data poisoning during model training, prompt injection and data privacy exposure.
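One practical consequence of nondeterminism is that exact-match test assertions stop working. A common alternative, sketched below with an invented `fake_model` standing in for a real LLM API call, is property-based checking: run the same prompt repeatedly and assert invariants that every acceptable output must satisfy, rather than one exact string:

```python
import json
import random

def fake_model(prompt: str) -> str:
    """Stand-in for a nondeterministic model call (hypothetical);
    real code would invoke an LLM API here."""
    items = random.sample(["alpha", "beta", "gamma", "delta"], k=3)
    return json.dumps({"summary": items})

def check_invariants(output: str) -> bool:
    """Validate properties every acceptable output must satisfy:
    parseable JSON, a 'summary' list and a bounded length."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(data.get("summary"), list) and len(data["summary"]) <= 5

# Run the same prompt many times; every output must satisfy the
# invariants even though the exact content varies between runs.
results = [check_invariants(fake_model("Summarize Q3 risks")) for _ in range(20)]
```

The same pattern extends to accuracy, hallucination and bias checks: define what "acceptable" means as code, then monitor the pass rate in production rather than expecting identical outputs.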

"When organizations introduce AI, who really knows how it will evolve, what its lifecycle will look like or how to manage it?" said Omdia's Beccue. "That only comes with experience. The companies I've seen do this well are those that have invested in the technology and committed to it for more than five years. They have made a long-term commitment to AI and said, 'We have to learn this.'"

Graphic listing 12 steps in a successful GenAI project.
Realistic goals, scalability and governance are essential in overcoming GenAI roadblocks.

Remedies to GenAI roadblocks

AI's true business value lies in tangible use cases and measurable ROI, including cost savings, revenue impact, productivity gains and process improvements. Yet, even with positive ROI reports, business leaders remain concerned about the human side of adoption.

Wharton's 2025 report "GenAI Fast Tracks into the Enterprise" found that 89% of business leaders see GenAI tools as augmenting work, while 43% worry that employees could lose skills proficiency without clear role definitions, coaching support and time to practice. These concerns underscore that realizing AI's value requires more than financial returns; it demands organizational readiness, structured governance and sustained workforce investment to scale responsibly.

To improve the odds of success, business leaders should give employees time to experiment with the technology, create incentives for adoption and establish metrics tied to AI use and business outcomes. Large companies, such as Accenture, Amazon, Google, Microsoft, Salesforce and Shopify, are beginning to track employees' use of GenAI-enabled code assistants and other tools. In some roles, including senior management, using these tools is no longer optional; it's increasingly tied to career progression and considered part of expected job performance.

I've seen countless examples over the past year where a PoC worked, but it was built around the needs of one or two people.
Kyle McNabb, vice president and principal, Hackett Group

Even with strong adoption and workforce engagement, the AI pilot programs that successfully scale vary, depending on how success is defined, whether through measurable ROI or broader business value. MIT's report "The GenAI Divide: The State of AI in Business 2025" found that only 5% of integrated AI pilots produced millions of dollars in value, with 95% failing to generate measurable profit-and-loss results. That finding has prompted some companies to avoid using the term "pilot" in earnings calls, as first reported by the Wall Street Journal.

Even with a successful PoC, McNabb said, businesses should pause before moving to production and ask: Do we truly understand the process and the work? "I've seen countless examples over the past year where a PoC worked," he explained, "but it was built around the needs of one or two people who thought 'this would help me.' There was no real context for the broader process."

Businesses often assume that if a tool works for a few individuals, it can scale. Later, they discover that's not how the work gets done. Instead of improving efficiency, the tool can create added overhead and headaches.

"The biggest remedy," McNabb advised, "is to step back -- even with a successful PoC -- and ask, 'Do we understand the full workflow? Where does this fit? And what impact will it have on what's happening upstream and downstream on the network?'"

Kathleen Richards is a freelance journalist and industry veteran. She's a former features editor for TechTarget's Information Security magazine.
