5 questions every CIO should ask before investing in AI

AI investment is accelerating, but many initiatives fail to scale. Five key questions help CIOs evaluate use cases, data readiness, long-term value and risk before committing resources.

For technology leaders, the biggest challenge with AI adoption is deciding where to invest. As vendors roll out new tools and boards push for faster deployments, CIOs often end up making AI investment decisions without clear visibility into long-term returns.

According to Gartner, global AI spending is projected to reach $2.5 trillion in 2026, underscoring the scale of capital flowing into AI initiatives. Yet even with this surge, Gartner found that at least 50% of generative AI projects are abandoned after proof of concept, often due to unclear business value, data readiness issues or rising costs. The result is wasted spend, fragmented systems and growing pressure on CIOs to demonstrate measurable impact from AI investments.

Given these risks, the CIOs who are most successful with AI are not the ones who rush to adopt tools first. Instead, they're the ones who establish a clear decision-making framework upfront, before vendor pitches even begin. To help build that framework, here are five strategic questions that should guide every CIO's AI investment decisions.

1. What problems is the business trying to solve?

The most successful AI initiatives start with a clearly defined business outcome, not a technology mandate. Many organizations flip this process, however, adopting AI first and then searching for a use case.

Brandon Sammut, chief people and AI transformation officer at Zapier, a software company that provides business and workflow automation, said organizations often make the mistake of starting with the technology rather than the business problem they're trying to solve. "When you lead with 'how do we use AI?' the result is impressive demos that nobody actually uses in production," he added.

Gabriela Cubeiro, senior vice president of product at 8am, a software company that provides integrated workflow, payment and management tools, shared a similar view, emphasizing that AI initiatives should begin with clear business intent rather than experimentation. Instead of focusing on what AI can do, she argued that leaders should define the outcome first. "The first question should be: What problem are we trying to solve?" she said.

CIOs should therefore push their teams to clearly define the operational or financial problem they're trying to address. For example, a statement like "improve customer service with AI" is too vague to guide investment decisions. A clearer objective, such as reducing average call resolution time by 20%, gives the organization a measurable target and a more realistic way to evaluate whether AI is the right option.

In some cases, simpler process changes or traditional automation might deliver the same outcome with less complexity and risk, reinforcing the need to consider whether AI is truly necessary. Defining the business problem upfront ensures AI investments are tied to real value, rather than experimentation for its own sake.

2. Is the data foundation ready to support AI?

AI systems are only as reliable as the data they're trained on, yet many organizations overestimate their data readiness. When data is incomplete, inconsistent or poorly governed, the outputs can be inaccurate, biased or difficult to explain.

Before moving forward with an AI initiative, CIOs should start with the fundamentals. They should question whether the relevant data is accessible across systems or trapped in silos, whether it is structured and labeled appropriately to support the intended use case and whether clear governance policies define how it can be used. Privacy requirements, regulatory obligations and data ownership should also be clearly understood.

Strong data governance is especially critical when AI influences business decisions. Sumit Johar, CIO of BlackLine, a cloud-based fintech company, explained that if an AI system is influencing decisions that affect revenue, compliance or customer trust, the data feeding it must be accurate, timely and auditable. "Without disciplined data management, AI simply amplifies noise," he added.

Beyond governance, organizations must also consider how AI tools connect to the broader technology environment. Integration challenges often expose additional data gaps. Zapier's Sammut noted that while building or purchasing an AI tool might seem straightforward, the real complexity lies in integrating it across existing enterprise systems -- something many teams underestimate until implementation is underway.

"Making AI work across your CRM, dashboards, support systems and data layer is the real challenge," he said. "Most teams don't plan for it until it's too late."

While organizations don't need perfect data to begin experimenting with AI, they do need an honest assessment of data quality and governance gaps. Many AI initiatives stall not because the models fail, but because the underlying data environment isn't strong enough to support them at scale.

3. Can the organization sustain this system long term?

AI isn't a one-and-done deployment. Models need to be monitored, retrained and adjusted as conditions change. Without that ongoing oversight, systems that perform well at first can gradually lose accuracy or relevance as real-world data diverges from the data they were trained on -- a phenomenon known as model drift.

CIOs need to think beyond launch day and consider whether the organization is equipped to support the system over time. Key aspects to consider include: Who will monitor model performance? Who retrains it when data patterns shift? What happens if a vendor relationship ends or the project's internal champion leaves?

Another important dimension of sustainability is architectural discipline. As AI tools proliferate, organizations risk fragmented deployments. BlackLine's Johar described this pattern as AI sprawl, where companies accumulate multiple AI tools without a clear long-term strategy.

"CIOs feel pressured to adopt new AI capabilities constantly, but they need to evaluate which of these tools truly fit from a long-term point of view," he said. Without intentional oversight, organizations can end up with overlapping investments, redundant tools and systems that are difficult to govern at scale.

Part of avoiding that outcome involves making deliberate build-versus-buy decisions. Johar emphasized that organizations should only develop AI systems internally when doing so strengthens a company's core competitive advantage. For everything else, he said, it often makes more sense to buy tools and keep them flexible so the organization can adapt as the AI landscape evolves.

This disciplined approach helps ensure AI initiatives remain adaptable rather than locked into rigid architectures. Organizations that fail to plan for ongoing stewardship often find themselves overly dependent on vendors or managing systems they no longer fully understand.

Long-term success depends not just on strong governance and performance monitoring, but also on clear ownership, flexibility and alignment with business priorities. Without that foundation, even well-designed AI systems can become difficult to sustain over time.

4. How will success be measured?

AI initiatives often struggle because success is defined too loosely. Goals such as "increase efficiency" or "drive innovation" might sound appealing, but they make it difficult to determine whether an investment is delivering results. Instead, CIOs should establish clear metrics and timelines before deployment. These might include reductions in operational costs, faster decision cycles, improvements in customer satisfaction or measurable productivity gains.

8am's Cubeiro emphasized that measuring value should go beyond financial returns alone. She noted that adoption and responsible use are critical indicators of early success, explaining that strong engagement with AI tools signals that teams understand how to apply them appropriately.

BlackLine's Johar also cautioned against relying solely on usage metrics, as high adoption doesn't necessarily translate into business impact. Instead, clear alignment with enterprise strategy is essential. "CIOs need to tie results directly to business priorities such as profitability, efficiency or customer experience," he said.

For example, if improving customer experience is a top goal, success might be reflected in stronger service-level agreements or improved customer satisfaction scores. If profitability is the priority, the value might appear in the form of cost reductions or more efficient operations.

It's just as important to think about how the value created by AI will be used. For instance, if automation frees up hundreds of employee hours each month, those hours must be redirected toward higher-value work, such as innovation, analysis or customer engagement, to generate meaningful ROI.

Clear metrics and regular review points help organizations track progress, adjust strategy and decide whether an initiative should scale, pivot or be retired.

5. What risks does AI introduce and who is accountable for them?

AI projects carry a different risk profile than most traditional IT initiatives, touching everything from regulatory compliance and operational reliability to algorithmic bias and reputational impact.

As AI regulations evolve, organizations might need to meet requirements around transparency, explainability, bias testing and documenting automated decision-making. CIOs should therefore involve legal, compliance and risk leaders early in the process rather than after deployment.

These risks are becoming increasingly visible in practice. 8am's Cubeiro pointed to growing concern about AI hallucinations, particularly in the legal field, where fabricated citations and unsupported claims have led to sanctions as well as reputational and financial damage. "We're seeing serious compliance and operational risks when AI generates factually incorrect or unsupported content," she said, underscoring the need for verification and governance controls before scaling deployments.

To manage these risks more effectively, BlackLine's Johar suggested structured oversight mechanisms. "Organizations can benefit from creating two governance councils," he explained. "One focused on risk, bringing together legal, security and privacy leaders, and another focused on transformation to evaluate whether new AI capabilities align with business goals or duplicate existing tools." Such oversight helps prevent fragmented initiatives and overlapping investments, especially as new AI tools enter the market rapidly.

Beyond structure, clear accountability is essential. If an AI system produces a harmful or inaccurate outcome, the organization must know who is responsible for oversight and escalation. Many enterprises address this through formal governance structures, such as cross-functional AI review boards or ethics committees.

Zapier's Sammut emphasized that governance must be embedded into AI programs from the start. "You can delegate some of the work to AI, but you can't delegate the accountability," he said.

Without defined ownership and oversight, even well-intentioned AI projects can create risks that are difficult to contain once systems are in production.

Kinza Yasar is a technical writer for Informa TechTarget's AI and Emerging Tech group and has a background in computer networking.
