How CIOs can beat AI challenges: A top researcher's view

CIOs are grappling with moving AI from the pilot stage to genuine implementation, and many are encountering organizational pitfalls that are stalling the delivery of real value.

The potential of AI transformation is often hindered by a poor understanding of what it actually requires, leaving some businesses stuck at roadblocks. CIOs usually know why an AI project is stalling; the gap in understanding tends to sit at the CEO and board level.

In this Q&A with TechTarget, Fern Halper, Ph.D. -- founder of the AI Foundations Group; vice president of research at TDWI; and author of Data Makes the World Go 'Round: The Data, Tech, and Trust Behind AI Success -- discusses how CIOs can overcome common organizational and readiness challenges to help drive AI success. 

For a taste of Data Makes the World Go 'Round: The Data, Tech, and Trust Behind AI Success, read an excerpt after the Q&A where Halper explains how CIOs and other executives can avoid getting caught in the common trap of AI vision versus execution.

Editor's note: This Q&A has been edited for clarity and conciseness. 

What is the most apparent issue causing CIOs headaches when it comes to AI implementation?

Fern Halper, Ph.D.: There are a few, which I can get into later, but one stands out, and it's a lack of organizational readiness. The issue is that while many CIOs understand that strong data foundations and infrastructure are essential for AI success, they're often under pressure from executives and boards to implement AI quickly.


That pressure has only intensified with the rise of generative AI tools, which appear easy to deploy. CEOs and boards witnessed the rise of generative AI and off-the-shelf consumer tools in 2024, and now they're expecting immediate, enterprise-scale implementation. That's creating real tension.

This tension is leading many AI initiatives to stall in pilot phases, leaving executives frustrated. Yet these executives often fail to realize that foundational work is required to scale AI successfully. This disconnect is leading to unrealistic expectations.

Some CEOs have acknowledged this, such as former Coca-Cola CEO James Quincey and former Walmart CEO Doug McMillon, both of whom have cited AI as a reason for their departures.

How else has a lack of organizational readiness caused expectation headaches for CIOs?

Halper: Before generative AI, organizations were gradually adopting machine learning. But with generative AI, many companies skipped the learning curve associated with traditional AI, such as data governance, model operationalization and infrastructure. Instead, pressure from the top saw many businesses jump straight into deploying applications without the necessary groundwork.

Now, many businesses are seeing their AI projects get stuck in what I call the 'pilot graveyard' and are reassessing and recognizing the need to build solid foundations before scaling further.

What are these foundations?

Halper: My research, which I write about in my book, revealed that there are five pillars that CIOs and executives need to focus on to achieve AI success: organizational readiness, data readiness, operational readiness, skills and tools readiness, and governance readiness.

These pillars are not independent -- they are deeply interconnected. If one is weak, it affects everything else. I have seen that AI will amplify and magnify whatever weaknesses or strengths already exist in your organization. For example, weak data foundations lead to poor outputs and hallucinations; inadequate governance increases regulatory and reputational risks; and poor operational processes prevent scaling beyond pilots.

Conversely, strong foundations enable AI to deliver significant value.

In short, AI doesn't fix problems -- it exposes them.

Can you go into more detail about the other pillars, starting with data readiness?

Halper: Data readiness really underpins everything. Organizations are quickly hitting a value ceiling when using off-the-shelf AI tools without integrating their own data. For example, building a customer-facing AI copilot requires a deep understanding of customer data, and only a solid data foundation can provide this. This is true regardless of company size. It may be simpler in a smaller organization, but the same foundation still needs to exist.

One area I think organizations are only just starting to grapple with is unstructured data. Generative and agentic AI rely on unstructured data far more than traditional AI did. However, organizations don't trust their unstructured data as much as they trust their structured data, because they've spent years building quality controls around the latter. New metrics will be needed, and that work is only just beginning.

How about operational readiness? Why is this where a lot of organizations seem to fall down?

Halper: It's not enough to build models. You must deploy them, integrate them, monitor them and manage them in production. That's where AI creates real business value, not in the lab.

I see so many organizations stuck in pilots because they haven't thought through their operational readiness. Have they documented and versioned their models? Do they have model repositories? Are they tracking model decay?

These are the things that determine whether AI moves from experimentation into production. And without the data foundations underneath, the pilots often don't even get that far.

What about skills and tools readiness? How should CIOs think about talent and AI literacy?

Halper: It's not just about hiring data scientists, which is where some CIOs get stuck. For agentic AI in particular, you need AI engineers and developers who can think in terms of systems to build applications -- people who understand how data flows through interconnected components, avoid duplication, and manage the whole rather than individual parts. Data engineers are also needed to manage pipelines and infrastructure.

There's also a growing need for machine learning ops skills, and CIOs should be aware of this. These skills cover the operational side of putting models into production, monitoring them, tracking decay, and explaining decisions.

Two years ago, most organizations weren't thinking about any of that. They were focused on building the model, not on what happens once it's deployed. With governance regulations now in place, that's no longer optional.

On the subject of governance and readiness, how can CIOs put their best foot forward?

Halper: Governance readiness is improving, but there's more to do. I am seeing businesses thinking more about responsible AI given the increased prominence of generative and agentic AI. That's encouraging. But thinking about it and acting on it are different things.

What I am seeing is organizations starting to use governance as an enabler rather than a blocker, and that's an important shift. High-quality, well-cataloged, trustworthy data builds the kind of foundation that allows you to move faster. The organizations that are getting this right are those where IT and business teams are collaborating early, because they understand the risks are too high not to.

I am seeing many CIOs and data leaders prioritizing responsible AI, with a significant portion of organizations actively addressing ethical considerations such as transparency, fairness and compliance. However, executive-level understanding can lag.

Unstructured data is a particular challenge. While organizations trust their structured data, generative and agentic AI rely heavily on unstructured sources, where quality and control are often weaker.

Taking into account these five pillars, what is your most important takeaway for CIOs?

Halper: Don't think of AI as a tool. Instead, treat it as a set of enterprise capabilities that must be built and integrated across the organization. This includes data, governance, skills, operations and organizational culture.

CIOs who understand and communicate this are far more likely to move beyond experimentation and achieve real, scalable business value from AI.

The following is an excerpt from Chapter Two, "The Leadership Challenge":

A compelling vision for AI can inspire teams, secure funding, and set a clear direction. But vision alone doesn't deliver outcomes; execution does. Too often, executives announce bold visions for AI only to watch them slow in the face of data silos, cultural resistance, or unclear priorities. This chapter examines why execution is harder than vision, explores high-profile failures, and draws out the strategic principles that enable leaders to translate aspiration into measurable results.

The AI Trap: Vision vs. Execution

"We see artificial intelligence as a key enabler of innovation and efficiency across our organization. By embracing AI, we aim to enhance customer experiences, streamline operations, and stay competitive in a rapidly evolving marketplace." Does this sound familiar? Have you heard a statement something like this or perhaps said it yourself? These are aspirational vision statements that may align with your organization's strategic goals, even if they don't discuss technical details. Vision involves setting a strategic direction and imagining what is possible. I've seen over the years that executive vision and support are critical for AI success.

In a recent 2025 TDWI survey, almost half of the respondents reported feeling pressure from their executives to implement AI.1 In many cases, organizations are responding to a top-down vision (which is often vague) that AI will drive productivity and innovation and that it will deliver competitive advantage. A vision can be a great thing. It can motivate employees if done properly, and it can set the tone and help to build the corporate culture.

Execution is an entirely different challenge. Execution requires translating that vision into concrete actions, integrating technologies, overcoming cultural resistance, managing data readiness, utilizing the technology correctly, and achieving measurable outcomes. While a lot of vision statements have been made by a lot of executives (and many since 2023 about AI and generative AI), most companies remain in the experimental phase of generative AI adoption, with few deploying solutions in production, and even fewer deploying solutions that utilize their own data. The reason is simple: execution is harder than vision. Execution requires sustained investment, cross-functional coordination, and collaboration. Modern AI systems are technically complex, relying on intricate algorithms and large, varied datasets, and deploying and integrating them into existing production environments presents further challenges.

Vision and execution go hand in hand. Execution without vision risks becoming misaligned with business objectives. Teams may work hard, but without clear strategic direction, their efforts may not solve the right problems. Conversely, vision without execution delivers little value; without a strategic roadmap and effective implementation, even the best ideas remain unrealized. While having a compelling AI vision is important, success ultimately depends on effective execution.

Examples of Failed AI Implementations

Unfortunately, even well-intentioned implementations can fail due to incorrect assumptions, biased data, or a lack of oversight. Poor execution doesn't just lead to poor outcomes, it can result in serious ethical, legal, and reputational consequences. The following real-world examples illustrate what can go wrong when execution fails to uphold the standards set by the original vision.

The COMPAS System

One of the first high-profile failures of modern artificial intelligence was the COMPAS system (Correctional Offender Management Profiling for Alternative Sanctions), which was used in the United States to predict the likelihood of a defendant committing another crime. Developed to assist in criminal justice decision-making, COMPAS was intended to provide objective risk assessments. However, it quickly became controversial due to concerns about its accuracy and fairness. In particular, critics pointed out flaws in the algorithm's design and the lack of transparency in how risk scores were calculated.

In 2016, ProPublica published an investigative report that revealed significant racial bias in the COMPAS system. Their research found that Black defendants were far more likely than White defendants to be incorrectly classified as high risk for recidivism. Conversely, White defendants were more likely than Black defendants to be incorrectly labeled as low risk. These findings drew widespread criticism and raised serious ethical questions about the use of AI in judicial settings. The study, titled How We Analyzed the COMPAS Recidivism Algorithm, became a landmark example of the potential for algorithmic bias and underscored the importance of transparency, accountability, and fairness in AI systems.2

The system is still used in some states because those who use it claim it is not the primary driver of decisions. Debate about its use continues.

Recruitment Systems

Another highly publicized AI failure occurred when Amazon first started to use AI to automate the recruitment of technical developers and engineers. The company used a decade's worth of resumes tied to successful job outcomes to train its model. Because Amazon receives an enormous volume of resumes, the idea was that the system could surface the best possible candidates. However, the machine learning engineers who built the system did not realize at the time that the model was discriminating against women because it was trained on male-dominated data. Why? Historically, people employed in these kinds of software developer and engineering jobs were male.

AI models can reflect and reinforce systemic biases in their training data. By 2015, Amazon realized it had a problem and tried to remove the bias, but it ultimately retired the models in 2017.3

Credit Card Limits

In August 2019, Apple launched its credit card, and by November of that same year, people noticed that it routinely gave smaller credit limits to women -- in some cases 10 to 20 times smaller. This caused an uproar on social media. Goldman Sachs, the card's issuer and manager, responded on social media that gender is not taken into account when determining creditworthiness. This raised more questions and generated negative publicity, especially for Goldman Sachs.

Apple and Goldman Sachs were investigated by New York's Department of Financial Services.

Ultimately, the report4 did not produce evidence of deliberate or disparate impact discrimination but indicated deficiencies in customer service and transparency. The incident highlighted the need for explainability and oversight in AI-driven financial services... 

Citations in this excerpt:

1. TDWI, unpublished survey results, 2025.

2. Larson, J., Mattu, S., Kirchner, L. and Angwin, J. (2016). How we analyzed the COMPAS recidivism algorithm. ProPublica, 23 May. Available at: www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm [Accessed 8 June 2025].

3. Dastin, J. (2018). Insight -- Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Available at: https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/ [Accessed 8 June 2025].

4. New York State Department of Financial Services. (2021). Report on Apple Card. Available at: https://www.dfs.ny.gov/system/files/documents/2021/03/rpt_202103_apple_card_investigation.pdf [Accessed 10 June 2025].

Read the rest of the chapter in the book. Chapter text excerpted with permission from the publisher, Wiley, from Data Makes the World Go 'Round: The Data, Tech, and Trust Behind AI Success by Fern Halper. Copyright © 2026 by John Wiley & Sons, Inc. All rights reserved.

Harriet Jamieson is a senior manager of custom content and writer for the IT Strategy team at TechTarget.

Fern Halper, Ph.D., is founder of the AI Foundations Group, and vice president of research at TDWI.

Dig Deeper on CIO strategy