
Accenture global health lead on scaling AI in healthcare with governance and intent

Responsible stewards of AI in healthcare prioritize measurable outcomes, robust governance and integration into existing workflows over speed of adoption, according to Accenture's Andy Truscott.

AI is transforming the healthcare landscape, promising cost efficiencies, streamlined workflows and improved patient outcomes. However, the path to successful AI adoption and maintenance is far from straightforward.  

In a recent interview with Healthtech Analytics, Andy Truscott, global health technology lead at Accenture, discussed the complexities of AI adoption in healthcare. Truscott, whose expertise includes IT organizational strategy and architecture planning, detailed how patients and providers perceive AI and the challenges health systems face in implementing and scaling AI tools. 

To Truscott, successful AI implementation in healthcare requires a thoughtful, measured approach grounded in governance and value, rather than succumbing to the race to adopt it as quickly as possible. 

This interview has been edited for clarity and brevity. 

HEALTHTECH ANALYTICS: It seems easy to get caught up in vendor rhetoric about how great AI is for healthcare, and in many ways, it is. But how does that match up to how clinicians and patients feel about AI in healthcare today? In your view, has AI implementation actually led to better patient care, increased trust and so on? 

ANDY TRUSCOTT: It's interesting. What is AI to a patient? Is it something tangible that I can touch? Is it something I feel? Is it something I directly experience? My experience of AI could be some kind of agentic experience -- it could be a digital human that I'm engaging with that triages me as I navigate the health system -- that's a bit more direct. How accepting are patients of that? Well, we find that patients tend to be quite accepting if they get better access to care and if they get a better experience of care.  

But then on the provider side -- the actual healthcare professionals, whether they're physicians, nurses or ancillary professionals -- how accepting are they? Well, acceptance there is much more varied right now.  

Now, let's be clear, we have workforce deficiencies inside healthcare. We don't have enough healthcare professionals. So, using agentic AI and other AI capabilities to help people do more in less time is a great idea, but as a clinician, I'm still responsible, aren't I? And one of the things that came out of our focus groups was that there is no case law, no precedent, for who's actually liable in the event of good or bad outcomes driven by agents. 

Without any kind of regulations or governance around that, not just in the U.S., but globally, the level of acceptance amongst clinical professionals, I think, is going to always be a little bit problematic because who takes accountability and responsibility for agents? So, I think patients and providers have very different views. Those are the different acceptance dimensions I'm seeing right now. But obviously, if I'm a vendor, I can solve all the world's ills with this technology, and that may or may not be true. 

AI adoption is undoubtedly increasing rapidly in healthcare. What key challenges have you seen health systems face when scaling AI tools? 

The barrier is not the availability of AI. It's organizational readiness to absorb it. The real execution gaps are workflow integration, not model performance. AI fails when it sits next to workflows instead of inside them. You can have the best mousetrap on the planet, but unless you've got the cheese, it really doesn't matter.  

Data fragmentation also remains a tax on AI value. Even with advanced models, fragmented data and inconsistent standards adoption can cap impact. 

There's also an issue of governance lagging behind deployment speed. Organizations can buy AI faster than they can define validation, monitoring, escalation and retirement processes, especially for agentic systems that evolve over time. Bear in mind that we can now use AI to support system construction itself: we can get solutions built to specification far quicker than the client business is ready to measure whether they're any good.  

I've seen an awful lot of solutions running around looking for problems. I think there has to be value before scale. When I look at leading organizations, they're explicitly proving value in contained, high-friction workflows. They establish clear KPIs -- time saved, errors reduced, dollars recovered -- and only once they've done that do they scale. And that's the opposite of a blanket rollout. It's a selective, evidence-driven expansion. 

With these challenges in mind, what are some actionable strategies that health systems can employ to ensure enterprise-wide AI readiness and track value? 

If I put myself in the CIO's seat, I should be shifting from pilots to platforms. When I look across organizations, there are myriad people and departments implementing AI as a point solution. You've got to consolidate them into fewer enterprise-grade AI platforms with shared governance, integration and combined metrics, so you're actually measuring success consistently.  

Define AI ownership and accountability. Assign business and clinical owners, not just IT sponsors, for every production AI capability. 

Invest in interoperability that AI can actually use -- FHIR APIs, data normalization. These are things you should have been doing for some time, but until the 21st Century Cures Act regulations came along, there really wasn't a big stick, and voluntary efforts could only go so far. Now, with AI, the level of workflow integration you achieve directly determines AI ROI.  
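To make the interoperability point concrete, here is a minimal Python sketch of the kind of data normalization Truscott describes: flattening a nested FHIR R4 Patient resource into a flat record a downstream AI pipeline could consume. The FHIR field names follow the standard; the flat output schema and helper name are hypothetical illustrations, not part of any particular product.

```python
def normalize_patient(resource: dict) -> dict:
    """Flatten a FHIR R4 Patient resource into a simple flat record.

    Takes the first entry in the resource's 'name' list and pulls out
    the fields an analytics or AI pipeline most commonly needs.
    """
    name = (resource.get("name") or [{}])[0]
    return {
        "id": resource.get("id"),
        "family_name": name.get("family"),
        "given_name": " ".join(name.get("given", [])),
        "birth_date": resource.get("birthDate"),
        "gender": resource.get("gender"),
    }


# Example FHIR Patient resource, as a FHIR API might return it.
patient = {
    "resourceType": "Patient",
    "id": "example-1",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-04-02",
    "gender": "female",
}

record = normalize_patient(patient)
print(record["family_name"])  # Doe
```

Doing this normalization once, at the platform layer, is cheaper than asking a model to re-derive structure from raw records on every call, which is the cost trap Truscott warns about below.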

And it's very easy to say, 'Well, we'll let AI work out the data.' But that's really costly, and you find that out at the end of six months when you get the bill for the compute you used -- it's really expensive to have AI parse all your narrative case histories and work out what the truth is for every patient. So, measure value relentlessly. You have to move from 'we've deployed AI' to 'hours saved, cost avoided, quality improved.' 

And that requires a whole operating model which people aren't necessarily geared up for right now. 
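The 'hours saved, cost avoided' framing can be reduced to simple arithmetic per workflow. The sketch below is a hypothetical illustration -- the workflow name, volumes and labor cost are invented for the example -- of tracking AI value in concrete units rather than deployment counts.

```python
from dataclasses import dataclass


@dataclass
class WorkflowKPI:
    """Per-workflow value metrics for a deployed AI capability."""
    name: str
    minutes_saved_per_case: float
    cases_per_month: int
    hourly_cost: float  # fully loaded labor cost, USD/hour

    def hours_saved(self) -> float:
        # Total clinician/staff hours saved per month.
        return self.minutes_saved_per_case * self.cases_per_month / 60

    def cost_avoided(self) -> float:
        # Monthly labor cost avoided, in dollars.
        return self.hours_saved() * self.hourly_cost


# Illustrative numbers: 12 minutes saved per case, 500 cases/month,
# at a $90/hour loaded labor cost.
kpi = WorkflowKPI("prior-auth drafting", 12.0, 500, 90.0)
print(kpi.hours_saved())   # 100.0 hours/month
print(kpi.cost_avoided())  # 9000.0 dollars/month
```

Reporting in these units is what lets an organization compare AI investments against any other capital spend, which is the operating-model shift Truscott says most organizations aren't yet geared up for.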

Considering the proliferation of AI tools on the market, all promising strong ROI, do you have any other words of wisdom for how healthcare organizations can cut through the noise and adopt AI in a thoughtful way that truly creates value?  

It's about a disciplined, intentional adoption tied to outcomes, risk and trust. AI is a capability, not a mandate. The question is no longer 'where can we use AI?' It should be, 'where does AI measurably improve a workflow we already care about?' 

And if that use case doesn't reduce friction, cost, risk or clinical burden, it shouldn't get deployed regardless of how advanced the model is.  

Governance is also crucial. As an industry, we often see governance as a set of constraints. We see it as a set of brakes. I don't see it that way. I think governance is an accelerator here. We don't govern AI to slow it down. We govern it so we can scale it safely. 

You have to get on board with that because without validation, monitoring and accountability, clinicians don't trust it, CISOs won't approve it and boards won't defend it. Governance is what turns AI from being a science project into an enterprise asset. 

So, the conversation shouldn't be about adopting AI everywhere. It's about deploying it deliberately where it demonstrably improves outcomes and can be governed with confidence. And the winners are not the ones who adopt AI fastest. They're the ones who adopt it wisely. 

Jill Hughes has covered health tech news since 2021.
