
Navigating the 'AI-powered' digital healthcare boom

AI-powered digital health tools are flooding the market, but providers must look beyond marketing hype and evaluate their vendors to select the right tools for their needs.

Once upon a time, it was rare to find digital health technology that claimed to be "AI-powered" or "AI-driven." However, times have changed. Today, virtually every digital health tool and program on the market boasts AI capabilities, so much so that differentiating between them has become a genuine challenge.

While the AI surge offers exciting new possibilities for digital healthcare, healthcare providers are increasingly overwhelmed by competing tools, each claiming more advanced AI features than the last. To navigate the complex world of AI-driven technology and get the best bang for their buck, digital health leaders and purchasers must learn to distinguish marketing jargon from true technological innovation.

Understanding the nuances of AI capabilities

According to Anand Rao, PhD, distinguished service professor of applied data science and AI at Carnegie Mellon University's Heinz College, it is essential to recognize that AI is not a new concept. Rao himself has been working with AI since the mid-1980s. However, there is a difference between what he termed "old" AI and "new" AI.

"I would say old school, basic AI reasoning, symbolic reasoning, various types of reasoning existed in the 70s, 80s, 90s, and so on, and now they are coming back," he said. "But when they're coming, they're coming back with the LLMs."

Thus, some forms of AI and machine learning, such as predictive models and agent-based modeling, have been used in healthcare for a long time, but advances in generative AI (genAI) promise a significant leap forward.

Sundar Subramanian, CEO of AI company Zyter/TruCare and a former head of strategy at PwC, noted that the excitement surrounding genAI is easy to understand.

"There's a reason for the excitement because in rules-based approaches, there are lots of combinations," he said. "And unless you can pre-code these rules, you can't get to solving different scenarios. But in generative approaches, especially multi-agent systems, it can autonomously decide."

These AI agents could "have a pretty significant impact on business" compared to pre-defined rules-based systems, he added.

Already, agentic AI is being integrated into healthcare, with use cases ranging from scheduling and appointment reminders to revenue cycle management.

This is part of what makes sifting through the marketing for "AI-powered" tools so challenging. The term AI is broad, encompassing both "old" and "new" AI, as well as solutions that combine the two.

"Is the solution an agent AI? Or some type of knowledge graph? Or is it just an LLM-type model? Or a multimodal model? Or is it a more traditional predictive model? And so on," Rao said. "That's sort of the spectrum of AI you have, and unfortunately, I think everyone calls everything AI-powered now."

Technically, any digital health tool utilizing any version of the above AI approaches could be considered "AI-powered." Thus, carefully assessing your organization's needs and selecting the AI solutions that could effectively address them is critical.

Strategies for evaluating and selecting AI-powered tools

The first crucial step when selecting AI-powered digital health tools is to determine the specific outcome you aim to achieve. According to Subramanian, too often, healthcare leaders start with the question, "Where can I plug AI?" That is, they begin with the technology and reverse engineer the use case. Instead, they need to first identify the problem, decide on the solution and then consider how AI can help support that solution.

"[Without this step,] you're going to automate a broken process, and I see 95% of the people go down the wrong path because of this reason," he said. "But if you truly started to rethink what outcome you're driving and how you then reimagine that process using AI, you get to startlingly different results."

This will also help health systems avoid jumping at the next shiny object. Rao emphasized that not every problem requires a genAI solution.

"Just because we've got LLMs, we don't have to ditch all the things that are currently working," he noted.

Once the problem and solution have been identified, health systems can begin the evaluation process. A critical aspect of this process is asking the right questions.

Rao recommends asking vendors about the underlying AI technology powering their tools. Health systems must understand which AI methods a tool uses, as well as whether the vendor has enhanced them with additional features.

In addition to the underlying technology, health systems must understand how the tool operates. Rao suggests asking questions like, "Can I use it straight out of the box?" "Have you developed a retrieval-augmented generation (RAG) framework with specific data on top of the LLM?" "What kind of proprietary data did you use to fine-tune it for my particular health system?"
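For readers unfamiliar with the term, the RAG pattern Rao references works roughly like this: the tool first retrieves relevant passages from the health system's own documents and then hands them to the LLM as context, rather than relying solely on the model's general training. The sketch below is a hypothetical illustration of that flow; the retrieve and call_llm functions are placeholders, not any vendor's actual API.

```python
# Minimal, illustrative sketch of the retrieval-augmented generation (RAG) pattern:
# retrieve the organization's own documents first, then pass them to the LLM as context.
# `call_llm` is a hypothetical stand-in for whichever model API a vendor actually uses.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (illustrative only)."""
    query_terms = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(query_terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for the underlying LLM call."""
    return f"[LLM response grounded in a prompt of {len(prompt)} characters]"

def answer_with_rag(query: str, documents: list[str]) -> str:
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

if __name__ == "__main__":
    docs = [
        "Prior-authorization requests are reviewed within 72 hours.",
        "Appointment reminders are sent 48 hours before each visit.",
    ]
    print(answer_with_rag("How quickly are prior-authorization requests reviewed?", docs))
```

The point of asking vendors about this layer is that the retrieval step, and the proprietary data feeding it, is often where the real differentiation lies, not in the base LLM itself.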

Subramanian echoed Rao, also underscoring the importance of requesting metrics and proof of performance from vendors. Some questions he recommends asking include: "What's the percent automation of the fallout beyond what's driven by the business rules that the vendor is achieving? How are AI agents coordinating with each other in conflict scenarios?"

"My big point is that people have to stop thinking about, 'Hey, how do I know I can embed AI in my processes?' And think about, 'How do I rethink the processes given what AI could do?' And I think very few companies are doing that," he said. "And so, when they don't ask those questions, they're having the wrong discussions with these vendors."

Another essential aspect of the evaluation process is looking beyond the marketing jargon touting AI capabilities.

Rao noted that most digital health companies have flashy websites with product features prominently highlighted, but it is essential to delve into more detailed explanations of the product. These include tutorials on how to use the tool, real-world case studies and whitepapers.

Continual assessment of AI-driven tools

Once an organization has selected and implemented its AI-based digital health tool, it must guard against risks associated with AI utilization.

Take model drift, for instance: the gradual decline in an AI model's performance over time, often because the data the model encounters in practice shifts away from the data it was trained on.

"Let's say the model's accuracy is 90%, right? So I've trained the model, and I've deployed it, and before deploying, I did a check, and it's great, it's 90%," Rao explained. "Now, as time goes by, let's say the next quarter, it is showing 88%, not a big deal. Another quarter goes by, and now it's 85%. Now, how far can it drift before you need to take action? And the answer is, it depends."

Healthcare organizations must set their own thresholds for acceptable drift and continually evaluate their models so they can intervene when performance degrades.
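In practice, a basic drift check of the kind Rao describes can be as simple as comparing each quarter's measured accuracy against the accuracy recorded at deployment and flagging the model once the gap exceeds the organization's chosen tolerance. The figures and threshold below are illustrative only, not a recommended standard.

```python
# Illustrative drift check following Rao's example: flag the model once its measured
# accuracy falls more than a locally chosen threshold below the accuracy recorded at
# deployment. All figures here are hypothetical.

BASELINE_ACCURACY = 0.90   # accuracy validated before deployment
DRIFT_THRESHOLD = 0.03     # each organization sets its own tolerance

quarterly_accuracy = {"Q1": 0.90, "Q2": 0.88, "Q3": 0.85}

for quarter, accuracy in quarterly_accuracy.items():
    drift = BASELINE_ACCURACY - accuracy
    if drift > DRIFT_THRESHOLD:
        print(f"{quarter}: accuracy {accuracy:.2f}, drift {drift:.2f} -> retrain or recalibrate")
    else:
        print(f"{quarter}: accuracy {accuracy:.2f}, drift {drift:.2f} -> within tolerance")
```

With these example numbers, the second quarter's two-point dip stays within tolerance while the third quarter's five-point drop triggers action, mirroring Rao's "it depends" answer: the threshold itself is the organization's decision.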

Similarly, Rao recommends conducting tests on AI models regularly to identify potential biases and inaccuracies.
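One common form such testing takes is comparing a model's accuracy across patient subgroups to spot performance gaps that may signal bias. The sketch below uses a handful of hypothetical predictions and labels purely to illustrate the idea.

```python
# Illustrative subgroup check: compare accuracy across patient groups to surface
# performance gaps that may indicate bias. Data and group labels are hypothetical.

from collections import defaultdict

records = [  # (patient_group, predicted_label, true_label)
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, predicted, actual in records:
    totals[group] += 1
    correct[group] += int(predicted == actual)

for group in totals:
    print(f"{group}: accuracy {correct[group] / totals[group]:.2f} over {totals[group]} cases")
```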

However, not all healthcare organizations have the resources or in-house AI expertise needed for these tasks. In addition to upskilling their own workforce, Subramanian recommends creating partnerships with advisory groups, universities and think tanks that can fill in the gaps.

As AI continues to be integrated into digital health tools, healthcare provider organizations face numerous decisions. But by understanding the various types of AI being utilized, assessing their organizations' needs and asking vendors the right questions, they can ensure that they are making effective and informed decisions.

Anuja Vaidya has covered the healthcare industry since 2012. She currently covers the virtual healthcare landscape, including telehealth, remote patient monitoring and digital therapeutics.  
