6 clinician training strategies for AI clinical decision support
Health system leaders share advice for creating consistent and effective clinician training programs to ensure responsible use of AI-based clinical decision support tools.
Amid the proliferation of health AI tools, it is up to healthcare systems to ensure their safe and effective adoption. Health system leaders must provide training and resources that ensure clinicians use these tools appropriately.
From generative AI (genAI) to predictive analytics, AI technology has become increasingly integrated into the clinician workflow. However, if clinicians do not receive adequate training, these tools can threaten patient safety and clinical outcomes.
Developing training programs requires leaders to combine long-standing best practices for introducing new technology into the clinical care setting with AI-specific training approaches, including addressing clinicians' concerns about AI-based clinical decision-making.
Here are six strategies for setting up effective training programs for AI clinical decision support:
Glean insights from pilots and low-risk deployments
Digital health pilots can provide a wealth of information about effective clinician training practices for AI-based clinical decision support technology.
At Mayo Clinic, early and iterative clinician involvement in technology pilots is vital to determining training protocols.
"Pilots are great opportunities for us to talk to our clinicians who are piloting the technology to say, if we were to roll this technology out more broadly, what type of training is best? Do we try to perhaps offer a quick worksheet, or do we need more hands-on training?" said Edwina Bhaskaran, chief clinical systems and informatics officer at Mayo Clinic, in an interview.
This early clinician input also helps the organization translate vendor materials into clinical practice concepts and match its training resources to clinician needs, especially as the technology is scaled systemwide.
NYU Langone Health also takes an early and iterative approach to clinician training. Paul Testa, M.D., the health system's chief health informatics officer, shared that before introducing any tool into the clinical environment, particularly an AI-enabled one, the health system runs it in the background first.
"We watch how it behaves in the background for months at a time," he said in an interview. "And then we leverage the expertise of different individuals, our decision support physicians, our physician informaticists, our health informaticists, and partner with our rapid A/B testing group, our nudge unit, and other capabilities that we have to look at the impact in the background."
Then, the health system runs alerts in real time to a subset of physicians. For example, if the tool is intended to support high-risk differential diagnoses in the emergency department, it is first tested with a small group of physician leaders who examine whether the alerts are accurate, timely and relevant, Testa said.
This process not only enables clinician leaders to ensure that the AI tool is safe and used responsibly, but it also provides months of data to inform clinician training. Understanding how the tool operates in the real-world clinical setting allows leaders to align training with workflow.
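In software terms, this background phase resembles a "shadow mode" deployment: the model scores live encounters, but its would-be alerts go to a log rather than to clinicians. The minimal sketch below illustrates the idea under stated assumptions; the Encounter record, the callable model interface and the 0.8 threshold are all hypothetical, not NYU Langone's actual system.

```python
import datetime
import json
from dataclasses import dataclass

ALERT_THRESHOLD = 0.8  # assumed risk cutoff, tuned during a pilot


@dataclass
class Encounter:
    encounter_id: str
    features: dict  # de-identified inputs the model scores


def log_shadow_alert(risk_model, encounter, log_path="shadow_alerts.jsonl"):
    """Score an encounter silently and record what the tool would have done.

    Nothing is surfaced to clinicians; months of these logs are what let
    informaticists review alert accuracy, timing and relevance before go-live.
    """
    risk = risk_model(encounter.features)  # assumed callable returning 0..1
    record = {
        "encounter_id": encounter.encounter_id,
        "scored_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "risk_score": risk,
        "would_alert": risk >= ALERT_THRESHOLD,
    }
    with open(log_path, "a") as fh:
        fh.write(json.dumps(record) + "\n")


# Example with a stand-in model; in practice this would be the vendor's tool.
log_shadow_alert(lambda feats: 0.91, Encounter("enc-001", {"lactate": 4.1}))
```

Because the log captures every would-be alert, the same data that proves the tool safe can later show trainers exactly where it will fire in the workflow.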
Identify the relevant users & physician champions
The changes that come with AI technology deployment can be overwhelming for clinicians. But having clinicians on the ground who are already using and championing these tools can help.
Colin Walsh, M.D., associate professor of biomedical informatics, medicine and psychiatry at Vanderbilt University Medical Center (VUMC), shared that clinician champions have been essential to the go-live process for AI-based clinical decision support tools.
"You put [the champions] right there when these things go live, so that if there are issues, people don't have to look around and struggle with who to ask. The person is sitting right there."
In some cases, the clinical decision support tool is hyper-specific to one unit or specialty, which means training needs to be targeted to the relevant users.
According to Testa, focusing training only on the clinicians who will be using the tool allows leaders to delve deeper into the 'why' behind the technology, which is essential for change management.
"We do really high-touch [training] with those specific clinicians about the explainability and the why and how those tools operate," he said.
Combine theoretical training with hands-on experience
Training for AI-based clinical decision support tools requires a mix of modalities, including learning modules and practical training. While learning modules are essential to provide a base-level understanding of the tool, hands-on experience is indispensable.
At VUMC, hands-on training is provided in simulated environments that closely resemble the real-world clinical setting; however, these environments do not use sensitive patient data, Walsh noted. In these simulated environments, clinicians can use the tool as they would in their clinical practice, getting a feel for the technology and how it will impact their workflow.
However, it is important to note that no matter how well-designed the simulated environment is, it cannot fully replicate the real-world setting.
"There is nothing like the real thing because simulation environments always include, essentially, we control the individual's attention better because they know why they're there," Walsh said. "They have an hour set aside. Their pager's not going off 10 times. And then once you get into the real environment, there's someone tapping on their shoulder, there's a person asking a question, they're trying to use the tool. And so, there's nothing quite like it, which is why the combination approach ends up being important."
Provide information on how the tool reasons and works
One of clinicians' primary concerns around AI-based tools in clinical care is transparency. Understanding how the AI models work, including the data they are trained on, can increase clinician trust and, thereby, tool adoption.
Testa noted that clinicians at NYU Langone Health were more likely to dismiss an AI tool's alert when they didn't understand how the AI came to its recommendation.
"They were just dismissed immediately, and clinicians went on using their own judgment," he said. "And we saw the dismissal rates actually drop, once we included in that first screen, the weights of the calculus of how that decision or how that nudge came about."
"The most impactful way to inform and encourage mastery of these tools is to be transparent at every turn [how] the models are reasoning," he continued.
Mitigate resistance to change
In today's technology-driven healthcare landscape, clinicians are continually asked to adopt new tools. To encourage engagement with the training that underpins adoption, health systems must create programs that empathize with clinicians' perspectives.
"It's important to understand the context of the pace of change that our clinicians are currently feeling, right?" Bhaskaran said. "And so, while we may be very focused on really innovative tools that are supporting clinical decisions, if you don't take a step back and appreciate the broader sense of the pace of change that individuals are facing, you kind of lose that portion of it."
Having empathy for that broader pace of change will help leaders create training programs that don't feel cumbersome, she added.
For instance, that empathetic lens led Mayo Clinic leaders to make training consistent rather than a series of one-off sessions. The health system has even launched a clinical systems communication newsletter that goes out on the same day every month, providing updates and resources.
Testa echoed the importance of consistent communication with clinicians, adding that AI office hours can help support clinicians as they adjust to changes in their workflow.
Additionally, letting clinicians experiment with AI models is a powerful driver of change. NYU Langone holds 'prompt-a-thons,' sessions where clinicians prompt new AI models with blinded patient data to see how the tools work.
Assess tool adoption to inform training
Assessing tool adoption is critical to ensure the efficacy of training programs.
According to Walsh, the gold standard for evaluation is using a mix of quantitative and qualitative measurements.
"Before we turn anything on, understanding what quantitative metrics we hope are better after go-live compared to before is really important and often not done carefully," he said. "Qualitative measures matter a heck of a lot, too, which is that perception. What do people think about the tooling? How does that work? And then balancing that with, well, what actually happens to things that we can measure?"
Once these metrics are identified, leaders can link the impact of training to the metrics, providing a pathway to assessing the training itself. Walsh noted that leaders could use the tool adoption and perception metrics to determine if a particular type of training was effective for a certain tool, or on the flip side, whether some training modules were unnecessary.
Bhaskaran also underscored the importance of these assessments, noting that Mayo Clinic examines tool adoption at the 90-day mark. Leaders track both quantitative metrics, like repeat uses of the tool by a single clinician, and qualitative metrics, like asking clinicians about their experience with the tool during leadership rounds. In addition, leaders examine themes in IT tickets submitted by clinicians.
"So, if we see that we continuously have a ticket that's coming through where users don't necessarily have the amount of proficiency that we're looking for, we try to develop ongoing training to support those things that might be emerging," she said.
NYU Langone Health similarly uses quantitative and qualitative metrics to analyze tool adoption and its impact, but Testa further highlighted the importance of doing these assessments across geographical locations. For health systems that span states and patient populations, assessing AI tool adoption and use can give leaders insight into training adjustments that may be needed for clinicians caring for vulnerable sub-populations.
Offer ongoing support & refreshers
AI is evolving at a rapid pace, and as a result, training programs cannot remain static. However, Bhaskaran noted that while ongoing training is important, it shouldn't create barriers to workflow.
"It is hard because I think the number one thing that we are all fighting for is attention and time," she said. "So, we have to make sure that any ongoing support and refresher training bears that in mind. It needs to be [provided] at the point at which that learner needs it and how they need it."
For instance, short, minute-long peer-to-peer videos have proved helpful at Mayo Clinic, particularly when clinicians need to be informed of a small update or new feature. The health system has also streamlined how and where clinicians find resources, so they don't have to search multiple files or websites for the information they need.
AI has the potential to streamline clinician workflows and enhance patient care; however, effective training will ultimately determine whether these tools fulfill their promise or create new barriers to clinical care.
Anuja Vaidya has covered the healthcare industry since 2012. She currently covers the virtual healthcare landscape, including telehealth, remote patient monitoring and digital therapeutics.