
Exploring industry efforts to guide health AI adoption, use

With little federal guidance on health AI, industry groups have taken it upon themselves to offer guidance and guardrails to support safe and responsible AI use in healthcare.

The rapid evolution of health AI technology is offering promising new pathways to enhance care delivery and mitigate administrative burden. However, health AI use also carries risk, given the tools' propensity for bias, among other challenges. With the Trump administration's hands-off approach to AI regulation, healthcare stakeholders are working together to create guardrails and guidance to ensure AI utilization does not negatively impact patient care.

These industry-led efforts have crystallized in recent months. They include plans for new health AI accreditations and resources to guide AI development and deployment. According to Shawn Griffin, MD, president and CEO of nonprofit accreditation organization URAC, the pace of health AI innovation is spurring the acute need for consensus-driven guidelines.

"AI is moving incredibly fast…And I'm concerned about the guardrails," he said. "I'm a huge fan of technology. I'm seeing all these places that are using AI in new and innovative ways, but honestly, the usage is running faster than the rules."

To provide those missing guardrails, URAC is planning to launch Health Care AI Accreditation in the third quarter of 2025. Similarly, the Joint Commission and the Coalition for Health AI (CHAI) recently announced a partnership centered on creating resources, including a certification program and AI playbooks, to ensure safe and responsible AI use across the spectrum of organizations.

LAUNCHING HEALTH AI-SPECIFIC ACCREDITATION

With health AI adoption skyrocketing, the need for independent evaluation is more urgent than ever, particularly given the vacuum at the federal level.

Griffin noted that the Biden administration was starting to create a framework for federal oversight over health AI. However, the Trump administration has withdrawn from those efforts, prompting URAC to step in.

"Then, when those were removed, I'm like, 'We want to get in there. We want to continue to protect patients.' And we think that independent oversight from someone who doesn't have conflicts of interest is incredibly important," he said.

Accreditation and certifications from third-party entities enable healthcare provider organizations to demonstrate the quality of care they provide and prove they meet certain operational and regulatory standards.

To develop the accreditation, URAC first reviewed the various principles, recommendations and guidelines released by prominent health agencies, like the World Health Organization, industry collaboratives and academic medical centers. Next, URAC convened an advisory group of about 30 individuals with wide-ranging legal, ethical, clinical and technology expertise.


Through meetings with the advisory group, URAC discovered a need for two accreditation pathways -- one for healthcare providers and another for AI developers. The former will assess AI utilization in clinical workflows, focusing on patient safety, data protection and bias mitigation. The second will focus on transparency, model governance, usability and consumer protection within AI technology development.

The advisory group is currently developing the accreditation standards for both groups. Griffin noted that for healthcare providers, URAC is careful to keep these standards specific enough to ensure safe and responsible AI adoption but broad enough to allow providers of different sizes and resources to adhere to them.

"We don't say, 'You have to have 27 members on your clinical oversight committee, and they need to represent 14 different areas,'" Griffin explained. "We say, 'What is your clinical oversight committee? What do they do? What do they oversee?' Show us the meeting minutes of their last five meetings and tell me who's on that committee and how they're qualified."

On the vendor side, URAC is more concerned with model quality. The accreditation will focus on factors like the data used to train the AI model, how the vendor is mitigating bias and what precautions are being taken to prevent challenges like AI model drift and hallucinations.

URAC plans to update the accreditation to keep pace with AI technology innovation. Once the accreditation has launched and the first few organizations have gone through the process, the accreditation firm will seek their feedback and adjust the protocols as needed to ensure a seamless experience for participating organizations.

CO-DEVELOPING FLEXIBLE AI PLAYBOOKS AND GUIDELINES

The speed of health AI's evolution is one of the most challenging aspects of governing the technology. The slew of up-and-coming health AI use cases necessitates flexible guidelines that can be adjusted in accordance with innovation.

"AI is dynamic and changes," said Brian Anderson, MD, CEO of CHAI. "There are concepts like drift and performance degradation that you need to mitigate and monitor. And many, if not all, of the health systems that we've talked to that have procured AI are now struggling with: How do we govern AI in a way that is financially sustainable? How do we,  particularly if we're lower-resource health systems,  adopt and use AI tools in a responsible way that doesn't put our patients at risk a year from now because we can't monitor that tool.?"

These are some critical factors CHAI and the Joint Commission will consider as they collaborate on a series of AI governance resources. The first step in the collaboration involves creating an initial guidance document to be released in the coming weeks.

The initial guidance is in the early stages of development. Anderson shared that it will focus on the importance of health system AI governance committees responsible for model selection and deployment. It will also center on the necessity of certain technical capabilities to monitor model performance and the need for strong vendor-health system AI partnerships.


The document will provide the basis for the certification program framework and a series of AI playbooks. For the latter, the organizations plan to interview a wide range of healthcare stakeholders, including Joint Commission committee members, AI governance committee members within health systems and staff at federally qualified health centers. The first AI governance playbook will focus on technical best practices and be released in the next three months.

To advance health equity, CHAI plans to create different versions of its playbooks for health systems with varying resources. For example, some versions of the playbooks will be suitable for very well-resourced health systems, while others will be more relevant to lesser-resourced health systems, Anderson noted.

The playbooks will also be continually updated. Once the initial draft versions are published, they will be made available for feedback from the healthcare community and revisited regularly.

"These documents will be updated frequently so that they'll be relevant and they will be speaking specifically to the use cases that are front and center to our health systems," Anderson said.

As the industry moves forward with efforts to provide consensus-based guidelines, resources and guardrails for health AI, Griffin underscored that all healthcare facilities are navigating uncharted waters together. But partnerships to create and share consensus-driven resources will benefit the industry as a whole, ensuring that health AI is safe and effective for all.

"Don't make every organization reinvent the wheel," said Griffin. "So, what are the best practices? What are the best practices for development? What are the best practices for implementation? Yes, those need to be shared."

Anuja Vaidya has covered the healthcare industry since 2012. She currently covers the virtual healthcare landscape, including telehealth, remote patient monitoring and digital therapeutics.
