Health systems are employing numerous AI governance strategies to ensure safe and equitable technology use, including setting up steering committees and creating frameworks.
Health systems are eager to take advantage of AI technology's myriad benefits, including efficient care delivery, reduced clinician burden and personalized care. But given the technology's novelty and known risks, provider organizations are also recognizing the need for comprehensive AI governance efforts.
Developing AI governance structures is challenging, especially without concrete guidance from the federal government. The healthcare industry has responded to this vacuum by banding together to create guidelines and guardrails for AI development and deployment. Alongside these external efforts, healthcare provider organizations are establishing internal policies to ensure safe and responsible AI use across care settings.
A recent survey revealed that 59% of 351 participating organizations have established a role or office tasked with AI governance. Additionally, 75% of respondents reported having established policies that detail the permitted and prohibited uses of AI technology. The survey was sponsored by Pacific AI, a company that helps organizations build and deploy legally compliant AI systems.
These internal AI governance efforts vary across organizations in accordance with their specific needs. Health system leaders from Mass General Brigham and the University of Arkansas for Medical Sciences (UAMS) shared their governance efforts with Healthtech Analytics, emphasizing the need for central governance committees and subgroups, standardized frameworks and continuous evaluations.
SETTING UP AI GOVERNANCE COMMITTEES
AI governance committees are vital to health system efforts to ensure safe and standardized AI use across organizations.
UAMS has a university-wide AI governance committee, which Joseph Sanford, M.D., chief clinical informatics officer, describes as a think tank. The committee includes educational, research and clinical leaders from across the organization.
"[AI] is not baked in yet to our normal processes, and I don't know anybody, myself included, that would claim to be routinely comfortable with the assessment of any implementation," said Sanford, who is also director of the UAMS Institute for Digital Health & Innovation. "So extra impact deserves extra scrutiny. And from a perspective of how this technology in particular affects all of our learners and patients and practitioners, a multitude of perspectives is useful."
There was a natural coalescing around who should sit on the central committee and its subgroups, he added. In addition to C-suite leaders, like Sanford, the CIO and the CTO, the committees include early adopters of AI technology who expressed interest in joining the governance efforts.
The broad AI governance committee meets quarterly, with additional ad hoc meetings as needed, while the subcommittees beneath it meet as frequently as once a week.
Mass General Brigham has a similar AI governance committee structure. Jane Moran, the health system's chief information and digital officer, described an AI steering committee that includes senior leaders from the digital, operations, quality and safety, compliance and finance teams, among others. This committee oversees AI use cases across the health system.
Numerous working groups, including standards and policy, health equity, and patient experience working groups, inform the steering committee's work. These groups conduct detailed evaluations of AI models and vendors to determine appropriate use cases.
Mass General Brigham's Digital Trust oversees the AI steering committee. The trust is a governing body in charge of all digital priorities. It includes two-thirds of the health system's senior management leaders, such as the CFO, COO, chief security officer and chief information and digital officer.
"We evaluate all of our AI use cases to align with our strategic priorities," Moran said. "We're continually refining our strategy for responsible use guidelines and internal policies to govern the application of artificial intelligence and manage risk."
ESTABLISHING AI GOVERNANCE FRAMEWORKS
Establishing frameworks to guide AI governance is critical to safe and responsible health AI use. These frameworks generally include guidelines for assessing AI models, selecting vendors and setting implementation protocols.
For instance, Mass General Brigham has an evaluation methodology framework to assess model performance.
"I would describe it as safely maximizing the potential of AI technologies," Moran said.
A dedicated evaluation group uses the framework to vet the metrics and data behind every AI model implemented in the health system. The methodology includes a small pilot, during which clinicians can use the model and provide feedback.
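Moran did not detail the framework's internal metrics, but one piece of such an evaluation, checking that a model performs consistently across patient subgroups, can be sketched in a few lines of Python. The record format, function name and 0.80 threshold below are hypothetical illustrations, not Mass General Brigham's actual methodology.

# Hypothetical sketch of a subgroup performance check an evaluation
# group might run before a pilot; names and thresholds are illustrative.
from collections import defaultdict

def sensitivity_by_group(records, min_sensitivity=0.80):
    """Compare model sensitivity across patient subgroups.

    Each record has 'group' (a patient cohort), 'label' (true outcome,
    0/1) and 'pred' (model prediction, 0/1).
    """
    tp = defaultdict(int)  # true positives per subgroup
    fn = defaultdict(int)  # false negatives per subgroup
    for r in records:
        if r["label"] == 1:
            if r["pred"] == 1:
                tp[r["group"]] += 1
            else:
                fn[r["group"]] += 1

    report = {}
    for group in tp.keys() | fn.keys():
        sens = tp[group] / (tp[group] + fn[group])
        # Subgroups falling below the floor get flagged for review.
        report[group] = {"sensitivity": round(sens, 3),
                        "flagged": sens < min_sensitivity}
    return report

# A model that misses more positives in one cohort gets flagged there.
sample = [
    {"group": "cohort_a", "label": 1, "pred": 1},
    {"group": "cohort_a", "label": 1, "pred": 1},
    {"group": "cohort_b", "label": 1, "pred": 0},
    {"group": "cohort_b", "label": 1, "pred": 1},
]
print(sensitivity_by_group(sample))

A check like this connects directly to the fairness criteria Moran describes below: a model can look accurate in aggregate while underperforming for a specific cohort, which is exactly what a subgroup breakdown surfaces.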
Additionally, the health system is creating frameworks to ensure ethical AI utilization.
"AI must be fair, legitimate, honest, impartial," Moran underscored. "It must be explainable and transparent. It must be secure and safe. It must be accountable."
The health system closely manages its AI models, particularly large language models (LLMs), requiring review processes and clinician oversight of AI output, she added. Advancing ethical AI use also means the health system must partner with vendors that develop ethical tools, such as tools that do not display bias and can be operated in an environmentally sustainable manner.
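As a concrete picture of that kind of oversight, the sketch below shows a minimal clinician-in-the-loop gate in which LLM-generated text cannot be released until a clinician has signed off. The class and field names are hypothetical and are not drawn from Mass General Brigham's systems.

# Hypothetical clinician-in-the-loop gate for LLM output.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftNote:
    """LLM-generated text awaiting clinician sign-off."""
    patient_id: str
    llm_text: str
    reviewed: bool = False            # set only by a clinician
    final_text: Optional[str] = None  # the clinician-approved version

def release(note: DraftNote) -> str:
    """Only reviewed, clinician-approved text leaves the queue."""
    if not note.reviewed or note.final_text is None:
        raise PermissionError("LLM output requires clinician review first")
    return note.final_text

note = DraftNote(patient_id="123", llm_text="Draft discharge summary ...")
# Calling release(note) here would raise: no clinician has signed off yet.
note.reviewed, note.final_text = True, "Edited and approved summary ..."
print(release(note))

The point of the pattern is that the unreviewed draft is a distinct object from the approved output, so nothing downstream can consume raw model text by accident.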
Vendor assessment is a cornerstone of UAMS' AI governance frameworks as well. Sanford explained that the framework includes questioning vendors about their data stewardship. UAMS asks for contractual guarantees that, if it uses a company's LLM, the health system's inputs will not be used to retrain the general model.
There is no specific subcommittee dedicated to vendor assessment at UAMS; instead, different divisions perform their own assessments using the overarching framework.
"This is largely because those assessments across vendors are not really apples-to-apples," Sanford said. "Comparing bias in chart abstraction and clinical interpretation from a decision support standpoint is a very different assessment than any kind of bias or reproducibility issue when it comes to billing and coding, even though, yes, it's all healthcare."
COMMUNICATING AI PROTOCOLS SYSTEMWIDE
Once AI tools, appropriate use cases and utilization protocols have been determined, health system AI governance leaders must relay them to frontline healthcare staff.
Sanford said UAMS AI policies are posted online and broadcast via email. The communications focus on the AI tools incorporated into the system and their approved uses. For example, the health system makes clear in these communications which AI-powered solutions can access protected health information (PHI) and which cannot.
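One lightweight way to encode that distinction is an approved-tools registry whose entries record whether each tool is cleared for PHI, defaulting to "no" for anything unvetted. The tool names and policy structure below are hypothetical illustrations, not UAMS' actual system.

# Hypothetical allowlist of approved AI tools and their PHI permissions.
APPROVED_AI_TOOLS = {
    "ambient_scribe": {"phi_allowed": True},   # runs inside the secure EHR
    "public_chatbot": {"phi_allowed": False},  # general-purpose, external
}

def can_send_phi(tool_id: str) -> bool:
    """Unknown tools default to 'no PHI', the safe failure mode."""
    policy = APPROVED_AI_TOOLS.get(tool_id)
    return bool(policy and policy["phi_allowed"])

assert can_send_phi("ambient_scribe") is True
assert can_send_phi("public_chatbot") is False
assert can_send_phi("unvetted_tool") is False  # not in the registry at all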
Further, UAMS encourages responsible AI use by making appropriate AI pathways frictionless. Sanford suggested that making it easy and seamless for staff to use health AI through secure, established channels helps deter misuse.
In addition to communicating about new AI tools and protocols, health systems must offer adequate training. Moran emphasized that clinicians may be nervous about taking on tools they do not yet feel comfortable using, making standardized training essential. The Pacific AI survey found that 65% of organizations report conducting annual employee training on the safe development and use of AI systems.
Just as important as the training is explaining why the health system is relying on AI in certain areas. Talking through clinicians' concerns, letting them voice the challenges of using AI on the front lines and contextualizing the benefits of any new tool are critical to gaining staff buy-in.
"You start to talk about them," Moran said. "Our senior executive team frequently does what I call digital rounds -- I'm going out and talking to clinicians about the benefits of some of these technologies."
FREQUENT REASSESSMENTS
Both Moran and Sanford underlined that AI governance efforts are not cut and dried. AI governance leaders must remain agile and shift strategy as the technology continues to evolve.
Moran noted that AI governance requires constant learning and re-evaluation to ensure that the technology's evolution does not outpace the guidelines for safe and equitable implementation.
Sanford echoed this, adding, "You want your policy and your regulatory environment to be adaptable because the second you write a policy and you get it passed through committee, it's outdated. Someone has come up with something new and incredible. And so, while you don't necessarily want to be on the very edge of that curve, you've got to have a philosophy of rapid assessment and adaptability."
AI innovation shows no signs of slowing down, and it is incumbent on health systems to select the right technology for their needs, assess and monitor its performance and align their internal processes to effectively govern health AI.
Anuja Vaidya has covered the healthcare industry since 2012. She currently covers the virtual healthcare landscape, including telehealth, remote patient monitoring and digital therapeutics.