
Health IT pros talk AI transparency in the patient experience
As AI permeates more patient experiences, healthcare experts say AI transparency will be key to ensuring patient trust.
To disclose or not to disclose -- that's the question. As more healthcare providers leverage artificial intelligence in various parts of the patient experience, they must answer questions about whether AI transparency will help build patient trust.
At present, no laws require healthcare providers to disclose AI use in patient interactions such as AI patient portal messaging or ambient documentation. That leaves healthcare leaders weighing the pros and cons of AI transparency during the patient encounter.
On the one hand, patient trust in AI is still subpar, and disclosing AI use could sow mistrust even when a provider remains the sole arbiter of clinical decision-making.
But in an era where patient trust is essential -- and potentially slipping away -- AI transparency might be the way to go. We talked with health IT experts who outlined how letting patients in on the key technologies used in their care helps build an overall culture of trust.
Disclosing AI in patient portal messages builds trust
Fully disclosing that AI is playing some role in the patient experience -- from patient portal messaging to clinical documentation to clinical decision-making -- is likely the path of least resistance, according to Bill Fera, M.D., a principal at Deloitte.
"If you declare upfront, then there's no ambiguity about who somebody is dealing with or what somebody is dealing with in terms of the messages themselves," he said in a phone interview.
Indeed, clearing up that ambiguity is essential to building patient trust in the technology, which research has shown is middling at best. In a February 2024 athenahealth/Dynata poll, patients said they could trust AI depending on how it's used, but 43% said they don't know exactly what it's being used for.
Other surveys show a bleaker outlook. A 2025 JAMA Network Open report showed that patients are leery of their doctors using AI for administrative, diagnostic or therapeutic purposes. A separate study in the same journal showed patients are doubtful of their health systems' ability to use AI responsibly.
In other words, patient trust in AI is not a given.
According to Fera, this means healthcare professionals need to do everything they can, including being completely transparent about AI use, to earn the trust of the healthcare consumer.
"I am fully on the side of 100% disclosure," he asserted. "Any interaction with a patient should start with disclosure. 'I am a generative AI agent,' or 'I am an artificial intelligence part of the team' -- however you want to frame it. There has to be, I believe, declaration of that from the very beginning as part of a trustworthy framework. Otherwise, I think we could start to see broad mistrust of the technology."
That said, there are still questions about the extent to which clinical interactions truly hinge on AI. With best practice calling for providers to review any AI-generated content before it reaches the patient, are these interactions really AI, or are they the doctor's work?
How much do AI bots truly pen patient portal messages?
Importantly, there's the question of how much AI actually authors patient portal messages.
According to Tom Gillette, chief information officer at Mount Sinai Medical Center, the answer is: not much. In fact, the medical center, which just launched a Spanish-language version of Epic Systems' Art patient portal AI, considers every AI patient portal message a draft.
"At the end of the day, it's a draft message," Gillette said in a previous interview. "The AI doesn't take away the decision-making of the clinician. It doesn't take away the voice and tone and personality of the physician. It provides a draft that is then edited by the doctor and sent."
According to Gillette, the medical center's patient portal AI simply provides a "starter response" that clinicians can choose to use, but clinicians must review every message. Nothing gets sent out automatically, and nothing goes out unseen, he stressed.
"There's really no disclosure per se, because at the end of the day, doctors are writing and sending their own note just as they would before," Gillette noted.
The difference is that providers now have some language to get them started on the message, helping to streamline the process. In many cases, physicians see the AI-drafted response and delete it entirely.
Fera said he's seen this happen, too.
"A lot of clinicians who I've talked to will say, 'by the time I'm done editing it, I might as well have just written it myself,'" Fera said. AI disclosure is less an emotional response or distrust of the tool, he indicated, but rather a belief that healthcare should be transparent about how it uses all technology.
Indeed, that push toward transparency is critical regardless of the AI use case. It's not just about disclosing an AI-drafted or AI-generated patient portal message. It's about ensuring patients understand health technology, including AI, and its proposed benefits.
For ambient AI, the use and impact are obvious
According to Chrissy Daniels, chief experience officer at Press Ganey, healthcare should strive for 100% trust.
That means being transparent about the technologies used during a clinical encounter, no matter how discreet they might be.
For example, ambient documentation or listening tools are pretty unobtrusive and seemingly have little impact on the patient experience. These tools run in the background using voice recognition technology to listen during a clinical encounter and summarize and document the visit.
But despite their seamless integration, ambient documentation tools perform complex functions. Providers shouldn't assume patients know how they work or why they are beneficial.
"If we are using AI, and we're having a lot of clients pilot ambient listening, being able to ask permission first and then clarify the value points is important. Not talking about it assumes we're all on the same playing field," in terms of understanding health IT, she said. "I also like this idea of being proactively trust-earning."
Healthcare providers should try telling patients that they are using ambient documentation tools that day and explaining that the system frees them from having to document notes in real time. Clinicians might also encourage patients to read the AI-generated summaries after the encounter.
Still, this is more an exercise in trust than anything else, Daniels acknowledged. The use of ambient documentation becomes pretty obvious when the provider isn't furiously typing throughout the entire encounter.
"We have a whole generation of patients who only know providers who are documenting as they speak," she pointed out. "It's pretty obvious to the patient that you're not documenting."
But as our experts have indicated, disclosure of AI use is not necessarily about the technology itself. It's about a proactive relationship with patients built on trust.
"Change in healthcare for a patient and inconsistency is not good when it's unexplained," Daniels said. "It's not hard to disclose. And what we've heard from providers is that the response is almost universally positive. And in some cases, there's the opportunity for education for those patients who have questions or concerns. It's actually great for rapport building. Healthcare is a space where being trustworthy is a great idea."
Sara Heath has covered news related to patient engagement and health equity since 2015.