How docs can manage patients consulting AI medical advice
Researchers recommend providers lead with empathy when patients bring in AI-generated medical advice and then move into shared decision-making.
What can providers do when a patient comes in with chatbot-generated advice that contradicts their professional guidance? For starters, they should lead with empathy, according to a recent article outlining communication skills for discussing medical guidance from large language models and AI.
The article, published in the journal The Laryngoscope, also advised healthcare professionals to use AI-generated medical advice as an opportunity for shared decision-making.
AI chatbots powered by large language models (LLMs) have become nearly ubiquitous in today's society.
In healthcare, specifically, patients have come to leverage these tools to ask health-related questions, including requests for medical advice about their own issues and concerns. According to one report from Zocdoc, a third of patients used tools like ChatGPT for healthcare advice once a week in 2025, while 1 in 10 used AI for healthcare advice every day.
While there is room to debate the merits of these habits, given that AI chatbots can offer outdated, inaccurate or biased advice, this latest report, completed by researchers at the University of California San Diego and the University of California, Irvine, looked at how providers can communicate with patients who've already used the bots.
"Answers generated by AI can sometimes be assertive in nature and thus can skew a patient's opinion, resulting in a discrepancy with the physician's evaluation and their diagnosis or recommended management plans," the researchers said.
In other words, AI chatbots can be wrong, but their messages are written so convincingly that patients can come to firmly believe their findings. This can be daunting for patients and providers alike, who need to discuss conflicting health messaging while fostering a healthy and respectful patient-provider relationship.
How providers can counter inaccurate AI chatbot messages
The researchers conducted a literature review assessing how and why patients might use an AI bot like ChatGPT, how often the bots are and aren't correct, and the efficacy of certain patient-provider communication strategies for mitigating discrepancies.
Notably, the researchers found that an "empathy first, knowledge second" approach would be key for meeting with patients who've consulted with Dr. ChatGPT. Providers should avoid admonishing or embarrassing their patients for turning to a chatbot and instead empathize with patient needs.
In many cases, patients use AI and LLMs to learn more about their health because they have a pressing need and growing concerns about it. For example, many patients use these tools to research emerging procedures or tests that could pin down undiagnosed symptoms.
Even if the guidance the chatbot provides isn't applicable (such as if the patient doesn't qualify for the surgery or the test is not necessary), providers should start by empathizing with the patient's medical needs and build rapport.
It is important to acknowledge, rather than dismiss, the patient's effort to seek out information about their condition. Recognizing why the patient wants answers validates their concerns about their health.
From there, providers should explain that there are often discrepancies between public-facing AI sources, such as ChatGPT, and the resources tailored for clinicians. Providers might even outline areas where the two types of sources have conflicted in the past, or where different medical professionals themselves might disagree.
That can open the door to a discussion about the limitations of LLMs and chatbots. For instance, these algorithms can't account for patients' health histories, the physical exam, the resources available at a typical clinic and safety considerations.
Finally, the provider should view the interaction as an opportunity for shared decision-making and explain the benefits of the practice to the patient.
Through their LLM searches and the provider's open-ended questioning, the patient has already brought their knowledge of their healthcare needs and preferences to the interaction. Clinicians, in turn, can contextualize the information provided by the LLM and outline the options that are realistically available, along with the pros and cons of each.
At the heart of these strategies is a human-centered experience.
Patients are looking for their providers to listen to them, take their needs seriously and see them as more than a diagnosis. By leading with empathy when patients bring in AI-generated health recommendations, providers can center their patients as humans first and open the door to more meaningful conversations.
Sara Heath has reported news related to patient engagement and health equity since 2015.