42% of patients using AI for health don't follow up with a doctor
Health IT experts stress that AI chatbots should not replace the expertise of a clinician, but KFF data shows 42% of patients aren't visiting a provider after asking AI about their physical health.
As healthcare AI chatbots continue to proliferate in the consumer space, new data from KFF reveals a startling trend: 42% of patients using AI chatbots to learn about their physical health aren't following up with a healthcare professional afterward.
This comes as healthcare experts assess how patient-facing AI chatbots will shape the consumer experience. While some providers say that chatbots -- such as ChatGPT, Gemini or Claude -- can help create more engaged patients, others fear that patients will view the bots as a replacement for medical care rather than an enhancement.
This latest KFF survey of 1,343 adults gives credence to those concerns.
AI chatbots connect patients to care
Overall, AI chatbots are not the primary source of health information for most patients. When patients have questions about their physical health, most are consulting with a doctor (76%). Only 29% are using AI to get medical advice.
These numbers are a little different when examining AI use for mental health advice. Getting mental health advice from an AI chatbot is still uncommon, but that's because getting mental healthcare from any source is uncommon. Notably, only 39% consult with a healthcare professional for mental health advice.
The issue at hand is not the use of AI chatbots. These technologies are helping some traditionally marginalized patients get access to healthcare -- especially mental healthcare -- they'd otherwise go without.
Although the most common reason an individual used AI instead of visiting a doctor was to get immediate answers about their health (65% say this was a major reason), the KFF data shows that the technology is addressing common care access barriers.
For example, about a fifth (19%) of adults said a major reason they used AI was healthcare affordability, while this was a minor reason for 27% of adults. AI also proved more accessible than healthcare providers, with 18% saying limited appointment availability was a major reason for using AI, compared to 26% who said it was a minor reason.
The KFF data also corroborated what many experts say is the promise of AI chatbots: creating more informed and engaged patients. For example, 28% of adults said a major reason they used AI was to get an explanation of their test results before they could meet with a doctor.
Issues arise when patients aren't following up with a healthcare professional.
Many patients using AI don't follow up with a provider
Most healthcare professionals say consumer use of healthcare AI can be a good thing. However, they also stress that the technology cannot replace a healthcare provider, meaning patients still need to seek care when they experience concerning symptoms.
According to the KFF survey, this isn't always happening.
Among adults using AI chatbots to learn about their physical health, 58% followed up with a provider and 42% did not. The inverse was true for those using AI for mental health -- 42% followed up with a provider and 58% did not.
It was more common for younger, low-income adults to say they did not follow up with a healthcare professional after using AI.
There is a risk that consumer-facing AI could create inequities. Younger or low-income patients leveraging AI chatbots because healthcare is otherwise unaffordable will be disadvantaged as long as structural health system change remains elusive.
Patient trust in AI reliability is questionable
The survey also raised the question of trust and patient privacy.
Although most adults who use AI chatbots for their physical or mental health are satisfied with them, 67% of adults overall aren't convinced the technology can give reliable information.
This shifts when looking just at adults who have actually used an AI chatbot for physical health information. Among this population, 69% have at least a fair amount of trust in chatbot reliability, compared to just 31% with little to no trust.
A similar trend emerges when looking at AI use for mental health.
Moreover, more than three-quarters (77%) of adults are at least somewhat concerned about privacy and security. Still, 41% of AI users said they've uploaded or shared test results or notes from a healthcare professional with a chatbot to get an explanation.
Consumer-facing AI chatbots are still very new, and little is known about how these tools will affect the patient–provider relationship. But as more patients leverage the technology, it will be important for healthcare professionals to continue building trusting relationships with patients and supporting an open dialogue about AI.
In doing so, providers can learn when and how patients use AI to manage their health and encourage them to follow up as concerning issues emerge.
Sara Heath has reported news related to patient engagement and health equity since 2015.