
90% of patients re-check AI chatbot health info with other sources

Patients using AI chatbots for health verify information with their healthcare providers, online resources and reputable academic research and databases.

New data from Merck Manuals offers some reassurance in the debate over patients' use of AI chatbots for healthcare. According to a survey of more than 2,000 patients, 9 in 10 who use AI take steps to verify the information chatbots provide.

This is good news, as healthcare experts worry about the impact patient-facing AI chatbots can have, particularly in terms of peddling medical misinformation or otherwise giving poor advice to users.

According to Sandy Falk, M.D., editor-in-chief of Merck Manuals, AI holds considerable promise for improving patient access to care and health information, helping fill long-standing gaps in patient engagement.

"Artificial intelligence is a powerful tool for accessing and organizing information across all sorts of tasks, and finding health information is no exception," Falk said in a press release about the survey. "The challenge is making sure the information is accurate and reliable and provided in the right context."

Overall, 62% of Americans have used AI tools to access medical information, the survey found.

A third of individuals use tools such as ChatGPT, Gemini or Siri to get information about a specific condition or disease. Another 29% use the tools to get information about symptoms they're experiencing, while 26% use AI chatbots for nutrition and lifestyle advice or to ask about medication side effects or dosage.

Patients are also sharing a lot about themselves with AI chatbots, the survey noted. For example, 54% of respondents have entered a list of symptoms to get a diagnosis from AI. Another 44% have entered health information, such as vital signs or medical history.

But even as patients engage with these technologies, many view AI chatbots skeptically, the survey showed.

Patient trust in AI chatbots is not a given

According to the survey, 88% of users trust the information AI chatbots provide, but not completely.

Importantly, 90% of those using AI chatbots take measures to check the legitimacy of the information the tools provide. For example, 41% will talk with a healthcare professional to clarify and vet the information AI provided, while others will cross-reference it with other online platforms, such as Google (39%).

Meanwhile, 37% will check the sources the AI cited in its response, mostly verifying source authority and accuracy. Finally, 32% said they check academic or medical databases for further research.

This trend is promising, as it signals to healthcare providers that patients are open to discussing best practices for using AI chatbots in medical care. While the technology can help connect the dots for patients facing care access barriers, patients should use it with caution, and it's up to providers to explain how.

For example, healthcare providers can ask their patients whether they use AI chatbots for their health and how they do so. That can open a discussion about checking the validity of AI-generated information and determining when a patient should consult a healthcare provider first.

Sara Heath has reported news related to patient engagement and health equity since 2015.
