More patients are using AI chatbots. Is this a patient safety risk?

With leading AI chatbots offering increasingly tailored healthcare experiences to patients, industry groups fear hazards to patient safety.

In the first month of 2026 alone, three of the tech world's biggest players have launched patient-facing AI chatbots, promising answers to healthcare's biggest patient access problems. But according to ECRI, an authority on patient safety, misuse of those chatbots could pose a significant threat to the patients who use them.

As part of its annual list of the biggest health technology hazards, ECRI stated that misuse of AI chatbots -- which can peddle false, misleading or biased information -- is the industry's biggest threat this year.

"Medicine is a fundamentally human endeavor. While chatbots are powerful tools, the algorithms cannot replace the expertise, education and experience of medical professionals," Marcus Schabacker, MD, PhD, president and chief executive officer of ECRI, said in a press release. "Realizing AI's promise while protecting people requires disciplined oversight, detailed guidelines and a clear-eyed understanding of AI’s limitations."

AI chatbots leverage large language models to support human-like conversations and "expert-sounding responses" to patient questions, ECRI explained. Systems like ChatGPT, Claude, Copilot, Gemini and Grok are among the most popular AI chatbots, but more healthcare-specific tools are also starting to come to market.

Although these chatbots can be helpful in easing patient care access problems and streamlining care coordination, there are numerous risks involved with using them.

For example, many of the tools are unregulated, and the general-purpose versions are not validated for healthcare use. AI chatbots can also peddle misinformation or misleading content and perpetuate bias in medicine.

"AI models reflect the knowledge and beliefs on which they are trained, biases and all," Schabacker explained. "If healthcare stakeholders are not careful, AI could further entrench the disparities that many have worked for decades to eliminate from health systems."

AI chatbots becoming more popular in healthcare

ECRI's warning about misuse of AI chatbots comes as these technologies experience unprecedented popularity.

Earlier this year, OpenAI, the company that created ChatGPT, released a report saying the chatbot fields nearly 2 million messages regarding healthcare each week. Worldwide, about a quarter of ChatGPT's 800 million regular users ask a healthcare-related question.

This high utilization rate is likely the result of a healthcare system ill-equipped to meet patient needs.

Recent data from Sacred Heart University showed that 38% of respondents face long appointment wait times, while another 24% say it's hard to book an appointment at all. About a third (32%) of respondents are frustrated with their insurance, and a quarter are worried about their healthcare finances.

Patients are turning to AI chatbots to fill in those gaps, with 41% saying they'd be willing to use AI for personalized health reminders, 39.6% saying they'd use it for automated appointment scheduling and 36.5% saying they'd use AI to help them read their test results.

Technology companies are meeting the moment on AI

With a significant market opening for healthcare-specific AI, technology companies are staking their claim.

Just days after releasing its utilization data, OpenAI announced the launch of ChatGPT Health, which lets patients connect their own health records to the chatbot to get personalized answers to their medical queries. OpenAI also outlined the tool's potential to help patients with care coordination and navigation.

Next, Anthropic released Claude for Healthcare, which includes a suite of both clinician- and patient-facing tools. Specifically, Anthropic said the platform can summarize a user's medical history, explain test results, detect patterns in fitness and health metrics and help prepare questions for medical appointments.

Critically, neither OpenAI nor Anthropic has specifically said its tool is HIPAA-compliant. Instead, Anthropic described Claude for Healthcare as "HIPAA-ready," and OpenAI outlined key safeguards it says will protect users' information. Those safeguards include not training models on ChatGPT Health interactions and not selling user data.

Most recently, Amazon One Medical threw its hat into the ring with its Health AI assistant, which the company said can use a patient's medical record to give personalized answers and recommendations. The Health AI assistant is unique in that it is HIPAA-compliant and automatically connects patients to clinical care if they describe urgent symptoms.

Technology companies such as OpenAI, Anthropic and Amazon are entering a new world of consumer-facing AI.

But the consequences of AI chatbots in healthcare are different from those in most other use cases. With patient privacy and patient safety on the line, technology companies and the providers who care for patients will need to understand how best to manage and guide patient use.

Sara Heath has reported news related to patient engagement and health equity since 2015.