Patients Leery of GenAI for Medical Misinformation Potential

Patients know GenAI will be part of the healthcare encounter someday, but they worry the tools could perpetuate medical misinformation.

Four in 10 adults say generative AI is ready for the healthcare big leagues, but most still harbor significant concerns about the accuracy and sourcing of the medical information that powers AI, according to survey data from Wolters Kluwer Health.

The survey of 1,000 adult healthcare consumers showed that patients need transparency about generative AI as it’s integrated into healthcare.

“As the healthcare community begins implementing GenAI applications, they must first understand and address the concerns Americans have about it being used in their care,” Greg Samios, president and CEO of Clinical Effectiveness at Wolters Kluwer Health, stated publicly. “It will take responsible testing as well as understanding the importance of using the most current, highly vetted content developed by real medical experts to build acceptance of this new technology in clinical settings.”

The use of GenAI, technology that generates new content based on the material it's trained on, is an inevitability in healthcare, patient respondents indicated, and many think the tech is ready for the industry.

Around half (54 percent) said GenAI is ready to support annual screenings and exams, while 45 percent see a place for it in cancer screening, 43 percent see a role for GenAI in the diagnosis of diseases, and 40 percent see utility in pain management.

Fewer think GenAI is ready to support treatment decisions (36 percent), mental health support (31 percent), and surgery (25 percent).

But although they understand GenAI will likely play a role in healthcare, patients still have their concerns, mainly in terms of trust.

Overall, four in five respondents said they’d be concerned to learn that GenAI is being used in their healthcare, and 49 percent said they’re worried GenAI could produce false medical information.

Some 86 percent said a problem with GenAI is not knowing where the information it was trained on came from or how it was validated. Another 82 percent pointed out that it could be problematic to train GenAI on internet searches that have no filtering or vetting.

Certain assurances could improve consumer confidence in the technology. More than four in five respondents (86 percent) said they would want to know that the information GenAI was trained on came from a reliable medical source, and 81 percent said they'd be more comfortable using a tool developed by a reputable or experienced health IT company.

In fact, the number of respondents expressing concern about GenAI use in healthcare dropped to three in five when adding the caveat about a reputable health IT company.

Despite their concerns (44 percent expressed concern, and 27 percent said they are nervous about the use of GenAI), patients are keeping a somewhat open mind. Around a fifth of respondents said they're excited about the prospect of GenAI in healthcare, and a third said they're curious about the technology.

Others, again, indicated that the use of GenAI in healthcare is perhaps inevitable. Around a fifth said they think it'll be used in healthcare or patient-provider interactions within the next two years, and 34 percent said they think it'll be in use within the next three to five years. Very few respondents (5 percent) said they don't think GenAI will be used at all.

Healthcare organizations and health IT vendors can make GenAI more acceptable to patients and consumers by building trust in the tools. Ensuring the data GenAI is trained on is medically accurate, and tapping a reputable health IT vendor for the technology, will be key, Wolters Kluwer Health concluded.
