Pros and cons of conversational AI in healthcare

Conversational AI platforms have well-documented drawbacks, but if they are regulated and used correctly, they can benefit industries such as healthcare.

The abilities of large language model applications such as ChatGPT continue to make headlines. Healthcare is one area that could successfully harness that functionality.

Everything from term-paper writing to the creation of legal briefs can benefit from AI chatbot applications. Unfortunately, these chatbots are not quite ready for many of the tasks they are given. Term papers ChatGPT writes can get failing grades for poor construction, reasoning and writing. Legal briefs have been rejected due to fabricated precedents and citations.

However, the potential of AI chatbots is clear, and the pace of development means that despite challenges, such tools will assist many industries -- especially those that involve conversations and sifting through conversations for vital information. Conversational AI in healthcare is no exception.

Benefits of conversational AI in healthcare

Healthcare is highly digitized already. Doctors' offices and hospitals have specialized instruments, including digital thermometers, MRI machines and hematology analyzers. However, instruments and their readings never tell the whole story. They can provide mass quantities of data, but some information comes only from conversations with patients and caregivers. That's where conversational AI can benefit healthcare, supplementing traditional interactions with patients. Here are a few possibilities:

Inquiring about and examining medical histories. Conversational AI can be, or will soon be, trained to take medical histories from patients, ask them about symptoms and concerns, and then record, transcribe and summarize the results for doctors to review.

Creating transcripts of conversations. AI chatbot technologies will also be able to supplement other technologies, such as electronic medical records, in verbally intensive medical situations -- for example, creating transcripts during an examination or procedure. By keeping these records, the technology can help keep patient visits on schedule and smooth the handoff of patients from one doctor or nurse to another at the end of shifts.

Automating repetitive tasks. While a different branch of AI helps doctors provide diagnoses and identify treatment options, conversational AI shows promise in automation as well. These tools may be able to handle much of the rote work of giving instructions to patients, a task that currently falls to doctors, nurses and even pharmacists. They could not only repeat but also rephrase instructions for patients as needed, without running out of patience the way a human might.
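As a rough illustration of the history-taking workflow described above, the following Python sketch simulates a scripted intake conversation and a summarization step. The questions, field names and summary format are all hypothetical; in a real system, the scripted answers would be replaced by an LLM-driven dialogue loop with speech transcription, and the output would feed into the patient's record for a clinician to review.

```python
# Illustrative sketch of a conversational intake flow: ask scripted
# questions, record answers, and summarize for the clinician.
# All questions and field names are hypothetical examples.

INTAKE_QUESTIONS = [
    ("chief_complaint", "What brings you in today?"),
    ("symptom_duration", "How long have you had these symptoms?"),
    ("allergies", "Do you have any known drug allergies?"),
]

def run_intake(answer_fn):
    """Ask each question and record the patient's answer.

    answer_fn stands in for the patient; a real system would use an
    LLM-driven dialogue with transcription instead of a lookup."""
    record = {}
    for field, question in INTAKE_QUESTIONS:
        record[field] = answer_fn(question)
    return record

def summarize(record):
    """Produce a one-line chart summary for the clinician to review."""
    return (f"Complaint: {record['chief_complaint']}; "
            f"duration: {record['symptom_duration']}; "
            f"allergies: {record['allergies']}")

if __name__ == "__main__":
    # Scripted patient answers simulate the conversation.
    scripted = {
        "What brings you in today?": "persistent cough",
        "How long have you had these symptoms?": "two weeks",
        "Do you have any known drug allergies?": "penicillin",
    }
    print(summarize(run_intake(scripted.get)))
```

The key design point is the separation between the dialogue (gathering answers) and the summary (condensing them for the doctor), which is where LLM summarization would plug in.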

Challenges of AI in healthcare

The challenges of using conversational AI tools in healthcare are significant and must be addressed before widespread use is acceptable.

Hallucination. ChatGPT and other large language models can produce blatantly untrue answers and outputs. More dangerously in medical contexts, they can also produce subtly untrue ones. If a tool claims a patient is not allergic to penicillin when the opposite is true, the result could be deadly.

Cost. In many cases, conversational AI tools and the resources needed to operate them, such as data centers, can be cost prohibitive. Medically capable AI chatbots will likely be expensive to acquire and maintain.

Privacy. Malicious actors can compromise conversational AI tools and expose patients' private data or personally identifiable information. This data includes both patients' answers to an AI tool's questions and the questions patients ask the tool. For example, if a patient asks an office AI chatbot to go over an aspect of their health records, those records become vulnerable to a data-extraction attack, putting the hospital or pharmacy at risk of a lawsuit or fine.

Risk and uncertainty. There is currently no legal or regulatory framework that would justify AI tools taking on significant, autonomous roles in healthcare. There is also a lack of standard insurance mechanisms for mitigating the institutional risks that such systems may pose to the companies using them.

The industry should expect conversational AI to play an increasing role in healthcare, but regulations and governance policies must be established to address these challenges.
