Chatbots were once seen as the future of customer interaction. Those who bought into the earliest and most bullish prognostications expected that, by late 2022, chatbots would handle a wide range of customer-facing functions.

Forget about simple administrative tasks, such as directing users to an appropriate nearby physician who accepts their insurance. Chatbots were going to provide actual diagnoses based on information entered into a little box in the bottom-right-hand corner of the user’s screen.

To say that healthcare chatbots haven’t yet realized that potential is an understatement. Think about all the times you’ve shared a specific set of symptoms with one medical-minded chatbot or another and the wide range of responses it has spit back at you. Your thudding headache, it appears, could be indicative of anything from simple dehydration to terminal brain cancer.

“We have a cycle of inflated expectations,” says Yan Fossat, VP of applied sciences, Klick Labs. “We think that this will change everything, and then the reality is never actually as good as the tech demo.”

Fossat is a believer in chatbots; he envisions a pivotal role for them in any number of health-related engagements. At the same time, he is realistic about what they can and can’t do — both now and, potentially, years into the future.

The issue, to put it simply: The available technology isn’t there yet, and that’s before one considers the unkempt mess of data fueling and informing even the most functional chatbots.


Combine this with high consumer expectations created by chatbot interactions in other verticals, and you’ve got a situation in which individuals seeking help from health-and-wellness chatbots come away disappointed — or, in the worst-case scenario, untreated.

Chatbots generally have two modes of operation. There are pre-programmed question-and-answer formats and there are more involved ones that utilize natural language processing. Both options present their own challenges.

For the Q&As, the organization deploying the chatbot can have total control over responses, but those responses might not satisfy the needs of the user. It’s a robotic back-and-forth that de-personalizes interactions and often generates more questions than helpful or “correct” answers.

As for natural language processing, experts warn that programs employing the technology need to be well-calibrated, lest they dispense misleading information or employ inappropriate language. In either instance, concerns over liability have health-adjacent organizations treading lightly.
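To make the distinction concrete, here is a minimal sketch of both modes in Python. Everything in it (the scripted flow, the intents, the toy word-overlap scorer standing in for a trained NLP model) is invented for illustration, not drawn from any actual product.

```python
# Illustrative sketch only: the prompts, keywords and intents below are
# invented for this example, not drawn from any real medical chatbot.

# Mode 1: pre-programmed Q&A. Every prompt and transition is fixed in
# advance, so the deploying organization fully controls responses, but
# anything off-script hits a canned fallback.
SCRIPTED_FLOW = {
    "start": ("Do you need to find a doctor or book an appointment?",
              {"doctor": "find_doctor", "appointment": "book"}),
    "find_doctor": ("What is your ZIP code?", {}),
    "book": ("Which day works best for you?", {}),
}

def scripted_reply(state, user_text):
    prompt, transitions = SCRIPTED_FLOW[state]
    for keyword, next_state in transitions.items():
        if keyword in user_text.lower():
            return next_state, SCRIPTED_FLOW[next_state][0]
    return state, "Sorry, I didn't understand that. " + prompt

# Mode 2: natural language processing. Intent is inferred rather than
# keyword-matched. A real system would use a trained model; this toy
# scorer just counts overlapping words, but the calibration idea is the
# same: below a confidence threshold, defer rather than risk a
# misleading answer.
INTENT_EXAMPLES = {
    "find_doctor": "find me a doctor physician near me who takes my insurance",
    "symptoms": "i have a headache fever cough and feel sick",
}
RESPONSES = {
    "find_doctor": "I can help locate a nearby physician. What's your ZIP code?",
    "symptoms": "I can list possible causes, but only a clinician can diagnose you.",
}

def nlp_reply(user_text, threshold=0.3):
    words = set(user_text.lower().split())
    scores = {
        intent: len(words & set(examples.split())) / max(len(words), 1)
        for intent, examples in INTENT_EXAMPLES.items()
    }
    intent, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return "I'm not sure I understood. Could you rephrase that?"
    return RESPONSES[intent]

print(scripted_reply("start", "I need an appointment")[1])
print(nlp_reply("my head hurts and i have a fever"))
```

Even in this toy version, the trade-off is visible: the scripted mode can never say anything wrong but can only answer what it anticipated, while the inferential mode answers freely and therefore has to be calibrated.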

Despite these issues, the potential for effective and efficient use of chatbots continues to tantalize stakeholders across the health-and-wellness spectrum. For users, 24/7 support and quick responses to common questions clearly beat waiting on hold for hours on end. For providers, anything that eases the burden of administrative tasks or streamlines workflows is welcomed with open arms.

The perception of chatbots as unresponsive or just plain incompetent can be traced to those heightened expectations, absolutely, but don’t discount the inherent issues with language. The cadence with which humans exchange information is natural; the cadence of interactions with AI-infused chatbots is not. The lack of intuitiveness usually becomes evident within a few back-and-forth exchanges.

“We can comprehend intelligence without language and intelligence with language, but language without intelligence — which is what chatbots have — is really weird,” explains Fossat.

Indeed, a lack of significant language-backed intelligence, especially when paired with the absence of empathy and conversational tone that one expects when talking about personal matters, can lead to a subpar experience. That’s why Ada Health chief client officer Vanessa Lemarié, whose company operates one of the few well-regarded health-specific chatbots, believes chatbots won’t truly thrive until they find a way to more naturally incorporate language and cultural context.

“We take great care in localizing Ada, not only in language but also in context and cultural comprehension, so that we’re able to leverage the information that is being shared in a meaningful manner and give users their best possible insight into their health,” she explains.

At the same time, a great part of chatbots’ appeal is their freedom from the bias inherent in human interactions. For that reason alone, people may be more likely to seek out a chatbot to “discuss” private matters than they would a healthcare professional.

Ultimately, it might boil down to communication: Chatbot disciples need to make sure the people using them understand that the medical information they convey is less a definitive diagnosis than a broad guideline.

“A chatbot can only speak from the present — unless, of course, it’s programmed to go into an electronic record and pull out all of the prior visits and the diagnosis and prescription medicine,” notes Natalie Schibell, VP, research director in Forrester’s healthcare vertical. “But we are a long way off from that right now.”
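For what it’s worth, the plumbing Schibell describes does have an emerging standard: FHIR, the REST API that most modern electronic health record systems expose. The sketch below, with a hypothetical server URL and patient ID, shows roughly what pulling prior diagnoses and prescriptions could look like; a real integration would also involve OAuth authorization and patient consent.

```python
# A rough sketch of EHR integration via FHIR, the standard REST API for
# health records. The base URL and patient ID are hypothetical, and a
# real deployment would also require authentication and consent handling.
import requests

FHIR_BASE = "https://ehr.example.com/fhir"  # hypothetical FHIR server
PATIENT_ID = "12345"                        # hypothetical patient

def fetch(resource_type):
    """Return all resources of one type for the patient."""
    resp = requests.get(
        f"{FHIR_BASE}/{resource_type}",
        params={"patient": PATIENT_ID},
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    # FHIR search results arrive as a Bundle of entries.
    return [entry["resource"] for entry in resp.json().get("entry", [])]

prior_diagnoses = fetch("Condition")        # past and current diagnoses
prescriptions = fetch("MedicationRequest")  # prescription history
```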

Without the crucial background info that a chatbot may not know to ask for, diagnoses could veer in extreme directions. Even the most educated users might take a chatbot diagnosis as fact, rather than as a directional guide toward a visit with the right healthcare practitioner.

Nobody is downplaying accuracy, of course. The issue? Given that the databases upon which chatbots feed are continuously being updated — at least in theory — accuracy isn’t a fixed target. The industry, to its credit, hasn’t glossed over this in favor of plaudits about convenience.

“It’s tremendously important that chatbots are indeed medically accurate and helpful at the same time,” Lemarié says.

Which isn’t to say that the convenience component of broader chatbot adoption should be deemphasized. Since February 2020, one in five healthcare professionals in the U.S. has quit their job. As one might expect, this scarcity of workers has led to longer wait times for medical appointments. As of early August, people were waiting an average of 20 days for non-specialist appointments and an average of 40 days for specialist ones.

Thus there is a powerful argument to be made that the broader healthcare business needs chatbots.

“The shortage of healthcare workers will only increase and the system is overloaded,” Lemarié says. “Health assessment platforms and chatbot interfaces on those platforms have an important role to play to help people navigate to next best care when they are sick, but also to keep people from consuming healthcare resources when it’s not necessary in the first place.”

And then there’s the aforementioned use of chatbots as a directional tool. Even if bots evolve to the point where they can diagnose a condition with 100% accuracy, there’s no guarantee that the people using them would understand the results or know how to act on them.

It’s unwise to assume even the most basic level of healthcare literacy and sound judgment among users. Plus, other risk factors come into play when imminent harm or self-harm may be involved.

“If the patient is conversing and saying they might be at risk for self-harm, we need to make sure the chatbot is capable of understanding that with very, very high accuracy and not missing it,” Fossat stresses. “You have to make sure that you’re monitoring the patient’s safety and that requires intelligence on the part of the machine that is not quite there yet. It’s almost there, but not quite.”
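The asymmetry Fossat describes can be expressed as a routing rule: the evidence bar for escalating a possible self-harm signal to a human is set far below the bar for answering anything routine, so the system errs toward false alarms rather than misses. In the sketch below, the scores and thresholds are placeholders; a real system would depend on a clinically validated risk model.

```python
# Sketch of asymmetric safety routing. The scores would come from models;
# the thresholds here are placeholders, not clinically validated values.
ROUTINE_THRESHOLD = 0.8  # be quite sure before answering a routine intent
SAFETY_THRESHOLD = 0.2   # escalate on even weak evidence of self-harm risk

def route(risk_score, intent_confidence):
    if risk_score >= SAFETY_THRESHOLD:
        return "escalate_to_human"  # never let the bot handle this alone
    if intent_confidence >= ROUTINE_THRESHOLD:
        return "answer"
    return "ask_clarifying_question"

print(route(risk_score=0.25, intent_confidence=0.9))  # -> escalate_to_human
```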


Its depiction in movies and on TV notwithstanding, AI remains far from a perfect science. No matter how many times its limitations are accurately detailed (once more, with feeling: AI is predictive in nature and only as good as the data that informs it), expectations remain out of whack.

Unfortunately, that colors the chatbot user experience. The spread of misinformation and the politicization of COVID-19 have already created systemic mistrust and confusion. Given the other risks associated with technology, especially data privacy and security, chatbot advocates now have to add overcoming skepticism to their to-do lists.

On the plus side, the industry is quite aware of such challenges and has endeavored to get past them. “The bot has to know how to collect data appropriately and safely. So if someone is texting some sensitive material, it needs to know to safeguard that data,” Schibell explains. “Training the chatbot properly is paramount to avoid data breaches.”

That’s obviously a big piece of the way forward, as is codifying a distinct set of terminology around chatbots and the technology that empowers them. This might even involve trading in the moniker ‘chatbot’ for a more descriptive one. After all, the set-in-stone connotation of “chatbot” could itself discourage healthcare providers from utilizing a conversational AI platform and patients from wanting to engage with the system.

“If a chatbot is a true conversational agent, they’ve got to say as much. They’ve got to start using that terminology and start making that distinction,” says ConversationHealth president John Seaner. “The term ‘chatbot’ has caught on, but it is a term for something that certainly isn’t as sophisticated as an NLP-based conversational agent. The industry has got to stop using and intermixing these terms.”

Terminology aside, supporters need to make a stronger case for the benefits of chatbot technology: that when it is designed and utilized smartly, it can revitalize the user experience.

“The bad rap comes because nobody thinks about conversational experience,” Seaner notes. “In healthcare and life sciences, conversational experience is the most important thing. These are patients; they need empathy. You cannot treat them like you’re trying to figure out what their cable bill is.”

Until chatbots truly take hold, then, it might make sense to limit their use to what all parties agree they do particularly well: simple, mundane tasks that aid healthcare professionals. Scheduling appointments, collecting background information about the patient’s need for an appointment, dealing with the paperwork — these are the types of uses that would free up office staff for more patient-facing activities.

“‘What are you here for?’ can be captured and reacted to by AI,” says Wunderman Thompson Health global CEO Patrick Wisnom. “So many of the things that doctors do in the earliest stages could become the realm of AI … I’m sure chatbots will become a much bigger part of the healthcare experience from a professional and patient perspective.”

Other potential areas of improvement include optimizing voice chatbots to better understand medical jargon — which, it should be noted, is no easy task. As it currently stands, voice chatbots far too often trip over medical terms, a situation that can be exacerbated by the different accents or intonations of the individuals reciting them.

Don’t buy it? Consider the number of ways people pronounce words such as “nuptial” or “affidavit,” and then imagine the same issues rearing their ugly heads for polysyllabic medical terminology. Programming bots to better understand medical language should not only improve overall functionality, but also assist people with disabilities or anyone who struggles with the mess of paperwork every doctor’s visit entails.
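One common building block here is fuzzy matching against a controlled medical vocabulary, so that a garbled transcription still resolves to a known term. A minimal sketch using Python’s standard library follows; the vocabulary and the misheard inputs are invented.

```python
# Minimal sketch: recover a medical term from a garbled voice transcription
# by fuzzy-matching against a known vocabulary. Vocabulary and inputs are
# invented; real systems use far larger lexicons plus synonym mapping.
import difflib

MEDICAL_VOCAB = ["acetaminophen", "hypertension", "angioplasty", "bronchitis"]

def normalize_term(heard, cutoff=0.6):
    """Return the closest known term, or None if nothing is close enough."""
    matches = difflib.get_close_matches(heard.lower(), MEDICAL_VOCAB,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(normalize_term("acetaminofen"))         # -> acetaminophen
print(normalize_term("high blood pressure"))  # -> None
```

Note the second result: fuzzy matching recovers misspellings, but mapping a lay phrase such as “high blood pressure” to “hypertension” requires a synonym layer, which is part of why medical jargon remains such a hard problem for voice bots.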

As for enhancements to the inflexible and largely empathy-free way in which healthcare chatbots interact with users, a greater degree of human-centricity is clearly needed. Especially in the realm of healthcare, chatbots need to be able to communicate with users in their native language, using terms that are easy for the average person with limited healthcare literacy to understand.

Nobody doubts that healthcare chatbots are here to stay; few question their utility at a time of great systemic stress. So now it’s up to industry leaders to configure them in a manner that inspires trust and confidence.

Are those leaders up to the task? Wisnom is cautiously bullish.

“The future is limitless, but the first steps into it will be areas that can be easily controlled or still involve heavy elements of human intervention,” he says.