ChatGPT, the new artificial intelligence tool, is getting the cold shoulder from educators and the coding community. But authors from McKinsey and Harvard have a more enthusiastic take on the potential of virtual assistants in healthcare. 

In a new report, they estimate that various types of AI, from machine learning (ML) to natural language processing (NLP), could save the healthcare system between $200 billion and $360 billion. The technology could also improve patient experience and clinician satisfaction while broadening access.

ChatGPT has quickly made a name for itself in healthcare, having passed the U.S. medical licensing exam and notched authorship credits on multiple scientific papers. Universities, researchers and publishers may be debating the place of such AI-based tools.

But healthcare experts aren’t grappling with the tech so much as salivating over how much it promises to boost productivity among payers, physicians and hospitals. One big chunk of savings could come from lowering administrative costs, estimated to account for a quarter of all health spending.

AI is well-suited to tackle such tasks, due to their manual and repetitive nature. Private payers, for instance, could leverage AI to improve claims-management processes, like auto-adjudication or prior authorization.
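
The report doesn’t describe how any one payer wires this up, but the basic pattern behind auto-adjudication is simple to sketch: deterministic rules pay clear-cut claims automatically and route everything else to a human reviewer. In the hypothetical Python below, the procedure codes, dollar limit and field names are all invented for illustration:

```python
# Hypothetical sketch of rules-based auto-adjudication: clear-cut claims are
# paid automatically; everything else goes to a human reviewer.
# All field names, codes and limits here are illustrative, not from the report.
PREAPPROVED_CODES = {"93306", "99213"}  # e.g., echocardiogram, office visit
AUTO_PAY_LIMIT = 500.00                 # dollars

def adjudicate(claim: dict) -> str:
    if not claim["member_active"]:
        return "deny: coverage inactive"
    if claim["procedure_code"] in PREAPPROVED_CODES and claim["amount"] <= AUTO_PAY_LIMIT:
        return "auto-pay"
    return "route to reviewer"  # e.g., prior authorization or manual adjudication

print(adjudicate({"member_active": True, "procedure_code": "93306", "amount": 320.0}))
```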

Meanwhile, ML can predict avoidable readmissions, after which care managers can reach out and intervene ahead of time. Per the report, when such a model was used, 70% more of a plan’s members connected with care managers than before. In addition, follow-up visits rose 40% and all-cause readmissions fell 55%.
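
The report doesn’t describe the plan’s actual model, but the general technique is straightforward: train a classifier on historical claims to score each member’s readmission risk, then hand the highest-risk members to care managers. A minimal sketch, with hypothetical feature names and data file:

```python
# Minimal sketch of a readmission-risk model; features, file and threshold
# are hypothetical, not taken from the report's case study.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

claims = pd.read_csv("member_claims.csv")  # hypothetical historical extract
features = ["age", "prior_admissions", "chronic_condition_count", "er_visits_last_year"]

X_train, X_test, y_train, y_test = train_test_split(
    claims[features], claims["readmitted_30d"], test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank members by predicted risk so care managers can intervene ahead of time.
claims["risk"] = model.predict_proba(claims[features])[:, 1]
outreach_list = claims.sort_values("risk", ascending=False).head(100)
```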

On the flip side of the payer equation, some doctors are enlisting AI to appeal insurance claims. Dr. Cliff Stermer, a Palm Beach, Florida-based rheumatologist, went semi-viral on TikTok in December for sharing a time-saving hack: using ChatGPT to appeal an insurance company’s denial of coverage for a test he had ordered.

One of Stermer’s patients, who suffers from a rare autoimmune condition called systemic sclerosis, needed an echocardiogram to assess cardiac function. In seconds, the chatbot churned out a persuasive letter appealing the insurer’s refusal of coverage, complete with explanatory references. 

“Amazing stuff. Use this in your daily practice. It will save time and effort. We’re loving it here,” said Stermer in the video.
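
Stermer used the ChatGPT web interface; scripted through the OpenAI Python SDK, the same request might look like the sketch below. The model name and prompt wording are illustrative, not taken from his video:

```python
# Illustrative sketch of generating an appeal letter via the OpenAI API.
# The model name and prompt wording are assumptions, not from the source.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "Write a letter to an insurer appealing its denial of coverage for an "
    "echocardiogram ordered for a patient with systemic sclerosis. Explain "
    "the condition's association with cardiac involvement and why the test "
    "is medically necessary, citing supporting references."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # draft only; needs physician review
```

One caveat, echoed in the interview below: such models can produce plausible-sounding but fabricated citations, so any generated letter, references included, needs to be checked by the physician before it goes out.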

A second healthcare target for improving productivity revolves around clinical operations. Take the hospital operating room, a critical resource that isn’t always run optimally. Hours are wasted due to poor scheduling techniques, inaccurate estimates of surgical times and tedious processes for freeing up and reassigning unused blocks of time.

One large regional hospital, per a case study cited in the report, had been losing surgical volumes to other hospitals. Using an AI algorithm, the hospital was able to optimize its OR block scheduler — the system for scheduling open time slots to surgeons — resulting in a 30% expansion in open OR time.
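
The report doesn’t detail the hospital’s algorithm, but the underlying idea can be sketched simply: measure each surgeon’s historical block utilization and release chronically underused blocks back to an open pool. The file, columns and 60% cutoff below are hypothetical:

```python
# Simplified sketch of OR block release based on historical utilization.
# Data file, column names and the cutoff are assumptions for illustration.
import pandas as pd

blocks = pd.read_csv("or_blocks.csv")  # columns: surgeon, block_hours, used_hours
agg = blocks.groupby("surgeon")[["used_hours", "block_hours"]].sum()
agg["utilization"] = agg["used_hours"] / agg["block_hours"]

RELEASE_THRESHOLD = 0.6  # illustrative cutoff for chronically underused blocks
underused = agg[agg["utilization"] < RELEASE_THRESHOLD]
print(f"Releasing {underused['block_hours'].sum():.0f} block-hours to the open schedule")
```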

AI is also being adopted for value-based care (VBC) arrangements, where quality and safety outcomes can impact financial performance for physician groups. The report explains how one such group, which was in a VBC arrangement to reduce total cost of care for a chronic disease, aggregated data from multiple sources with the help of AI. 

That resulted in a more refined care model, which showed potential for cutting down on unplanned admissions. The pilot is currently being rolled out more broadly.

While payers and physician groups are in the scaling-and-adapting phase of using AI within claims management and VBC, the authors point out that not all areas are quite as mature. Hospitals are still piloting the tech in clinical operations. 

Use of AI by physicians for knowledge-based tasks like supporting clinical decision-making and recommending treatment remains in the development stage. By way of example, does anyone remember IBM Watson’s stumbles in healthcare? Or the use of chatbots as virtual patient assistants?

Indeed, the tech will need to overcome major adoption challenges. Those include legacy technology, siloed data, nascent operating models, misaligned incentives, industry fragmentation and a shortage of data science talent, per the report.

But based on the AI-driven use cases, the report notes, private payers could save roughly 7% to 9% of their total costs (amounting to $80 billion to $110 billion) within the next five years using the AI tech now available. Among physician groups, the report estimates savings at 3% to 8% of costs ($20 billion to $60 billion), whereas hospitals that tap into data science could net 4% to 11% of costs ($60 billion to $120 billion).

That translates into savings of $200 billion to $360 billion in healthcare, or 5% to 10% of 2019 spending, without sacrificing quality or access. The authors of the report call for ongoing research over the next few years to validate the tech, including randomized controlled trials to prove its impact in clinical areas.

Some fields may be scrambling to bar its use, but AI applications have already proven their worth in financial services and retail. For the U.S. healthcare system, notorious for being the most expensive in the world while delivering the poorest outcomes, “AI is likely to be part of the solution,” the authors conclude.


Ethics and the AI arms race

To what extent is bias holding back medicine from integrating AI?
By Marc Iskowitz

Artificial intelligence is advancing rapidly in areas such as banking, but the technology isn’t being integrated as quickly in medicine. One of the main hurdles involves ethical concerns — specifically, the fear that applying algorithms to data in various contexts could exacerbate biases.

Avoiding bias requires large, balanced data sets. Skewed data is the norm in health AI research, however, due to historic and persistent inequalities. One widely cited 2019 study, for instance, showed that a triaging algorithm prioritized white patients ahead of equally sick Black patients, in part because it used past healthcare costs as a proxy for medical need.
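
One common way to surface this kind of bias is to compare a model’s error rates across demographic groups. A minimal sketch, assuming a hypothetical audit file and column names; a large recall gap between groups means one group’s sick patients are being systematically missed:

```python
# Sketch of a subgroup fairness audit; file and column names are hypothetical.
import pandas as pd
from sklearn.metrics import recall_score

preds = pd.read_csv("triage_predictions.csv")  # hypothetical audit extract
for group, rows in preds.groupby("race"):
    # Low recall for a group means its patients needing care go unflagged.
    rate = recall_score(rows["needs_care"], rows["flagged_for_care"])
    print(f"{group}: {rate:.2f} of patients needing care were flagged")
```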

The internet has been having a field day with AI-based app ChatGPT since it was let loose last November, and Google is said to be testing its own generative AI chatbot, Bard. Yet while many observers have labeled this an “AI arms race,” there is far more at stake for patients, physicians, hospitals and payers.

MM+M tapped Camille Nebeker, an associate professor and research ethicist in the family medicine and public health department at the Herbert Wertheim School of Public Health and Human Longevity Science, and head of ReCODE Health, to explore how some of these concerns complicate the path to AI adoption in healthcare — and what stakeholders would be wise to keep in mind as they navigate the new technology. 

MM+M: One of the ethical problems in AI is the potential for algorithms to produce results that are prejudiced based on race, gender or physical traits. Which forms of bias are most worrisome?

Nebeker: When models are developed using data that are not inclusive of diverse populations, the output may not be generalizable. The algorithm could suggest a course of action that is not appropriate and, subsequently, result in harm rather than a benefit.

MM+M: Is it possible to prevent algorithmic bias, or at least contain it?

Nebeker: It is not possible to prevent bias, but it is possible to recognize limitations within a particular data set and mitigate downstream harms.

MM+M: As a research ethicist, how do you ensure that studies testing AI applications include an adequate quantity or quality of data?

Nebeker: I’m an investigator on the NIH-funded Bridge2AI program. By involving multidisciplinary experts at the outset, including research ethicists and bioethicists, we have a better chance of proactively identifying and mitigating downstream harms.

MM+M: As we’ve seen with ChatGPT, there’s also the danger that the AI spews out answers that sound plausible but are nonsensical or simply wrong, which can have dire consequences. What’s your view of these recent developments?

Nebeker: There are certainly possible benefits of using ChatGPT that cut across sectors (e.g., education, transportation, healthcare). That said, it is important to recognize that ChatGPT, at this early stage, could exacerbate risks of harm to individuals and groups depending on how it is used and with whom. Research is needed to guide the responsible and safe use of this technology.

MM+M: Some doctors are now encouraging clinicians to use ChatGPT for appealing insurance denials. Do you see any ethical issues with that?

Nebeker: Again, research is needed to assess probability and magnitude of possible harms as well as benefits to the various stakeholders involved … insurance companies, clinicians, patients, caregivers, etc.

MM+M: One agency told me it’s using ChatGPT to generate and optimize promotional messages for doctors (e.g., create a promotional message for the brand).

Nebeker: If the messaging is inaccurate or leads a clinician to make an inappropriate choice about a patient’s treatment, that would be a problem.