While thousands of people have worked with artificial intelligence for years, the launch of ChatGPT last November brought the technology into the mainstream. Overnight, it became impossible to avoid the hyperbole hit parade about these easy-to-use large language models.

First there was panic: The AI apocalypse will kill us all! At the very least, it will take our jobs! In the next breath, there was boundless optimism: It’s a better listener than human therapists! It will cure cancer!

Most of this, of course, is blather. “People are fond of talking about AI in gloriously vague, inaccurate and non-specific ways,” says Jason Carmel, global lead, creative data at Wunderman Thompson. “It’s like the word ‘medicine.’ Just as there are radical differences between Tylenol and chemotherapy, there are massive differences between the hundreds of artificial intelligences you can use.”

It’s also more important than usual to consider the source. Many marketing professionals have a vague sense of how AI works, akin to how a child might understand the general concept of sex. But when you press them for practical ideas or what it might mean for their companies in the short and long term, blank stares and embarrassed silences ensue.

We’re here to help. Here, a handful of AI experts — real ones, not ones who know just enough to fake their way through cocktail-party chatter — break down a host of AI myths and realities.


Myth: AI is a new (and astonishing) technological breakthrough.
Reality: AI has been revolutionizing health sciences — and every other business — for years.

“What people keep calling an AI revolution is more of a user interface revolution,” says Romain Bogaerts, director of AI product management at Real Chemistry. “Some of these models existed, but just a small group of engineers could use them.” Now they are so simple and intuitive that anyone can benefit: patients, copywriters, homework-avoidant grade schoolers and more.


Myth: Medical marketing companies should scramble to hire AI specialists.
Reality: Absolutely not, according to Eversana Intouch CEO Faruk Capan. Once a secure large language model (LLM) is in place, it’s important to encourage non-specialists to experiment with AI. Only then does innovation start to emerge from unlikely places.

“People need to play with it,” Capan stresses. “Our creative people are the most active with ideas about how to use it.”

It won’t be long, Carmel adds, before the idea of hiring a “prompt engineer” sounds as silly as “we should hire a specialist who knows how to use spell check” or “let’s recruit some top-notch Googlers.”

“It’s not a question of re-engineering your marketing teams,” Carmel continues. “It’s just making sure marketing teams are fully versed in the technology so they can take advantage of it.”


Myth: AI will eventually replace clinicians.
Reality: Uh, no. “This will never happen,” says Eric Topol, M.D., a cardiologist, professor and author of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again.

“Humans in the loop for all important medical matters/decisions will always be required,” he tells MM+M via email.


Myth: AI isn’t reliable.
Reality: Neither are people. The important thing, experts believe, is keeping human eyes on what AI might get wrong. LLMs trained on words aren’t great with numbers. They often can’t replicate results. They occasionally make stuff up, a malfunction known as hallucination or confabulation.

As teams become more proficient with AI, Bogaerts says, they will start to assess the cost of getting it wrong. Starting a patient on the wrong drug could be fatal, an error no one can afford. But generating a series of ineffective social media posts? Those costs are negligible.

As a result, businesses will quickly make cost-based trade-offs. “If customer service is 95% cheaper than humans, even if it’s not as good, companies will use it and accept some mistakes,” notes Mark Bard, cofounder and managing partner of DHC Group, formerly the Digital Health Coalition.


Myth: AI gets more trustworthy every day.
Reality: Not quite. “How is ChatGPT’s behavior changing over time?”, a study conducted by researchers at Stanford University, evaluated the ability of LLMs to perform specific tasks, including taking the U.S. Medical Licensing Exam and identifying prime numbers, over a three-month period. Some days the models achieved 84% accuracy; on others, just 51%. Much still depends on the user’s tolerance for risk.


Myth: Sure, there are ethical concerns associated with AI. But good people are figuring out that part as we go along.
Reality: Sorry, but many health technologists consider it naïve to put faith in the collective good intentions of the tech industry. Doctors, for their part, have been urging healthcare organizations to establish firm ethical guidelines before authorizing providers to use AI.

“We’re also concerned about patients,” says Hanssen Li, M.D., a radiology researcher at Emory University and co-author of a study published earlier this year in The Lancet. AI outputs can provide misleading or false medical information to patients doing their own research, he notes: “It can sound so plausible and often more eloquent than doctors — and it speaks with real authority. But it can still be wrong.”


Myth: AI is smart enough to work bias out of its system.
Reality: Despite frequent claims to the contrary by interested parties, AI’s propensity to amplify bias remains one of its most significant limitations. “All AI does is trap bias in amber and codify it,” Carmel explains. “And if data isn’t cleansed of bias, it perpetuates the problem. Take, for example, the underreporting of melanoma in people of color. If we don’t fix that data now, bias will just get worse.”

Li is similarly concerned. “Medical knowledge is generated from large, western, high-income countries. Feeding that data creates an inherent bias. It could make people overlook other avenues of treatments that might be better for a patient who doesn’t live in the western part of the world.”


Myth: AI will save us So. Much. Money.
Reality: Maybe? MM+M recently cited findings from the National Bureau of Economic Research, which estimated that AI could reduce annual healthcare costs by $360 billion. But such sums are still very much guesses — and potentially overinflated ones, Bogaerts believes. There might be savings, but there will also be costs.

“You’ll see cultural costs because people are involved and there are changes of process,” he explains. “Plus there are opportunity costs. If you focus on AI, you’re not focusing on something else.”

Don’t forget the initial expenses, as businesses figure out which LLM best meets their needs, and the ongoing costs associated with security and training systems on the most relevant data. How much should that technology cost? “The market will decide, but it won’t be free,” Capan says.


Myth: AI will enable medical breakthroughs.
Reality: Perhaps. But more importantly in the eyes of the medical community, it will handle much of the grunt work currently hamstringing providers. Much in the same manner that Google Scholar and PubMed freed doctors from hours of cranking through microfiche, AI can automate their most vexing administrative problems.

“Things such as prior authorization are completely broken and people in medicine know it,” Bard says. “Once there is some way to automate this back and forth, physicians will love it.”

Li, for his part, is hopeful about AI-driven scribe products. Amazon recently introduced AWS HealthScribe, a HIPAA-eligible service for healthcare software providers that uses speech recognition and generative AI to automatically create preliminary clinical documentation from patient-clinician conversations. Meanwhile, Google is reportedly piloting Med-PaLM 2, an AI program trained to answer medical questions, summarize documents and organize data, at the Mayo Clinic.

“Doctors often feel like they talk more to screens than they do to patients,” Li explains. “It will be great to have technology that lets radiologists like me ask, ‘When did this patient last take a drug that might interfere with the image?’” He’s also optimistic about AI’s ability to simplify research by powering through the world’s 30,000 medical journals to find relevant studies.


Myth: AI puts an infinite army of experts at every user’s disposal.
Reality: It’s more like an infinite number of college interns, some as dumb as a box of hammers.

Interns can do plenty of good work, Bogaerts says, but they need constant human oversight.

“None have enough experience to go on a client meeting or meet with providers or patients. And it’s the same with AI,” he adds.


Myth: AI remains terrible at generating images.
Reality: Yes, its issues with hands are well-documented. But AI’s image-creation capabilities seem to be improving on an almost hourly basis, and its potential in the realm of video is enormous.

“AI can generate a video of a bear playing poker so fast it makes your head hurt,” Bard proclaims. “It’s not just, ‘Is this a deep fake?’ It’s that it radically changes how we create what we perceive as video content. But what’s the application for pharma?”

AI video will likely shape the next election, with the Republican National Committee already using it for attack ads. Then there’s Under Armour, currently running a campaign centered around an AI-generated script. “At some point, we may have completely AI-generated commercials,” Bard adds.


Myth: Pharma companies are clamoring for AI-driven marketing.
Reality: Maybe, but don’t forget that pharma has historically been a slow adopter of new technologies. “I see RFPs daily and most don’t mention AI,” notes Ryan Bearbower, VP of business development at Wunderman Thompson Health. “It’s not because pharma companies can’t think creatively. It’s to ensure they won’t get sued.”


Myth: Pharma will be too timid to use AI successfully.
Reality: This might be true at first, but Dania Alarcon, chief medical officer at Wunderman Thompson Health, thinks the industry’s biggest organizations won’t be able to resist the lure of de-identified data. “It’s true that, in pharma, we’re in a regulated and privacy-driven atmosphere. It will take pharma longer than people think to warm up to it,” she says.

But companies have been using de-identified data for years; it allows them to protect patient privacy while combining many different data sets for richer results. The more people in and around the industry understand those benefits, the more comfortable they will likely become.


Myth: AI will cure cancer.
Reality: It might help, according to Alarcon, who points out that AI came up repeatedly at the most recent American Society of Clinical Oncology annual meeting.

“Many people are looking at AI’s medical applications and drug development — how you can tailor therapy and predict response to treatment. If humans ask the right questions, can we adjust the drug to treat the patient in front of you?”


Myth: AI might go off the rails and destroy civilization. Elon Musk said so.
Reality: The worst-case scenarios are severe enough to invoke sci-fi paranoia in us all. Indeed, automation-running-amok has long been nightmare fodder. “It’s like WarGames, and people are right to be afraid of some unchecked artificial intelligence that doesn’t have a human ‘off’ switch,” Bard says.

But relatively few technologists believe that AI will be able to outsmart the humans who invented it. Instead, they stress the importance of staying vigilant about the bad actors who use it: implementing systems to double-check AI’s accuracy and assumptions, and making sure it guards employers’ intellectual property and doesn’t pilfer other people’s ideas or patents.

It might be a cliché, but Capan believes it contains plenty of wisdom: “AI isn’t going to take your job. But a human who knows how to use it will.”