Peter Pitts, Center for Medicine in the Public Interest

The role of an expert is to help manage a situation that is familiar in many ways but unexpected in its presentation and potential outcome. An expert's value lies in helping to address the uncertainty that can influence the eventual therapeutic outcome.

In the good old days, before the development of digital social networks, an expert was often consulted for the depth of her knowledge on a specific topic. Now that this knowledge is available at the click of a mouse, however, the value of an expert resides in experience that is unique and often neither measurable nor published.

As Duke University professor of computer science Vincent Conitzer has opined, artificial intelligence “involves picking up on some statistical pattern that can be used to great effect, but it sometimes produces answers that lack common sense.”

A physician is expected to provide the patient with the best possible therapeutic advice based on shared knowledge of a specific diagnosis. That task can readily be improved upon, or even replaced, by the tools of artificial intelligence. But, according to the World Health Organization, health is not merely the absence of disease or infirmity. That broader definition of well-being is not easily reduced to an algorithm; it is above all a matter of the relationship between patient and HCP. Here we enter an ethical domain that AI cannot resolve on its own: the role and value of “expertise.”

Using AI rashly is dangerous. Speed kills. Rather than improving the healthcare system as a whole and patients' therapeutic outcomes, the improper use of AI will only diminish the use of professional expertise. Per Malcolm Gladwell, “The key to good decision making is not knowledge. It is understanding. We are swimming in the former. We are desperately lacking in the latter.”

Another issue is how to assess the relationship between artificial intelligence and the natural intelligence (NI) of the human brain. We will use AI/NI to denote the two working in combination.

The relationship between AI and NI can be summarized in two inequalities.

AI/NI ≥ AI + NI. In the optimal situation, AI frees the physician from time-consuming knowledge-based tasks and performs them more efficiently, allowing her to spend more time on the areas where expertise (natural intelligence) is required. It is good for patient and physician alike; it maximizes time, talent, and opportunity.

AI/NI < AI + NI. At best, this represents a simple transfer of knowledge; at worst, it further devalues HCP expertise. If AI is only a transfer of knowledge, patient outcomes will suffer.

A good parallel is the rise of freestyle chess, which led Garry Kasparov to conclude that “weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.”

AI isn’t a healthcare magic bullet, because machines are terrible risk-takers and have no capacity to make leaps of faith. Humans, by comparison, are risk-takers because we have a sense of consciousness and intuition that machines don’t possess. In other words, we have expertise.

AI will have a huge impact on everything from genetics to genomics. It will help identify patterns in vast sets of health data and medical records, and search for mutations and linkages to disease. But for any of this to happen, we must view AI through the lens of 21st-century interoperability: specifically, the teamwork required between artificial intelligence and natural intelligence.

Former FDA Associate Commissioner Peter Pitts is president of the Center for Medicine in the Public Interest. Herve Le Louet is president of the Council for International Organizations of Medical Sciences.