The risk/benefit fallacy
When the members of the FDA advisory committee voted 22 to 1 to keep Avandia on the market, they had concluded that the epidemiologic data showed that the benefits of the drug outweighed the risks.
Such data analysis is no fallacy. The fallacy is to expect doctors to find risk/benefit ratios meaningful in making clinical decisions.
It happened that the day the Avandia decision was announced, my wife and I had lunch with a friend who is paraplegic. She had undergone, she explained, elective spinal surgery and something went wrong. “My spine is fine,” she said ruefully, “but I can't get out of this wheelchair. If only I had known the risk.”
But how could she have known? The incidence of paralysis in disk surgery is so low that the statistics are unlikely to have deterred her. That's what makes the risk/benefit formula meaningless: if you are the one who has a bad reaction, for you the incidence is 100%. That's why those who get hurt are so unforgiving. If you took Vioxx and had a heart attack, it had to be Merck's fault. If, as most juries have agreed, Merck was not to blame, then what about the FDA, for having approved Vioxx? Never mind the laws of probability; there's got to be somebody I can sue! That's what makes risk/benefit assessment the industry's Achilles' heel.
Here is another vulnerable heel: the misperception surrounding the term “side effects.” It suggests dry mouth and dizziness…the trivia summarized in package inserts, not heart attacks. What the public fails to understand—and what most companies are reluctant to spell out in their DTC promotion—is that the problems that make headlines are usually not side effects but concomitant actions. Few potent drugs affect only a single organ or system, and attempts to tease the desired action apart from the unwanted ones (as the COX-2 experience proved) are at best only partially successful.
Which is why when you take aspirin to protect your heart, you are putting your guts at risk. If you then take a PPI to protect the stomach, that increases your chances of breaking a hip. What about adding Fosamax? (See package insert.)
So can risk assessment be made more actionable? Cohen and Neumann, in their paper* on the “shifting benefit-risk landscape,” attempt to introduce objective data by comparing the risks of medications with those we accept routinely. Daily aspirin, they say, is as risky as driving a car every day. Most of us do drive, so does that mean the risk of aspirin is acceptable? Not according to my gastroenterologist (though I have to drive to see him); my internist disagrees.
The authors also point out that when Tysabri was taken off the market, half the patients said they would accept a risk of one in a thousand to be able to continue. When it once again became available, however, only 25% resumed taking it, even though the actual risk is just 65 per 100,000.
The only conclusion that seems incontrovertible is that marketing prescription drugs is a high-risk venture. And that nobody knows the answers. No, that's not quite right. Some people do seem to know: the members of Congress, who confidently voted for a “risk evaluation and mitigation strategy” for all new products. Let's wish them (and the rest of us) the best of luck.
*Health Affairs, May/June 2007.
Warren Ross is MM&M's editor at large