Pfizer recently announced a major multinational study involving some 20,000 patients to compare the safety of its COX-2 inhibitor Celebrex with the over-the-counter painkillers ibuprofen and naproxen. “Vioxx scared our industry to death,” says Rick Slattery of Clinical Marketing Consortium (see “Expert Contributors,” right), so it’s little wonder Pfizer is spending $75-$100 million to demonstrate the safety of Celebrex and defend its $3 billion franchise. Although Pfizer is footing the bill, the Cleveland Clinic will take charge of what Slattery describes as probably the largest comparative trial ever in the field of pain and osteoarthritis.

In recent months, head-to-head clinical comparisons like this one have received a good deal of media attention, not all of it favorable.

The Journal of the American Medical Association, in an editorial in March of this year, challenged the results of AstraZeneca’s ASTEROID study of Crestor. To evaluate the drug’s effectiveness in atherosclerosis fairly, the authors said, the company should conduct a comparison of Crestor with Merck’s Zocor.

But spending the time and money for comparative trials is no protection against criticism. An article in the American Journal of Psychiatry earlier this year examined 33 pharma-sponsored head-to-head studies of antipsychotics and, because the outcome favored the sponsors’ drugs 90 percent of the time, charged that “this effect may not be totally unrelated to the funding sources of the trials.”

Worse, the Wall Street Journal ran a front-page exposé headlined: “Fraud, errors taint key study.” The story went on to report that a study comparing the antibiotic Ketek with Augmentin suppressed reports of liver damage and included faked data from an investigator who went to jail.

And with great fanfare, the National Cancer Institute held a news conference to announce that in a head-to-head comparison of breast cancer therapies, raloxifene (Eli Lilly’s Evista) had proved to be just as effective as tamoxifen (AstraZeneca’s Nolvadex) but with fewer side effects.
Of course, when you do get positive results, you can flaunt them. Witness the Pfizer ad for its migraine treatment Relpax, headlined: “Head-to-head versus Imitrex, more patients come out ahead with Relpax.”

As Grey Worldwide EVP Bob Burruss has pointed out, FDA is subjecting superiority claims to increasing scrutiny, and it’s impossible to document such claims without credible clinical data.
Another source of pressure to do head-to-head studies is the fallout as Medicare Part D gets rolling and cost effectiveness may determine coverage. Wait a minute, you might think, doesn’t the law specifically forbid CMS (the Centers for Medicare & Medicaid Services) from using price considerations? Ah, but there’s also a provision in the Medicare Prescription Drug, Improvement, and Modernization Act that directs another HHS branch, the Agency for Healthcare Research and Quality (AHRQ), “to begin reporting on the comparative effectiveness of different ways to treat various conditions.” To close the link, CMS has contracted with AHRQ to do comparative studies.

Though CMS is precluded from using these findings to calculate cost effectiveness, there is nothing to keep the drug plans it contracts with from doing so. Naïve question: doesn’t that mean that anyone with a calculator can work out the cost/benefit ratio? When this question was addressed to a spokesperson at CMS, it evoked a guarded, “We’re not allowed to do that.” But what about your contractors? “That’s up to them.” (Did I hear a chuckle?) Keep in mind that the contractors will make between 70 and 90 percent of coverage decisions, CMS only the remaining few.

The Medicare movement
In fact, the Part D plans won’t even need a calculator: AHRQ is about to publish a reference guide that will include assessments of cost effectiveness based on head-to-head comparisons as well as other data. These are the facts that led Slattery to say flatly: “In order to get product and pricing approval under Part D, comparative studies will be required.”

At the beginning of May, AHRQ announced the results of two studies that are likely to please pharmaceutical manufacturers but not device makers. One concluded that medical management is as effective as revascularization and stenting in renal artery stenosis; the other, that certain drugs are as effective as surgery for the management of GERD.

And here’s the clincher as to what all this means, and never mind the provision the industry fought so hard to have included in the law. A CMS guidance document says: “As the pace of the introduction of a broad range of diagnostic tools and therapeutic interventions quickens, the demand for better information about their effectiveness… becomes even more urgent. Better evidence will help doctors and patients get the most benefit at the lowest cost in our increasingly complex and individualized health care system.”

Dr. Cohen feels that it would make good business sense for pharmaceutical firms to “steal a march on the government” and not wait for such cost-effectiveness comparisons to be thrust upon them. If they don’t, he sees the possibility that “an angry Congress, bombarded with letters from constituents complaining about the rising cost of pharmaceuticals” will impose far more stringent legislative requirements.

Stealing a march on the government
Some pharma and biotech companies are already heeding such advice. Amgen, for example, has announced several head-to-head trials: one compares its investigational multikinase inhibitor with Genentech’s Avastin and Bristol-Myers Squibb’s Taxol; another pits its osteoporosis agent denosumab against Merck’s Fosamax and Novartis’ Zometa; a third compares a pipeline breast cancer agent with Taxol and Avastin. (Dr. Still explains that three-way studies are often referred to as “head-to-head” even though they may also include a placebo arm.)

Dr. Stephenson points to another reason for doing comparative trials: to satisfy the FDA. Some are commitment studies called for at the time of product approval; others are done to support requests for label changes.

But, of course, government pressure is not the only motive. Many studies are designed to meet market needs. Take hypertension as an example, he says. Maybe 15 years ago it sufficed to prove that your drug could lower blood pressure. Now, since physicians have numerous effective products available, they will ask: “Why is your new product better than what I’m using? What characteristics should I look for to identify patients who would benefit from being switched?” To say that it’s better than placebo is not much of an answer.

Obviously there’s a risk involved in paying for such data, as Pfizer found out when a blood-pressure study it sponsored backfired and the generic chlorthalidone proved to have advantages over its antihypertensive brand Norvasc. As Slattery points out, the decision to run a comparative trial depends on the courage and conviction of the manufacturer.

Also influencing the decision is the type of marketplace. “If it’s a big category, like hypertension or infection, where the differences between products may be minuscule,” says Slattery, “the risk may be worth taking. And third, they have to be convinced that they’ll get results that they’ll be able to promote.”

Apparently, a lot of companies believe the risk is worth taking, since they have listed more than 200 comparative studies on the government’s registry of clinical studies (ClinicalTrials.gov).
Another factor leading companies to undertake comparison trials is the desire to market in Europe. Though it has lately made exceptions for certain biologics, the European Medicines Agency used to require comparative trials for all product approvals and still favors them in most instances.
Finally, a growing proportion of new products in the pipeline are for various cancers and other serious chronic diseases, areas of research in which the old gold standard, the double-blind placebo study, would simply be unethical.

Dr. Moser, by way of example, points out that you cannot risk putting patients with heart disease or hypertension on a placebo, nor can you leave patients with HIV infection without treatment. Placebo studies may still be acceptable in studying short-term infections with measurable end points, he says, but Dr. Still estimates that of 45 or more infectious disease trials his company has run in the last four years, the majority were comparative.

So far, FDA has not ruled out placebo-controlled data, but it has published guidance documents that cover what it calls “active (positive) concurrent control” and “multiple control groups,” and it may increasingly require such studies when NDA protocols are first submitted.

Dr. Cohen, for one, thinks it would make sense for the agency to follow the European model. Since the studies that led to the approval of drugs already on the market provide baseline data on effectiveness, he believes that a head-to-head comparison of any new drug for the same indication would provide better data. He considers three-way trials best of all and would like to see them “become the gold standard for evaluating new drugs.”

Robert Schiff not only agrees that placebo studies are unacceptable in conditions where failure to treat could have devastating or long-term consequences, but also advocates three-way studies. There are therapeutic areas, he points out, where inclusion of a placebo arm is essential; as examples he cites the testing of psychotropics, as well as conditions such as erectile dysfunction, where psychogenic effects can skew the results.

AHRQ, for its part, accepts this logic at least in specific instances, stating in a draft document that “large, long-term trials with active and placebo-controlled arms would be needed to assess the safety and benefits of any COX-2 selective analgesic.” But AHRQ studies also have their critics. The Biotechnology Industry Organization, warning that the agency’s data could be used to deny Medicare coverage, stresses that targeted therapies could help some patients but not others. Therefore BIO wants coverage decisions to be flexible enough to provide for individualized care. Orphan drugs, the organization fears, may be particularly vulnerable to blanket rulings, since “many cost-effectiveness and cost-comparison models will break down” when applied to them.

Large vs. small
In a final wrinkle, Dr. Schiff makes the point that while large companies see the need for comparative data, smaller enterprises often prefer placebo studies. “These companies are anxious to get into the market,” he explains, and placebo-controlled trials are not only faster and less costly but, in his opinion, more likely to show proof of efficacy acceptable to FDA. Of course, when a drug is the first in its class there’s little choice.

Michael Rosenberg, who is a physician as well as the head of a CRO, sees reasons, from both the medical and the marketing perspective, why a company would want to know how its products stack up against the competition. “As both a researcher and a clinician, it is difficult to interpret non-comparative studies since each tends to be done differently and even subtle factors in design can markedly influence the outcome,” he says. “Not to mention that there may be spin in how the results are presented.”

As for the marketing point of view, the advantage of having such data applies not only to pre-launch planning but also to post-marketing strategies. “Comparative testing can provide powerful marketing tools not just in documenting competitive advantages, but in refining populations who stand to gain the greatest benefit,” he adds. Such data can also protect against unpleasant surprises by identifying patients in whom a drug should be avoided. “Some drugs may work better for some subgroups than others,” Rosenberg believes, so even if a comparative trial turns out not to show overall superiority, it may still produce highly valuable information.

Sharing such a dual perspective is Dr. Camardo, physician and senior corporate executive. The reason why there are so many more comparative studies than there were 15 years ago, he says, is that clinicians, regulatory agencies, benefit managers, and CMS are all looking for ways to assess how drugs stack up against others in their class. Also, scrutiny of industry actions is getting more intense, creating “a major need to demonstrate clear clinical benefits.” He further emphasizes that it is basic human nature to want to make comparisons. All of us do it when we buy a car, he points out; doctors do it when they prescribe.

Controversies
Nothing the healthcare industry does is ever without friction, so it comes as no surprise that there are dissenting voices when it comes to the credibility of industry-sponsored clinical trials.
“The trouble with comparative studies is that often they’re sponsored by industry, out to prove their drug is superior,” Dr. Moser says bluntly. Hence protocols may be designed to skew the outcome, as when Pfizer compared Norvasc with a beta blocker instead of a diuretic. On the other hand, when the ALLHAT trial of several agents as initial antihypertensive therapy concluded that chlorthalidone, a generic diuretic, was equal or superior in some subsets of patients to all the patented agents it was compared with, “industry didn’t like the outcome, so they called it a flawed study.” In an editorial in the Journal of Clinical Hypertension, Moser set out to demonstrate that ALLHAT was, in fact, the kind of study that should be done: multiple drugs, double-blind, involving more than 33,000 relatively high-risk patients, run over a five-year period… and not funded by the industry.

Dr. Moser also questions why industry-sponsored studies always show the new product to be better than the old. “It’s because some of the negative studies are buried,” he maintains. The bottom line, he adds, is that “protocols have to be looked at a little more carefully by independent people, not just by advisory panels put together by industry. I’m very cynical about a lot of this,” he sums up.

Independent initiative
Why not, Dr. Cohen asks, disarm such critics by going to an external group to conduct comparative studies? “That way, there could be no accusations that the sponsor has influenced the outcome.” Such a hands-off approach, he feels, “could be extraordinarily valuable for the marketing strategy of the company.” But who, apart from industry, would foot the bill? Dr. Still observes that not even NIH has enough money to fund all the necessary studies, leading Dr. Cohen to suggest that there might be a PDUFA-like fee paid as part of new drug applications that could be used to fund well designed, clearly objective comparative studies, thus removing the suspicion of sponsor bias.

Dr. Camardo agrees that there are many questions about comparative studies and that there is a need to address methodologic issues. For instance, in picking a comparator, do you choose the market leader or a competitor where you have reason to believe you can demonstrate an advantage?

Dr. Still observes that obvious rigging of the outcome is unlikely. After all, he says, if it’s done in Phase III, the data may not be useful for registration purposes. And if it’s done in a post-marketing trial, it is apt to meet with considerable criticism from the scientific community. While he’s not aware of such manipulations having occurred in any organization he’s been associated with, he does admit that “you can certainly… find studies that appear very suspicious.” The problem as he sees it is that “the average clinician is probably not very well equipped” to distinguish between good studies and bad.

Dr. Stephenson sees another self-correcting factor. Though admitting that he can see where skeptical concerns come from, he adds that “at the end of the day, the value and power of any study from a marketing perspective is only as good as how close it is to addressing an existing medical question.”

Others have pointed out that the suppression of negative outcomes will not be possible, or at least not easy, once a study has been posted on the new registry for all to see.

Based on our contributors’ collective assessment of the reasons to do more comparative studies, and even acknowledging the risk that, as Dr. Still says, “not every study is going to result in the desired outcome,” it’s reasonable to assume that the trend to sponsor them will continue to grow. There are even some reasons we haven’t mentioned yet.

In addition to Medicare Part D, Dr. Cohen anticipates growing pressure to contain costs coming from patients and other third-party payers, pressure that is bound to spur the demand for cost-effectiveness data. Meanwhile, Rick Slattery predicts that under the Medicare drug benefit, only reasonably priced compounds that can demonstrate good efficacy and good safety profiles will do well, and that marginal products, especially in crowded categories, simply won’t survive. He also points out that biotech companies currently see little need for comparative data, since their products often have unique profiles, but says that will change as more similar products enter the market.

Public perception
But the most compelling reason of all may be what Dr. Stephenson refers to as “a huge drop in public confidence in the industry.” “A few years ago,” he says, “you could launch a new product and people had confidence in the ethics of drug companies. Today there’s a perception that they’re willing to market their products at all costs. So we see drug companies running more studies, larger studies. Basically they’re saying, if you don’t believe me you can believe my data.”
Finally, there’s a voice we haven’t heard from yet: the voice of the market in response to the recent rash of damaging headlines. Post-Vioxx, Medco reports that between 2004 and 2005 the use of COX-2 agents dropped 65 percent. And after questions were raised about the risk of suicide, the number of children under 19 being treated with antidepressants declined by 13 percent. Codes of conduct, PR campaigns, more sensitive TV ads may all help restore the trust the industry once enjoyed, but nothing is more persuasive than data based on reliable studies. They alone will enable companies to say “we have a safe and effective product that’s an improvement over what’s now available—and we can prove it.”