The Journal of the American Medical Association’s January issue addresses the FDA approval process, and among the items to note are some of the reasons behind FDA rejections as well as the loose ends that can accompany an FDA drug approval.

Among the findings: the FDA does not apply the same rules to all medications. This is no surprise to the rare-disease community, whose drugs are necessarily reviewed on the basis of small patient populations, yet the variability in approval criteria goes beyond clearly defined groups. Researchers found, for example, that although the FDA’s guidance recommends at least two “pivotal trials” demonstrating efficacy, the features of these trials (e.g., whether they were randomized) differ by therapeutic category, and that four of the 206 novel therapeutic agents approved between January 1, 2005, and December 31, 2012, were approved without such trials.

Researchers also said that surrogate endpoints were used in about half of the approvals during the surveyed period. This gave them pause because “reliance on surrogate outcomes leaves patients and physicians to extrapolate clinical benefits from trials, again raising questions about the certainty of the medications’ benefits in practice.”

This concern speaks to a larger issue, much discussed by patients and industry alike: clinical settings and the real world are not the same. The researchers, who write that flexibility is what allows the FDA to get needed treatments into the marketplace, see an opportunity for change and recommend the “life-cycle approach” the Institute of Medicine suggested in its 2006 “Future of Drug Safety” report. The goal is to monitor drugs’ risks and benefits after they reach patients in the larger marketplace, then use this feedback loop to assess safety and efficacy and communicate the findings to healthcare professionals (HCPs) and patients.

They also float the idea of the FDA using a grading system that would let patients and physicians distinguish between drugs that were approved using more (or less) robust data.

A companion report found that the FDA generally rejects drugs when questions remain unanswered, such as the optimal dose, or when the data fall short, as with inconsistent trial results or endpoints that fail to convey a clinically meaningful effect.

These researchers found that the FDA approved 73% of new molecular entities (NMEs) after a single round of review between 2000 and 2012, and that about 24% had to gather more data first, which tacked a median of 435 days onto the approval timeline.

Of the two studies, this was the one with which editorial writers Steven Goodman and Rita Redberg found much to take issue, partly because things like dosing and trial design are known before drugs are put up for review, and because more information, such as whether the FDA changed requirements midstream or whether the manufacturer ignored the agreed-upon plan, is necessary for context.

They also take issue with the report’s failure to discuss how many drugs the FDA restricted after approval, particularly in 2009, a year in which the “FDA approved 181 major safety related actions, including 25 black-box warnings and 19 contraindications,” they write. Such figures would bolster the life-cycle approach the researchers mentioned, but Goodman and Redberg also note an inherent problem with post-market feedback: “When postmarket studies do show safety problems or ineffectiveness, it is difficult to change practice (e.g., Avastin for breast cancer).”