Tipping Point Analysis: Why Peer Review Didn't Catch Drug Study Issues That FDA Did

By Chuck Dinerstein, MD, MBA — Jun 08, 2018
The FDA used Tipping Point Analysis to show that an important study of cholesterol-lowering medications is incorrect. So who got it wrong? The FDA or the New England Journal of Medicine, which peer-reviewed the work?

If lowering LDL cholesterol (the "bad" cholesterol) helps your cardiovascular health, then medications that lower LDL to a greater degree should afford you "better health." That is certainly the thinking behind Repatha and an older study of a drug combination acting both to block the absorption of cholesterol from the GI tract (ezetimibe) and to lower blood levels of LDL (simvastatin). The IMPROVE-IT study found that the combination "resulted in a significantly lower risk of cardiovascular events," quantitatively a 2% absolute reduction, or a 6.3% relative reduction. Despite these published results, the FDA denied expanded approval for the combination, and the reasoning behind that denial is the subject of a short commentary in the Journal of General Internal Medicine.
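Since the gap between those two numbers trips up many readers, here is the arithmetic as a minimal Python sketch. The event rates are round hypothetical numbers chosen only to make the relationship visible, not the trial's actual figures:

```python
# Hypothetical event rates, chosen only to illustrate the arithmetic.
control_rate = 0.32    # share of control patients with a cardiovascular event
treatment_rate = 0.30  # share of treated patients with a cardiovascular event

# Absolute reduction: the raw difference in event rates (percentage points).
absolute_reduction = control_rate - treatment_rate       # 0.02, i.e. "2%"

# Relative reduction: that difference as a fraction of the control rate.
relative_reduction = absolute_reduction / control_rate   # 0.0625, about 6%

print(f"Absolute: {absolute_reduction:.1%}, relative: {relative_reduction:.1%}")
```

The same 2-point absolute difference reads as a much larger relative number, which is why trial results are often quoted both ways.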

The IMPROVE-IT Study

The IMPROVE-IT study ran for six years in order to capture the full clinical effects of the drugs on the typical cardiovascular concerns: death, heart attack, and stroke. It was a large collaborative study (1,158 enrolling centers in 39 participating countries), and physicians put a lot of effort into making this determination. Unfortunately, for a variety of reasons, a large number of the study participants (approximately 35%) left the study very early, during the first year, when many of the clinical endpoints of concern occur. That creates a dilemma, because replacing those participants would add more years to an already six-year study. The alternative is to "impute" values for the missing participants' subsequent cardiovascular health, replacing the actual findings with reasonable estimates, in this case a 6.6% incidence of clinical endpoints. The authors did note the missing data as a study limitation, and imputation of missing values is a standard statistical technique. That said, replacing a third of the data must give us pause; and from a clinical viewpoint, a two-drug combination providing a 2% improvement is closer to "more of the same" than to a clinical "breakthrough." The term "significant" has different meanings for clinicians and statisticians.
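For readers unfamiliar with imputation, here is a minimal sketch of the idea in Python. The 6.6% event rate is the researchers' estimate quoted above, while the cohort sizes and the simple single-rate approach are assumptions made only for illustration; the trial's statisticians would have used something more elaborate:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

ASSUMED_EVENT_RATE = 0.066  # the researchers' 6.6% estimate for missing patients

def impute_outcomes(observed, n_missing, event_rate=ASSUMED_EVENT_RATE):
    """Append simulated outcomes (1 = clinical endpoint, 0 = none) for the
    participants who left the study, drawn at the assumed event rate."""
    imputed = [1 if random.random() < event_rate else 0 for _ in range(n_missing)]
    return observed + imputed

# Hypothetical cohort: 650 participants observed to completion, 350 lost early.
observed = [1 if random.random() < 0.10 else 0 for _ in range(650)]
full_cohort = impute_outcomes(observed, n_missing=350)
print(f"Overall event rate after imputation: {sum(full_cohort) / len(full_cohort):.1%}")
```

Every downstream statistic then treats those simulated outcomes as if they had been observed, which is exactly why the choice of 6.6% matters.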

The FDA

The current paper highlights how the FDA assessed the reliability of the values substituted for the missing data, using a technique called Tipping Point Analysis, a statistical form of sensitivity analysis. Because all studies have to make assumptions and estimates, one way to show that your findings are valid (robust is the jargonized version) is to vary the assumptions and estimates and see whether your result changes. If the results shift, then they are sensitive to your premises, and the validity of the findings rests on whether you believe those assumptions and estimates.

Tipping point analysis applies the same process to the missing, now-substituted data. The missing data are replaced with a range of values, and you look to see how far you must change them for the results of the study to tip from statistically significant to not. The FDA's experts felt that the researchers' replacement value of 6.6% was too low a clinical estimate. In fact, using the actual rate of poor outcomes during the first year (13.5%) in place of the researchers' 6.6% resulted in no statistical difference between the treatment and control groups. The findings, in the view of the FDA, were simply not robust against the assumptions being made. The FDA believed its estimate, based upon the actual data of the study, was more appropriate than the one selected by the researchers.
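To see how a tipping point is found, here is a minimal sketch in Python. It is not the FDA's actual computation: the patient counts are invented, and a simple two-proportion z-test stands in for the trial's time-to-event analysis. The sweep varies the event rate imputed for the missing treatment-arm patients and reports where the comparison stops being significant:

```python
from math import sqrt, erfc

def two_sided_p(events_a, n_a, events_b, n_b):
    """Two-sided p-value from a two-proportion z-test (normal approximation)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    return erfc(z / sqrt(2))

# Invented arms: 9,000 observed plus 4,500 missing patients per arm; observed
# event counts chosen only so the baseline comparison favors treatment.
N_OBS, N_MISS = 9000, 4500
obs_events_treat, obs_events_ctrl = 900, 1020
N_TOTAL = N_OBS + N_MISS

# Hold the control arm's imputed rate at the researchers' 6.6% and sweep the
# rate imputed for the missing treatment-arm patients from 6% up to 14%.
ctrl_events = obs_events_ctrl + round(0.066 * N_MISS)
for rate in [x / 1000 for x in range(60, 141, 5)]:
    treat_events = obs_events_treat + round(rate * N_MISS)
    p = two_sided_p(treat_events, N_TOTAL, ctrl_events, N_TOTAL)
    verdict = "significant" if p < 0.05 else "not significant (tipped)"
    print(f"imputed rate {rate:.1%}: p = {p:.4f} -> {verdict}")
```

With these invented numbers the result tips somewhere between 7% and 7.5%. A tipping point barely above the assumed 6.6% would signal a fragile finding, which is the FDA's point about IMPROVE-IT.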

The details of how the substituted values were derived are not to be found anywhere in the article or the supplemental information. The authors' only statement regarding the missing data was not an account of how it was replaced but a more forward-looking remark: "if adherence had been higher, one might anticipate that a greater clinical benefit might have been seen." I understand the rush to get this information to physicians; I understand the frustration of having to continue to coordinate and collaborate for an additional year or two. You can feel the impatience, and the reason for it, in this statement buried in the supplementary report regarding adverse effects of the drugs:

“Based on the fact that these additional events were highly unlikely to affect the primary and key secondary efficacy and safety endpoints, a primary database lock was conducted to support the AHA presentation and publication. [Emphasis added]”

So I am left with two questions. Why should all of this work be compromised by an arbitrary deadline for presentation when an interim analysis could have been made? And is this another example of how peer review has failed clinicians?

Sources:

If the IMPROVE-IT Trial Was Positive, as Reported, Why Did the FDA Denied Expanded Approval for Ezetimibe and Simvastatin? An Explanation of the Tipping Point Analysis. Journal of General Internal Medicine. DOI: 10.1007/s11606-018-4498-3

Ezetimibe Added to Statin Therapy after Acute Coronary Syndromes. New England Journal of Medicine. DOI: 10.1056/NEJMoa1410489

Chuck Dinerstein, MD, MBA

Director of Medicine

Charles Dinerstein, MD, MBA, FACS, is Director of Medicine at the American Council on Science and Health. He has over 25 years of experience as a vascular surgeon.
