Clinical Trial Results Go Unreported All Too Often

By Gil Ross — Feb 18, 2016
A large fraction of human studies at major academic centers listed on ClinicalTrials.gov are never reported. Over one-third never come to light, and many others take far too long. This distorts science-based public health policy and practice.

Most clinical trials conducted by researchers at U.S. academic medical centers did not report or publish results within two years of completion. A large new study in BMJ reviewed thousands of registered trials and found that an astoundingly large fraction of them never see the light of publication, leading to major distortions in public health policy and medical practice.

The article is entitled "Publication and reporting of clinical trial results: cross sectional analysis across academic medical centers," by Harlan M. Krumholz, MD, of Yale School of Medicine, and colleagues.

The researchers used the Aggregate Analysis of ClinicalTrials.gov (AACT) database and manual review to identify all interventional clinical trials registered on ClinicalTrials.gov with a primary completion date between October 2007 and September 2010 and a lead investigator affiliated with an academic medical center. Their goal was to determine the proportion of trials that disseminated results, defined as publication or reporting of results on ClinicalTrials.gov, both overall and within 24 months of study completion.
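To make that selection logic concrete, here is a minimal sketch in Python. This is not the authors' code, and the column names (lead_investigator_at_academic_medical_center, dissemination_date, and the rest) are hypothetical stand-ins, not the actual AACT schema:

```python
import pandas as pd

# Minimal sketch of the cohort selection described above.
# All column names are hypothetical; the real AACT schema differs.
trials = pd.read_csv(
    "aact_studies.csv",  # hypothetical export of the AACT database
    parse_dates=["primary_completion_date", "dissemination_date"],
)

cohort = trials[
    (trials["study_type"] == "Interventional")
    & (trials["primary_completion_date"] >= "2007-10-01")
    & (trials["primary_completion_date"] <= "2010-09-30")
    & (trials["lead_investigator_at_academic_medical_center"])
].copy()

# "Disseminated" means results were published or posted on ClinicalTrials.gov;
# "timely" means dissemination within 24 months of primary completion.
# A missing dissemination_date (never disseminated) compares as False here.
days_to_results = (
    cohort["dissemination_date"] - cohort["primary_completion_date"]
).dt.days
cohort["timely"] = days_to_results <= 730

print(f"{cohort['timely'].mean():.0%} disseminated within 24 months")
```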

They reviewed more than 4,300 interventional trials based at 51 trial-experienced U.S. academic medical centers. The surprising, indeed disturbing, findings: only 36 percent reported or published results within two years of study completion (ranging from 16 percent to 55 percent across individual centers), and fully one-third never reported or published results at all in the years that followed.

This lack of reporting means that only a biased slice of research information has been influencing medicine and future research. Lead author Dr. Krumholz told MedPage Today, "We talked to a lot of people about it and we failed to find any single problem. Some people weren't excited about the results, others got busy or distracted, many were small and maybe they use the [data] to inform their next research project."

Krumholz went on to say that some researchers stated, "Maybe the study was so bad we shouldn't report the results," implying that they avoided sharing or publishing results because of potential embarrassment. But he stressed that if participants consented, and experiments were conducted, then the data "need to see the light of day ... if it's good enough to be consented, it's good enough to share the results."

And he went even further: "We were aware that this was a big problem; we've published studies that researchers are slow to publish research on human experiments. We were dismayed that the response to this has been slow in the academic community, and we thought this 'report card' would be useful. The fact that it's so pervasive suggests it's not about bad individuals, it's about a culture that allows for reporting to be discretionary rather than mandatory."

"This is a human subjects' violation," Krumholz explained. "People have agreed to be part of our studies. ... All studies should be completed and reported, but these in particular, are human studies. These aren't studies that have fallen off the tracks. These are studies that were successfully completed. This should alarm everyone."

The authors conclude that "despite the ethical mandate and expressed values and mission of academic institutions, there is poor performance and noticeable variation in the dissemination of clinical trial results across leading academic medical centers."

The public, academic institutions, and public health policymakers should be alarmed by these results because when null or negative findings are ignored or trashed rather than published, the medical literature is skewed toward the perhaps relatively few studies that do show an effect of a treatment or behavior.

This is known among scientists as "publication bias." If 90 percent of studies show no effect and 10 percent show some effect, but the 90 percent never see the light of day, the published record paints a falsely positive picture of benefit or efficacy. This redounds to the detriment of public health: the mandate to publish all human trial data should be rigorously enforced in the name of science and sound public health policy.
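A toy simulation makes that arithmetic vivid. The sketch below is illustrative only, not drawn from the BMJ study: it assumes a treatment with no true effect and shows how publishing only the trials that happen to cross a "positive" threshold inflates the apparent benefit.

```python
import random

# Toy illustration of publication bias (not data from the BMJ study).
# Assume a treatment with zero true effect: every trial's estimated
# effect is pure noise around 0, in standard-deviation units.
random.seed(0)
trial_effects = [random.gauss(0.0, 1.0) for _ in range(1000)]

# Suppose only "positive" trials, those whose estimated effect clears
# an arbitrary threshold, ever get published.
published = [e for e in trial_effects if e > 1.0]

print(f"all {len(trial_effects)} trials, mean effect: "
      f"{sum(trial_effects) / len(trial_effects):+.2f}")
print(f"{len(published)} published trials, mean effect: "
      f"{sum(published) / len(published):+.2f}")
```

In this toy world the treatment does nothing, yet the "published" literature reports an average effect of roughly +1.5 standard deviations, a benefit that exists only because the null results were filtered out.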