Are Medical Journals No-Spin Zones? Not as Often as We Might Want

By Chuck Dinerstein, MD, MBA — May 06, 2019
Two studies look at how words can spin statistically nonsignificant findings into something that sounds positive, and how falsified data spreads unchecked from one meta-analysis to another.
Image modified from Black Hill Thermal Imaging, Creative Commons license

It is a widely held belief that journals favor studies with positive findings, so what are authors to do when their conclusions are not “statistically significant”? They use wordcraft to spin a report of nothing into something. A systematic review in JAMA Network Open makes a strong case that they do.

The authors looked at randomized clinical trials (RCTs), the golden child of scientific objectivity, in six high-impact journals, searching for reports with a clearly defined primary outcome and no statistical significance, a p-value >0.05. They identified about 600 studies published between 2015 and 2017, which they winnowed down to 93 meeting these criteria. Spin, a word associated perhaps more with Bill O’Reilly, was defined as

“use of specific reporting strategies, for whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome, or to distract the reader from statistically nonsignificant results.”

  • Spin was everywhere, beginning with 11% of titles and appearing in roughly 40% and 50% of the results and conclusions sections of both abstracts and full texts.
  • Roughly 60% of abstracts or full texts had at least one section containing spin; in 26% of studies, spin was found in every part of the report.
  • The presence of spin did not correlate with stated conflicts of interest.
  • “… industry-funded research had a lower proportion of spin than nonprofit funded research” (27% for studies funded solely by for-profit sources versus 40% for publicly funded studies).
  • In the results section, authors focused on what was statistically significant, typically a secondary outcome or some variation within groups.
  • In the discussion and conclusions, authors maintained that shift in focus or suggested that nonsignificance denoted some equivalence in benefit or harm relative to the control group.
  • “in approximately 67% of CV (cardiovascular) RCT reports, the reporting and interpretation of outcomes is inconsistent with actual results in at least 1 section of the article.”

We can, like the authors, speculate on the cause of this particular form of wordsmithing: misaligned incentives on the part of journals and authors, or even a natural desire to show something positive for a great deal of work. But it is clear that one must read carefully; looking solely at an abstract, let alone a press release, is often insufficient for appraising a study’s findings.

The articles reviewed were cited a median of 7 times after publication, amplifying their spin. In a separate research letter, the journal equivalent of a short report, another group of researchers tracked the propagation of falsified data, arguably the most severe form of “spin,” through its typical host, the meta-analysis.

Apixaban, a novel (that is, newer) type of anticoagulant, was compared with warfarin (Coumadin), the standard treatment for atrial fibrillation, an arrhythmia of the heart, in a widely publicized clinical trial of efficacy. The FDA found falsified data in that trial; correcting for it shifted the outcome from a benefit of apixaban over warfarin to a more neutral position, let’s call them equivalent. [1] The authors of the research letter found that this known flawed study was subsequently cited in 22 English-language meta-analyses.

Almost 50% of these meta-analyses reached conclusions that would be altered by eliminating the questionable study, and in a third of the cases, reanalysis produced a very different outcome: apixaban was no longer favored.
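The kind of reanalysis the letter describes is easy to picture. Below is a minimal sketch, using entirely hypothetical study names and numbers and a simple fixed-effect inverse-variance model rather than the letter’s actual data or methods, of how dropping one heavily weighted suspect trial from a pooled analysis can flip the verdict from “favors apixaban” to “no clear difference.”

```python
import math

# Hypothetical inputs: (study name, log hazard ratio, standard error).
# The first entry stands in for the trial the FDA flagged; none of these
# numbers come from the actual apixaban literature.
studies = [
    ("suspect_trial", math.log(0.79), 0.06),
    ("trial_b",       math.log(0.97), 0.10),
    ("trial_c",       math.log(0.99), 0.12),
]

def pool(data):
    """Fixed-effect, inverse-variance pooling of log hazard ratios."""
    weights = [1.0 / se ** 2 for _, _, se in data]
    pooled = sum(w * lhr for w, (_, lhr, _) in zip(weights, data)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return math.exp(pooled), math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)

results = {
    "all studies": pool(studies),
    "suspect trial removed": pool([s for s in studies if s[0] != "suspect_trial"]),
}

for label, (hr, lo, hi) in results.items():
    verdict = "favors apixaban" if hi < 1 else "no clear difference"
    print(f"{label}: HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f}) -> {verdict}")
```

Real meta-analyses use more careful models (random effects, heterogeneity checks), but the leverage of a single large, precisely reported study on the pooled estimate works the same way.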

Words have meaning. These two studies, one describing spin, the other the propagation and amplification of falsified data, should be treated as cautionary tales. Not everything is spin, and falsified data is not the norm, but even small amounts provide an opportunity to discredit and tarnish the authority of science. If we are not careful, we will distrust science as we distrust politics, and that would make our policies and regulations much less certain.

[1] Let me hastily point out that the data was from a clinical center in China.

Source: Level and Prevalence of Spin in Published Cardiovascular Randomized Clinical Trial Reports with Statistically Nonsignificant Primary Outcomes, JAMA Network Open, DOI: 10.1001/jamanetworkopen.2019.2622

Source: Evaluation of the Inclusion of Studies Identified by the FDA as Having Falsified Data in the Results of Meta-analyses: The Example of the Apixaban Trials, JAMA Internal Medicine, DOI: 10.1001/jamainternmed.2018.7661

 

Chuck Dinerstein, MD, MBA

Director of Medicine

Dr. Charles Dinerstein, M.D., MBA, FACS, is Director of Medicine at the American Council on Science and Health. He has over 25 years of experience as a vascular surgeon.
