
Investigating the “spin cycle” in cardiovascular research

Academics, clinicians, and others who study medical research have noticed a disturbing trend in clinical trials in recent years: Negative trials that sound positive. 

A randomized controlled trial is considered “negative” if the primary endpoint being measured is not significantly improved in the intervention group compared to the control group. A negative trial does not mean the trial was bad or not worth publishing; in fact, some of the most influential trials in recent years were ones that showed that a medical intervention did not actually work better than a placebo (think ORBITA trial).  

However, it is concerning when a trial that does not meet its primary endpoints is “spun” to seem as though the results were positive. This happens more often than you might think. The authors of the 2018 CABANA trial measuring the effectiveness of cardiac ablation faced criticism for putting a positive spin on the results even though the primary outcomes were negative. A 2014 study found that 53% of trials for a certain type of lung cancer published between 2001 and 2010 reported a positive outcome without meeting their primary endpoints.

How often do trial authors spin results? A team of researchers, led by Dr. Muhammad Shahzeb Khan at the John H. Stroger, Jr. Hospital in Chicago, Illinois, recently published a sobering study in JAMA with an answer for one specialty. Khan et al reviewed nearly 100 cardiovascular randomized clinical trials published in “high-impact journals” between 2015 and 2017 that had negative primary endpoint results, to see how many of them included positive spin. They found that 67% of the negative cardiovascular RCTs contained spin within the main text of the article and 52% contained spin in the abstract. More than 10% of articles had spin in the title of the study. That’s enough to make you dizzy!

How do trial authors “spin” negative results into something positive? The study authors describe some specific “spin strategies” that were common in RCTs. One is to redirect attention away from discussing the negative primary results by focusing on positive secondary endpoints or subgroup comparisons (as the CABANA presenters did). Another way to spin results is by saying that “Intervention X” is a safe and reasonable alternative to the existing standard of care because it did not perform significantly worse than the control group in the study (even though it didn’t perform significantly better). Last, but not least, some authors simply say that the treatment was beneficial and don’t mention the fact that it wasn’t significantly more beneficial than the control group.

Why is spin so prevalent in cardiovascular RCTs? One might assume that financial conflicts of interest play a role, but the study authors did not find an association between level of spin and conflict of interest disclosures from the first and last authors. In fact, industry-funded research had a lower proportion of spin than nonprofit-funded research. This could be because financial conflicts of interest are not always disclosed. Or it could be that other forces lead to positive spin, such as pressure to publish in high-impact journals. 

There is a “well-known bias of scientific journals and lay press toward positive results,” writes Stephan Fihn, Deputy Editor of JAMA Network Open, in an accompanying editorial. To combat this, he argues, journals should “make every effort to embrace well-founded, negative findings as avidly as positive ones.”

It is the responsibility of journals, especially “high-impact” journals, to publish good science, whether the results are positive or negative, and to reject articles that aim to spin negative results into positive ones. And although it should not be consumers’ responsibility to fact-check already peer-reviewed articles, it helps to be aware that spin is prevalent in clinical research.
