The RCT blind spot

Randomized controlled trials, or RCTs, are known as the “gold standard” in medical research. Unlike observational studies, RCTs randomly assign participants to an intervention or a control group, which balances out, on average, the confounding factors that could otherwise distort results.

However, RCTs do have some downsides, as Dr. Dee Mangin, Professor of Family Medicine at McMaster University, and Dr. David Healy, Professor of Psychiatry at Bangor University, point out in a recent essay in The BMJ. Mangin and Healy argue that RCTs often miss adverse events caused by new drugs, even when those side effects are common. By design, researchers who conduct RCTs are supposed to measure only outcomes that were predetermined before the trial started, to reduce the potential for researcher bias. However, this limits patients’ ability to report adverse events that they suspect were caused by the medication or intervention. As Mangin and Healy write,

“The necessary focus on a primary outcome opens RCTs to a most profound bias: assurance that the trial will deliver only information on what trialists wish to learn about and little to no information on those things that are not proactively assessed, such as most adverse events, irrespective of how common they may be.”

Previous research shows that adverse events in clinical drug trials are inconsistently reported, even in studies with enough participants to detect drug safety issues. In trials designed to measure efficacy, safety findings may be reduced to a single, largely meaningless phrase, such as “the drug was well tolerated,” without any actual data on how many patients suffered adverse events. A 2018 study of cancer drug trials found that 43% included terms that downplayed harms, even though many of these studies contained no information on adverse events at all. 

The lack of adverse event information in RCTs has led to preventable harm, not just because drugs may cause adverse events but because clinicians often don’t believe patients who report them. Our reliance on RCTs, while extremely important for determining the effectiveness of medications, means that clinicians often discount patients’ experiences of adverse events when the trial did not detect the potential harm. 

Healy, Mangin, and their colleague Joanna Le Noury wrote in the International Journal of Risk and Safety in Medicine earlier this year about patients who reported sexual dysfunction after stopping SSRI antidepressants and encountered disbelief, dismissal, and pushback from their clinicians. Many patients found that their clinicians were unaware that a side effect of SSRIs could persist after discontinuation, or did not believe it was possible. The lack of sufficient information on potential adverse events creates a vicious cycle: when clinicians do not know about an adverse event, they may not believe their patients could be experiencing it, so it never gets reported, so the information about it remains incomplete, and so on. In this way, it may take decades for the medical community to acknowledge an adverse event that patients knew about all along.

Healy and Mangin advise clinicians to recognize the value of patient cases and not dismiss them as “anecdotes.” They also urge clinicians to report adverse events to drug companies quickly and to pay attention to the often-ignored end of the drug label, which includes important information about potential side effects that have been reported to the company. 

RCTs may have flaws, but they still offer considerable advantages over observational data. In fact, a recent study in JAMA finds that observational studies cannot answer the vast majority of research questions that RCTs do. And although no study can be completely free of bias, RCTs are often the best way to reduce bias for the endpoints they measure. Healy and Mangin raise a very important point about the limits of RCTs in detecting adverse events, but the increasing power of drug companies has likely done even more damage than the increasing reliance on RCTs. As drug companies fund more and more trials, we’ve seen lower adherence to pre-specified protocols, more spin in articles, and poor trial designs crafted to make new drugs look better. We’ve also seen drug companies try to use “real-world data” to get drugs approved based on very low standards of evidence. Both the failures of RCTs and the growing influence of pharma should be examined closely if we want to improve adverse event reporting and reduce patient harm.