The “confounding” effect of compliance bias
Are randomized controlled trials the best way to test new interventions? Or can we make trials faster and easier by deviating from the RCT model? Despite the recent enthusiasm from some researchers and industry leaders for using observational data instead of conducting RCTs, there are some important reasons why randomization matters. One of those reasons is compliance bias.
What is compliance bias?
In a recent study in JAMA Internal Medicine, researchers from the National Cancer Institute found that patients who did not get cancer screenings as recommended by their physician were 50% more likely to have died of unrelated causes 13 years later. Is this because cancer screenings have preventive health properties beyond detecting cancer? Or is it because patients who follow their doctor’s recommendations on cancer screening are somehow different from patients who don’t?
The authors concurred with the latter explanation, writing that non-adherence to screenings does not itself cause death, but that it is a “marker for a general behavioral profile of nonadherence to medical tests and treatments associated with increased mortality.” This marker of behavioral differences between patients is known as “compliance bias,” JAMA IM editors Dr. Deborah Grady and Dr. Monica Parks explain in an accompanying editorial. Compliance bias “confounds” the study results by making it seem as though not getting screenings leads to mortality; in reality, both screening adherence and mortality are associated with the same underlying factors.
What does compliance bias have to do with randomized controlled trials? Without randomization, you can’t account for confounding factors like compliance bias. If you are comparing study results from two groups, one that adhered to a drug regimen or other behavior and one that did not, you don’t know whether your result has been confounded by compliance bias unless the two groups were randomized ahead of time.
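To make the mechanism concrete, here is a minimal simulation sketch (my own illustration, not from the study or the editorial). It assumes a hypothetical latent “health-consciousness” trait that drives both adherence and mortality, while the screening itself has zero true effect. Grouping patients by observed adherence produces a spurious mortality gap; grouping them by random assignment does not.

```python
# Hypothetical illustration of compliance bias as a confounder.
# Assumed parameters (0.8/0.3 adherence rates, 0.05/0.10 mortality rates)
# are made up for demonstration only.
import random

random.seed(0)

N = 100_000
deaths = {"adherent": 0, "nonadherent": 0, "arm_a": 0, "arm_b": 0}
counts = {"adherent": 0, "nonadherent": 0, "arm_a": 0, "arm_b": 0}

for _ in range(N):
    conscientious = random.random() < 0.5                          # latent trait
    adherent = random.random() < (0.8 if conscientious else 0.3)   # trait drives adherence
    died = random.random() < (0.05 if conscientious else 0.10)     # trait, not screening, drives mortality

    # Observational comparison: group by observed adherence
    group = "adherent" if adherent else "nonadherent"
    counts[group] += 1
    deaths[group] += died

    # Randomized comparison: assignment is independent of the trait
    arm = "arm_a" if random.random() < 0.5 else "arm_b"
    counts[arm] += 1
    deaths[arm] += died

def rate(g):
    return deaths[g] / counts[g]

print(f"non-adherent mortality: {rate('nonadherent'):.3f}")  # noticeably higher
print(f"adherent mortality:     {rate('adherent'):.3f}")
print(f"randomized arm A:       {rate('arm_a'):.3f}")        # the two arms are
print(f"randomized arm B:       {rate('arm_b'):.3f}")        # nearly identical
```

The “intervention” does nothing in this toy model, yet the adherent group appears healthier simply because health-conscious patients are over-represented in it. Randomization breaks that link, which is exactly what an observational comparison cannot do.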
Another important point Grady and Parks raise is that studies may overestimate the benefit of interventions that involve patient adherence, because patients who follow protocol are less likely to die overall, regardless of the intervention itself. This means that if we want to account for compliance bias, we should compare adherent patients in intervention groups with adherent patients in control groups receiving a placebo intervention.
What does compliance really mean?
The comparison between “adherent” and “non-adherent” patients in this study depends on how we define adherence. In their editorial, Grady and Parks describe compliance bias as “a marker for behaviors that are associated with increased mortality” (emphasis mine), such as not seeking out vaccines and other preventive care. But what if non-adherence is not a behavioral problem but a structural one? Is it possible that compliance bias is not a patient characteristic (valuing preventive health) but a result of the way the patient interacts with the health care system?
That’s what Dr. Victor Montori and others who advocate for “minimally disruptive medicine” believe. Despite the relatively low rate of non-compliance in the JAMA IM study above, non-adherence is actually a common problem. About two in five patients don’t follow medical advice in some way, and not following a doctor’s advice is the third most common reason patients are “fired” from clinical practices. From a clinician’s perspective, it seems impossible to work with patients who aren’t taking their medications, getting scans, or changing their lifestyle for their health.
However, as Montori points out, patients are being asked to do a lot for their health, and these added responsibilities may not fit into lives that are already busy and complex. For example, a patient with several chronic conditions may be referred to many different specialists; the patient then has to take time off work and secure transportation and child care for multiple visits to different providers. Yet when patients are unable to follow doctors’ advice, we escalate treatment and ask them to do even more, further increasing their treatment burden.
“We use the same treatment goals for patients despite differences in context. And when we don’t tailor to context, the programs don’t fit. And then the patients don’t do it,” said Montori. “Maybe, by declining our treatments, our patients are trying to let us know that there is something amiss.”
As we study confounding factors like compliance bias, we need to be critical of framing that focuses only on behavior and ignores social factors and patients’ interactions with the health care system.