Outcome misreporting goes ignored by top medical journals

Outcome misreporting misleads readers and can introduce bias, which is why medical journals have taken steps to prevent it. The Consolidated Standards of Reporting Trials (CONSORT) is a set of recommendations for reporting randomized trials that many medical journals have endorsed. However, despite journals’ stated commitment to CONSORT, outcome misreporting remains widespread in medical research. What accounts for this discrepancy?

COMParing the trials

In recently published papers from the COMPare project, researchers at the Centre for Evidence-Based Medicine at the University of Oxford set out to determine how prevalent outcome misreporting is in major medical journals, and how those journals respond to criticism.

The COMPare study is the first to systematically analyze how journals respond to requests for corrections. Researchers assessed every trial published over a period of six weeks in five top medical journals that endorse CONSORT: JAMA, Annals of Internal Medicine, The BMJ, The New England Journal of Medicine (NEJM), and The Lancet. For every trial that misreported an outcome, they sent the journal a letter detailing the discrepancies that needed correcting. The results of this landmark study should make us concerned about how medical journals are monitoring outcome reporting. Here’s why:

Outcome misreporting is very common

In all, the COMPare researchers assessed 67 trials and wrote 58 letters, meaning that the vast majority of published trials had a reporting discrepancy. Nearly one quarter of pre-specified primary outcomes were reported incorrectly when checked against the trial’s protocol or clinical trial registry entry. Almost half of the trials did not even have a pre-specified protocol available. The trials also reported 365 new outcomes (5.4 per trial, on average) that weren’t pre-specified in the protocol or trial registry, most of which were not declared to be new outcomes.
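The headline figures follow directly from the numbers above. A quick back-of-the-envelope check, in Python purely for illustration:

```python
trials_assessed = 67   # trials published across the five journals
letters_sent = 58      # one letter per trial with at least one discrepancy
new_outcomes = 365     # reported outcomes that were never pre-specified

# Share of trials that needed a correction letter
print(f"Trials with discrepancies: {letters_sent / trials_assessed:.0%}")  # 87%

# Average number of silently added outcomes per trial
print(f"Novel outcomes per trial: {new_outcomes / trials_assessed:.1f}")   # 5.4
```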

This is very worrying because publishing outcomes other than those originally planned can introduce bias into research. For example, let’s say I want to conduct a study on whether dogs sleep better if you read a book to them. Before I start the study, I need to “pre-specify” how I’m measuring sleep quality (let’s say I choose hours of sleep). After I conduct the study, I cannot look at the data and decide to report a different outcome (like snoring volume) just because it makes the intervention appear more effective. At the very least, I need to explain in the study why the outcomes I’m reporting differ from the pre-specified ones.

However, in most cases, the outcomes study authors reported did not match those previously declared in the study protocol or on clinical trial registries. Authors also did not generally disclose when primary outcomes had changed, or when novel outcomes were added. This makes it very easy to introduce bias into a study. It’s bad science, plain and simple.
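In spirit, the check the COMPare team performed by hand boils down to a simple set comparison. Here’s a minimal sketch using invented outcome names from the hypothetical dog-sleep study above; COMPare’s actual assessments involved carefully reading protocols and registry histories, not running a script.

```python
# Hypothetical outcome lists for the dog-sleep example above.
pre_specified = {"hours of sleep"}  # declared in the protocol or registry before the study
reported = {"snoring volume"}       # what the published paper actually reports

missing = pre_specified - reported  # pre-specified outcomes that vanished
novel = reported - pre_specified    # outcomes added after the fact

for outcome in sorted(missing):
    print(f"Pre-specified outcome never reported: {outcome}")
for outcome in sorted(novel):
    print(f"Novel outcome, must be declared as such: {outcome}")
```

Every undeclared entry in either set is exactly the kind of discrepancy that triggered a correction letter.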

Journals don’t seem to care

The prevalence of outcome misreporting is an important finding, but even more interesting were the journals’ responses to the letters. Even though all five journals claimed to be CONSORT-compliant, their responses to the researchers’ requests for corrections were lukewarm at best. Of the 58 letters sent, 32 were rejected, 18 were published within four weeks, and 8 were published after four weeks. Two journals (JAMA and NEJM) did not publish any of the letters.

COMPare researchers noted that much of the feedback they received about the corrections contained misconceptions about CONSORT, which is worrying. For example, the editors of JAMA wrote back saying they saw no problem with comparing reported outcomes to trial protocols published alongside the study (rather than registered before the study starts).

Both JAMA and NEJM cited space constraints to explain why not all pre-specified outcomes could be reported, but as the COMPare team points out, these journals had no trouble publishing manuscripts containing new outcomes that weren’t pre-specified. Editors of NEJM also told the COMPare team that it was fine for outcomes not to match those in the trial registry because readers could just look them up themselves (the COMPare team notes that doing so took them between one and seven hours per trial).

The editors of the Annals of Internal Medicine published the correction letters but also wrote a response to COMPare containing several misconceptions about CONSORT. They insisted that study authors don’t need to disclose when they’ve changed an outcome, and that they should be able to rely on trial protocols that were not pre-specified.

Journals are supposed to be the gatekeepers separating high-quality evidence from poor science, but when it comes to outcome reporting, they appear to be shirking that responsibility.