What does overdiagnosis mean? If you’ve been following the Lown blog these past few weeks, you’re probably aware that overdiagnosis is a potential harm of too much cancer screening. But how do we define overdiagnosis, and what makes it different from other screening harms such as false positives or overtreatment?
In a special article in the Annals of Internal Medicine, Dartmouth Institute researcher Dr. Louise Davies and colleagues clarify what we mean by overdiagnosis in cancer screening, how we can measure overdiagnosis, and how overdiagnosis is different from other forms of overuse.
The authors define overdiagnosis as “the detection of a histologically confirmed (confirmed through biopsy or surgery) cancer through screening that would not otherwise have been diagnosed in a person’s lifetime had screening not been done.”
To understand exactly what this means, it’s important to point out what it doesn’t mean. Overdiagnosis is not the same as a false positive, where a screening test comes out positive but a biopsy reveals that it’s not cancer. Overdiagnosis is not misdiagnosis, where cancer is incorrectly diagnosed. Overdiagnosis is not the same as overtreatment, which is treating cancers that never would have caused harm or that were misdiagnosed. Overdiagnosis is also different from finding cancers that would have later been caught, but that would not have benefited the patient to treat (a “grenade” type cancer, or a cancer with no effective treatment, for example).
This handy graphic from Dr. F. Perry Wilson's blog on MedPage Today helps illustrate the differences among these types of unnecessary care in cancer screening.
According to Davies et al, overdiagnosis can happen through two mechanisms — maldetection and misclassification. Maldetection occurs when a test cannot distinguish between clinically important and unimportant cancers, so a cancer that would never otherwise have been detected gets diagnosed. Misclassification occurs when a disease exists on a spectrum and the threshold for diagnosis encompasses people whose condition isn't clinically important (see the debate over PCOS overdiagnosis, for example).
There's an obvious difficulty with the definition proposed by Davies et al: how do you measure overdiagnosis when you can't tell which cancers would otherwise have been found? As Wilson points out, we can measure overdiagnosis on a population level, but "we can never really know if a given individual was overdiagnosed."
Davies et al advocate calculating an overdiagnosis rate as the number of overdiagnosed cancer cases divided by the number of people screened. Because overdiagnosis cannot be directly measured, the authors recommend several methods for estimating it at the population level: comparing the number of cancer cases in a screened population to the number in a similar but unscreened population; using population data to project incidence trends forward in time; and using modeling to estimate the time between a cancer's detection by screening and when it would have been detected clinically.
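As a rough illustration of the first approach (comparing screened and unscreened populations), here is a minimal Python sketch. All numbers and the function name are invented for illustration only; they do not come from Davies et al or any real trial.

```python
# Hypothetical sketch of an excess-incidence estimate of overdiagnosis.
# With long follow-up and little screening in the control group, the
# excess cumulative cancer incidence in the screened arm approximates
# the number of overdiagnosed cases per person screened.

def excess_incidence_rate(cancers_screened, n_screened,
                          cancers_control, n_control):
    """Estimated overdiagnosed cases per person screened: the difference
    in cumulative cancer incidence between the screened and control arms."""
    rate_screened = cancers_screened / n_screened
    rate_control = cancers_control / n_control
    return rate_screened - rate_control

# Invented example: 500 cancers among 10,000 people screened vs.
# 400 cancers among 10,000 unscreened controls.
rate = excess_incidence_rate(500, 10_000, 400, 10_000)
print(f"Estimated overdiagnosis rate: {rate:.3f} per person screened")
# i.e., about 10 overdiagnosed cases per 1,000 people screened
```

Note that this estimate is only as good as its assumptions: if the control group gets screened, or follow-up is too short for slow-growing cancers in the control arm to surface clinically, the excess incidence will misstate the true overdiagnosis rate.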
All of these methods have drawbacks and are difficult to execute. The authors recommend using data from randomized controlled trials to compare cancer rates in screened versus unscreened populations, provided the trial has a long follow-up period and minimal to no screening in the control group. They also emphasize the importance of reporting the study design alongside any estimate of overdiagnosis, so the methods can be evaluated.
Is overdiagnosis inherently bad? Overdiagnosis not only leads to overtreatment but also creates stress for patients, who have to cope with an alarming diagnosis. However, avoiding overdiagnosis by screening less means, by definition, that we will find fewer cancers – some of which it would have been beneficial to find. This is why being able to define and measure overdiagnosis is so important: it gives doctors and patients the context to make an informed decision about screening.
The authors give tips for communicating with patients about the risks of overdiagnosis. One interesting tip was to use language about outcomes rather than lives or years lost, since people are naturally loss-averse and will "express preferences inconsistent with their values when the decision is phrased as loss." Instead, use pictographs or similar decision aids to show the benefits and risks of screening.
The authors also recommend telling stories whenever possible; there are plenty of stories from people who believe their lives were saved by cancer screening, but few from people who were harmed by it. Telling patients how many cases are overdiagnosed by screening compared with how many lives are saved is another way to communicate the potential harm of overdiagnosis. This information should help clinicians start much-needed conversations with patients about overdiagnosis.