Quality performance measures are the hottest new thing in medicine. Payers are increasingly tying payments to measures of quality – just this January, Medicare began requiring physicians who participate in its Merit-based Incentive Payment System (MIPS) to choose from among many new quality performance measures. In theory, paying doctors for performing well on the measures is a triple win – patients are safer and healthier, insurers pay less in the long run, and doctors get a bonus. The only problem? Many of these measures aren’t actually correlated with the outcomes patients and doctors care about.
Frustration with inappropriate quality measures drove Dr. Ronald Adler, Associate Professor of Family Medicine at UMass Medical School, to create Care That Matters, a primary care physician advocacy group dedicated to promoting more meaningful quality measures – and eliminating inappropriate measures. “Most of the measures we were getting graded on didn’t correlate with outcomes that matter: better health for our patients or lower costs for our patients or the system as a whole,” says Adler. He hated spending time on unhelpful measures when he could be interacting with patients in more meaningful ways. Adler and his colleagues wanted to draw attention to the lack of evidence behind these measures.
So Adler partnered with Dr. Alan Drabkin, Section Editor for Family Medicine at DynaMedPlus and Assistant Professor at Harvard Medical School, and Dr. Courtney Scanlon and Dr. Brian Randall from Tufts University School of Medicine, to conduct an evidence-based critical appraisal of quality performance measures. They presented this research at the 2017 Lown Conference and received an Audience Choice Award.
In their abstract, “Checking the Check Boxes: An Evidence-based Review of Quality Performance Measures,” Drabkin, Adler, and colleagues evaluated a subset of the MIPS quality performance measures. Among the 65 MIPS quality measures for primary care, only 20 met their criteria for an appropriate measure. Twenty-one were not supported by evidence that the measure is appropriate and that the benefits of performing the action outweighed the costs or other harms. The remaining 24 had evidence of effectiveness and benefit, but weren’t specified clearly enough to allow for reliable implementation.
Adler and Drabkin point out that requiring completion of inappropriate measures not only wastes physicians’ time, it can also cause real harm to patients. For example, if a doctor has an elderly patient whose blood pressure is slightly above the performance target, the doctor could improve her score by prescribing a blood pressure medication, even though that would increase the patient’s risk of falls. “It’s very problematic when a quality measure pits the interest of the physician against the interests of the patient,” says Adler.
Why do we have so many measures that are harmful or ineffective? Adler and Drabkin say it’s an issue of putting the cart before the horse. Everyone is on board with measuring quality, so more and more measures get proposed, with no one stopping to evaluate them. “There’s a compelling case to measure, so we have grasped at anything we can measure,” says Drabkin. “Things often get implemented before they’re properly vetted.”
The research in this abstract is just the start of a larger project: applying this analysis to all 271 MIPS measures across 28 specialties and then making the results public. The team intends to create a searchable tool that physicians can use to make informed decisions about which quality measures are most appropriate for their own practices.