
Why AI? Questioning the role of artificial intelligence in health care

Artificial intelligence (AI) in health care is a fast-growing business. In 2017, Americans invested nearly $100 million in health care AI companies, about three times the amount invested in 2011. While we don’t expect robots to replace doctors anytime soon, new methods of data analysis and pattern recognition have the potential to help doctors diagnose diseases, read scans, and make decisions about treatments. Many health care leaders have called the AI boom the start of a “revolution” in health care; others, however, are more skeptical.

The data problem

One issue is that these algorithms depend on having lots of real-world data, which they use to recognize patterns and make predictions. Although health systems have been using electronic health records (EHRs) for decades, these data usually cannot be shared and aggregated across EHR systems because the records are not interoperable. Clinical trial data are rarely collected and submitted in a standardized format, making them difficult to share and move. Collecting data is so difficult that it took Google a year to extract about 1,700 patient cases from medical records to test its algorithm for lung cancer screening.

Can AI change behavior?

But even with the best data, how much will AI do for doctors and patients? In a recent JAMA Viewpoint, Dr. Ezekiel Emanuel, oncologist and bioethicist at the University of Pennsylvania, and Dr. Robert Wachter, professor of medicine at the University of California, San Francisco, explain why they don’t think AI developments on their own are going to significantly change the practice of medicine or health outcomes.

“A narrow focus on data and analytics will distract the health system from what is needed to achieve health care transformation: meaningful behavior change,” write Emanuel and Wachter. Just because research shows that a certain practice is best for patients does not mean that clinicians will do it. As we know from the continuing prevalence of waste, overuse, and underuse, changing clinician behavior is tough. Procedures and tests not recommended by specialty groups, such as episiotomies, are still performed at high rates in some hospitals. It is unlikely that AI decision-making tools will get doctors to change their behavior when years of research and educational campaigns have been unable to do so on a large scale.

AI also does little to change patient behavior, which is much more likely to affect health than what happens at the doctor’s office. As other clinicians have pointed out, knowing one’s risk for disease has not been shown to change patient behavior. Why would a patient decide to quit smoking after learning from an AI prediction that they have a six-fold increased risk of lung cancer, if the 20-100x greater risk of lung cancer from smoking hasn’t already convinced them to quit?

Rather than invest in tools to identify patterns and predict disease, AI efforts should focus on “thoughtfully combining the data with behavioral economics and other approaches to support positive behavioral changes,” Emanuel and Wachter write.

AI and health disparities

Another potential downside to AI lies in its power: the ability to recognize patterns and predict outcomes. Racial and ethnic health disparities are embedded in our health system and in the data we collect, meaning that AI would pick up these patterns and likely reproduce them.

“Because A.I. is trained on real-world data, it risks incorporating, entrenching and perpetuating the economic and social biases that contribute to health disparities in the first place,” writes Dr. Dhruv Khullar in The New York Times. We’ve already seen this happen in AI programs that assess risk in criminal justice sentencing.

“In medicine, you’re taught to stereotype,” said Dr. Damon Tweedy, associate professor at Duke Medical School and author of Black Man in a White Coat, at the Atlantic Pulse conference. Doctors are often taught to assume things about patients based on their age, race, and gender. These assumptions are then written into the patient’s medical record and perpetuated in future visits with other doctors.

Doctors like Tweedy are trying to stop this pattern by recognizing their biases in the moment, and by training other doctors to do the same. However, as Khullar points out, AI could make these biases “automated and invisible.” We may “begin to accept the wisdom of machines over the wisdom of our own clinical and moral intuition,” he warns.

The runaway train

Although AI has a long way to go before becoming a mainstream part of medical practice, some algorithms are already being implemented in direct-to-consumer apps, with little to no regulation. For health apps that seek to diagnose patients, the FDA has declined to regulate those deemed “low risk,” writes Michael Millenson, adjunct associate professor of medicine at Northwestern University, in The Conversation.

This is important because about one third of adults seek a diagnosis online, yet the effectiveness of consumer-facing diagnostic apps is highly inconsistent. According to a 2018 review of the evidence on DTC diagnostic digital tools, the evidence base is “sparse in scope, uneven in the information provided and inconclusive with respect to safety and effectiveness, with no studies of clinical risks and benefits involving real-world consumer use.” The evidence that is available shows wide variation in apps’ functionality, accuracy, safety, and effectiveness. Not exactly a glowing review.

As to why these apps are not being regulated, experts are shaking their heads. “I would love someone to explain to me how, exactly, low-risk is calculated,” said Millenson in an email correspondence. “I guess if you say, ‘This is not medical advice,’ you’re home free?”
