
Leveraging AI to reduce health disparities: A closer look at the possibilities

To explore the implications of segregated hospital markets and learn from hospitals taking steps to prioritize racial inclusivity, join us for our September 19th event, America’s Most Racially Inclusive Hospitals, 2023 at 1:00 PM EST.

Artificial intelligence (AI) has become a key tool for industries to boost productivity and reduce administrative burdens. As health equity has become a central focus for hospitals and other healthcare institutions, how could AI help – or hinder – these efforts?

What is AI?

At its core, AI operates in a way similar to humans. Just as we read and consume information, pick up patterns, learn from experience, and solve problems, computers can mimic these behaviors. If you’ve ever communicated with a chatbot, talked to Siri or Alexa, or used autocorrect and other text editors, you’ve made use of artificial intelligence.

Some AI programs follow pre-determined rules and outcomes (traditional, or rule-based AI). For example, a rule-based AI could be a program that flags certain patients based on predefined symptoms and risk factors present in their medical record.
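As a rough sketch of what rule-based flagging looks like in practice, consider the toy program below. The record fields, conditions, and thresholds are invented for illustration; a real system would use clinically validated criteria.

```python
# A minimal sketch of rule-based AI: the rules are written by hand,
# not learned from data. All fields and thresholds are hypothetical.
def flag_patient(record):
    """Return True if the record matches predefined high-risk rules."""
    high_risk_conditions = {"diabetes", "copd", "heart failure"}
    # Rule 1: any predefined high-risk condition on the problem list
    if high_risk_conditions & set(record.get("conditions", [])):
        return True
    # Rule 2: age and blood-pressure cutoffs chosen for illustration
    if record.get("age", 0) >= 65 and record.get("systolic_bp", 0) >= 160:
        return True
    return False

patients = [
    {"id": 1, "age": 70, "systolic_bp": 165, "conditions": []},
    {"id": 2, "age": 40, "systolic_bp": 120, "conditions": ["diabetes"]},
    {"id": 3, "age": 50, "systolic_bp": 118, "conditions": []},
]
flagged = [p["id"] for p in patients if flag_patient(p)]  # patients 1 and 2
```

The defining feature is that every decision can be traced back to a rule a person wrote down, which makes this approach transparent but rigid.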

Other AI programs are provided with large amounts of data to “train” the computer, allowing it to develop a knowledge base that can evolve and expand as it is introduced to more information. When posed with questions or requests, these AI systems comb through their knowledge base to search, synthesize, and report information. These are known as data-driven, or machine-learning, algorithms. Some programs use both approaches together.
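To contrast with the rule-based example, here is a minimal sketch of the data-driven approach: instead of hand-written rules, the program estimates its own decision boundary from labeled examples. The features, labels, and numbers below are invented for illustration, and the method shown (a simple nearest-centroid classifier) is just one of many ways machines learn from data.

```python
# A toy machine-learning sketch: learn the average feature vector
# ("centroid") of each class from training data, then classify new
# cases by whichever centroid they sit closest to.
def train_centroids(examples):
    """Learn the mean feature vector for each label from training data."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose learned centroid is closest."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical training data: (features = [age, num_conditions], label)
training = [([72, 3], "high-risk"), ([68, 2], "high-risk"),
            ([35, 0], "low-risk"), ([29, 1], "low-risk")]
model = train_centroids(training)
print(predict(model, [70, 2]))  # prints "high-risk"
```

Notice that no rule was ever written down: the “knowledge” lives entirely in the training data, which is exactly why biased data produces biased predictions.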

How healthcare systems are using AI

Much of healthcare’s use of AI has focused on record-keeping, clinical research, and patient diagnoses and linkages to care. Leveraging AI has allowed medical providers to create individualized treatment plans, predict and map epidemics, flag certain patients for high risk of COVID-19 complications, and take notes during visits automatically.

For example, Microsoft and Epic – the electronic health record (EHR) platform – have partnered to use AI to streamline clinical summarization and documentation efforts. The Mayo Clinic will utilize a Google cloud generative AI tool to provide staff with immediate and expansive access to clinical information and research. And on the health equity front, companies are popping up to promote AI tools that identify and measure health disparities and track progress on health equity metrics.

Efficiency versus accuracy

Although AI has great potential, these programs have been criticized for exacerbating structural and systemic inequities. AI algorithms are generated within the confines of structural inequities that devalue communities based on aspects of their identities, including race, ethnicity, gender, and sex. AI’s knowledge base is then infused with these biases.

Here’s a real-world example of how that happens: Health systems were using an algorithm to predict which patients had the greatest health needs. The model used healthcare spending as a proxy for illness, under the assumption that patients with the highest healthcare utilization had the greatest need. Researchers later found that there was substantial bias in the model – because Black patients were less likely to be able to access care (even when they needed it), they were deemed to have lower health needs. As a result, these patients were getting overlooked, with only 18% receiving additional help when 47% needed it. 
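The mechanism behind this failure can be captured in a few lines of code. The simulation below is purely illustrative (the group names, need values, and access factors are invented, not drawn from the study): two groups have identical underlying health needs, but one group faces access barriers that suppress its recorded spending, so a spending-based risk score systematically underrates it.

```python
# A toy simulation of proxy-label bias: spending is used as a stand-in
# for health need, so barriers to accessing care look like good health.
# All numbers are hypothetical.
def risk_score_from_spending(spending):
    # The flawed model: spending is treated as a direct proxy for need.
    return spending

# Each patient: (group, true_need, access_factor). An access factor
# below 1.0 means barriers reduce how much care the need translates into.
patients = [
    ("group_a", 10, 1.0), ("group_a", 5, 1.0),
    ("group_b", 10, 0.5), ("group_b", 5, 0.5),
]

scores = {}
for group, need, access in patients:
    spending = need * access  # access barriers suppress recorded spending
    scores.setdefault(group, []).append(risk_score_from_spending(spending))

avg = {g: sum(v) / len(v) for g, v in scores.items()}
# Both groups have the same average true need (7.5), but the proxy
# scores group_b as half as "sick" -- so its patients get overlooked.
```

The bias here is not in the arithmetic but in the choice of label: the model faithfully predicts spending, and spending faithfully reflects unequal access.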

This is not an isolated incident. AI has proven far more successful at diagnosing cystic fibrosis than sickle cell disease, a condition that disproportionately affects Black communities and suffers from an overall lack of research, despite the two conditions’ similar genetic nature and severity. A study of AI diagnostic algorithms for chest radiography found that underserved populations (which are less represented in the data used to train the AI) were more likely to be underdiagnosed by the AI tool. Researchers at Emory University also found that AI can detect patient race from medical imaging, which has the “potential for reinforcing race-based disparities in the quality of care patients receive,” according to Emory radiologist Judy Gichoya.

Technology with accountability

Having identified the ways AI can reinforce inequities, we must prioritize the strategic use of AI. Organizations and researchers alike, including hospital systems and policymakers, have taken steps to hold technology accountable and ensure AI is used ethically.

Moving forward, here are some strategies that organizations can use to prevent AI from driving more disparities:

Education

Regulation

Research

As we continue to leverage AI as a tool of efficiency and productivity, we must be intentional about its use and anticipate the impact it will have on society. It’s time that the healthcare system at large takes the necessary steps to use the power of AI for good.
