Can AI help reduce medical debt? It depends on how hospitals use it…

Artificial intelligence (AI) tools are increasingly being used in healthcare settings, aiding in everything from cancer screening to answering patient questions. In fact, most hospitals already use AI tools for “revenue cycle management” (a fancy way of saying “collecting money from insurers and patients”).

Could AI and other machine learning tools help ameliorate our nation’s medical debt crisis, or will they make the problem worse? It all depends on how hospitals use these tools, write Kelsey Chalmers, PhD, our own Director of Research, Data Science at the Lown Institute, and Christopher W. Goodman, MD, Clinical Assistant Professor of Internal Medicine at the University of South Carolina School of Medicine, in a recent JAMA Internal Medicine viewpoint.

“Ultimately, whether vendor relationships with hospitals improve charity care comes down to a simple question: are these services meant to improve hospitals’ revenue or protect patients from medical debt?” 

-Christopher Goodman, MD, and Kelsey Chalmers, PhD, in JAMA Internal Medicine

Using AI for good

Here’s how hospitals could use the same tool for very different purposes. Goodman and Chalmers give a hypothetical example of a single mother making $35,000 a year who is briefly hospitalized during a mental health crisis. Although she would likely qualify for free care under most nonprofit hospital policies, in this case she does not apply; patients often aren’t told about financial assistance options, or find it hard to complete long applications while managing a health emergency.

Fortunately, the hospital uses an algorithm that automatically screens patients for assistance eligibility based on their income and assets. The hospital proactively informs the patient that she qualifies for free care, which allows her to avoid medical debt and access outpatient mental health services after she leaves the hospital.

The authors point out that many hospitals have language in their financial assistance policies indicating that they use third-party tools to identify patients eligible for charity care. For example, Grady Health in Georgia writes in its policy: “At the time of registration (during address verification), every patient is electronically assessed for a Federal Poverty Level ranking through (presumptive) automated third-party software. If the automated system determines a FPL level between 0 and 400%…the patient is then auto-qualified for the corresponding discount level.” Used this way, AI tools ensure that no patients miss out on the discounts for which they are eligible.
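To make the mechanics concrete, here is a minimal sketch of what presumptive screening like this might look like in code. The poverty-line figures are the 2024 federal poverty guidelines for the 48 contiguous states; the discount tiers and function names are our own illustration, not Grady’s actual schedule.

    # Hypothetical sketch of presumptive charity care screening.
    # The 0-400% FPL auto-qualification mirrors the policy language above;
    # the discount tiers below are illustrative, not any hospital's actual policy.

    FPL_BASE = 15_060          # 2024 federal poverty guideline, household of 1
    FPL_PER_EXTRA_PERSON = 5_380

    def fpl_percent(annual_income: float, household_size: int) -> float:
        """Income as a percentage of the federal poverty level."""
        poverty_line = FPL_BASE + FPL_PER_EXTRA_PERSON * (household_size - 1)
        return 100 * annual_income / poverty_line

    def presumptive_discount(annual_income: float, household_size: int) -> int:
        """Auto-qualify the patient for a discount tier at registration.
        Returns the percent of the bill forgiven (hypothetical tiers)."""
        pct = fpl_percent(annual_income, household_size)
        if pct <= 200:
            return 100   # free care
        if pct <= 300:
            return 75
        if pct <= 400:
            return 50
        return 0         # above 400% FPL: no presumptive discount

    # The single mother in the authors' example: $35,000, household of 2
    print(round(fpl_percent(35_000, 2)))    # 171 (% of FPL)
    print(presumptive_discount(35_000, 2))  # 100 -> qualifies for free care

Run at registration, a rule like this flags the patient for free care before a bill ever goes out.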

When AI makes the problem worse

However, Goodman and Chalmers also note that AI tools can be used to increase hospital revenue rather than prevent medical debt. Take the same single mother making $35,000 a year. What if, instead of identifying her as eligible for financial assistance, the hospital’s third-party tool found her good credit score and history of previously paid hospital bills and decided that she has a “high likelihood of payment”? That label could lead the hospital to pursue payment from her aggressively, putting her in debt.

Some hospitals have concerning language in their financial assistance policies (like “ability to pay” or “propensity to pay”) indicating that they could be using third-party tools to identify patients for collection actions. For example, a health system in Indiana writes in its policy that financial need determination may “include the use of external publicly available data sources that provide information on a patient’s or a patient’s guarantor’s ability to pay” for patients above 200% of the federal poverty guidelines (FPG). The policy also defines a “collect ability score” as “the number assigned to the probability of collecting $50 or more within 12 months on patient balances,” but does not clarify all the ways in which the hospital uses these scores.

Third-party vendors like Waystar have advertised “propensity to pay” tools as a way to help hospitals “better assess when and what a patient can fairly be expected to pay, preparing them to more ably guide patients to fully paying off bills.” Waystar even touts its software’s ability to “automatically configure the frequency, volume, and channels of outreach most likely to be effective with patients based on their past behavior” to “illustrate the most effective path to guide patients to completely paying their bills.”
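For contrast, here is an equally minimal sketch of the logic the authors warn about: the same patient record feeding a “propensity to pay” score that routes the account to collections instead of a charity care screen. Every input, weight, and threshold here is invented for illustration; this is not based on Waystar’s or any other vendor’s actual model.

    # Hypothetical sketch of a "propensity to pay" routing rule.
    # Inputs, weights, and threshold are invented for illustration only.

    from dataclasses import dataclass

    @dataclass
    class PatientAccount:
        credit_score: int       # pulled from an external data source
        prior_bills_paid: int   # count of previously paid hospital bills
        balance: float

    def propensity_to_pay(acct: PatientAccount) -> float:
        """Crude 0-1 score: higher means 'more likely to pay'."""
        credit_component = min(max((acct.credit_score - 300) / 550, 0), 1)
        history_component = min(acct.prior_bills_paid / 5, 1)
        return 0.7 * credit_component + 0.3 * history_component

    def route_account(acct: PatientAccount) -> str:
        """The step the authors warn about: a good score triggers
        aggressive collection rather than a financial assistance screen."""
        if propensity_to_pay(acct) >= 0.6:
            return "pursue payment: frequent outreach, full balance"
        return "screen for financial assistance"

    # The same single mother: good credit and a history of paid bills
    acct = PatientAccount(credit_score=720, prior_bills_paid=3, balance=8_200.0)
    print(route_account(acct))  # "pursue payment..." despite likely eligibility

The point of the contrast is that the inputs are nearly identical to the screening example above; only the objective the tool is tuned for changes.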

“Without regulatory oversight of the use of AI and predictive analytics in charity care, we risk worsening the problem of medical debt in this country.”

-Christopher Goodman, MD, and Kelsey Chalmers, PhD, in JAMA Internal Medicine

Even when hospitals use tools designed to identify patients eligible for assistance, the procedures around these tools can make them more or less effective. The authors point out that some hospitals indicate in their financial assistance policies that they apply AI tools only after the patient has already been billed, which is much less helpful than automatically screening patients when they arrive at the hospital.

The use of AI tools also makes the financial assistance process more opaque, putting distance between patients, clinicians, and the people actually making these decisions. “In the past, when we had questions about a particular patient’s pending application or rejection, we could actually talk to someone at the hospital and get information,” said Goodman. “Now with the hospital using external companies and predictive analytics, it is harder to get clear answers.”

Regulating AI in financial assistance

As policymakers look to regulate AI in areas like medical devices and clinical algorithms, they should also consider guardrails on how hospitals use third-party tools for revenue cycle management. At the very least, hospitals should be transparent about whether they use predictive algorithms related to patient finances, which third-party vendors they use, and which data sources feed these algorithms (e.g., credit scores or previously paid bills).

In an accompanying JAMA Internal Medicine editorial, Maanasa Kona, JD, assistant research professor at the Georgetown University Health Policy Institute’s Center on Health Insurance Reforms, recommends that hospitals be required to “tailor their tools with the goal of ensuring that everyone eligible for financial assistance receives it and is protected from collection actions” and to “attest that the tools being used are in compliance with all applicable federal and state laws.”

However, as Goodman and Chalmers write, “the more fundamental problem is the lack of national standards for charity care programs.” In most states, hospitals are free to define their own standards for financial assistance eligibility, and may add barriers to access such as asset tests, residency requirements, and lengthy application processes. Using AI tools that put patient information through a black box algorithm is a natural next step in a system that already lacks transparency and consistency. 

Tackling the problem of AI and medical debt will require clearer financial assistance policies overall, including national standards for eligibility and care discounts, a standard single-page application to make it easier to apply, and presumptive eligibility requirements to ensure that all eligible patients can access assistance.