The risks of rushing to incorporate AI into health care
When people think of artificial intelligence (AI), they usually imagine something futuristic, such as robots taking over the world. But AI in health care in many ways is already here: wearable sensors and trackers, virtual diagnostic systems, and other data analysis technologies have shown potential to help doctors diagnose diseases, read scans, and make decisions about treatments.

This exciting potential has made health care AI a fast-growing enterprise. In 2017, Americans invested nearly $100 million in health care AI companies, about three times the amount invested in 2011. This excitement and financial boom have led some health care leaders to hail AI as a “revolution” in health care. Others, however, are more skeptical.

A recent piece by Liz Szabo in Scientific American explores both the potential benefits and serious risks of rushing to incorporate AI into patient care. A primary concern is the lack of oversight of AI in health care. While prescription drugs must be approved by the Food and Drug Administration (FDA) before they are marketed and sold, most health care AI does not require FDA approval as long as it is “similar” enough to a product already on the market. (Sound familiar? It’s the same problematic process the FDA uses to approve new medical devices.)

Silicon Valley’s preference for speed over perfection is well known, and without sufficient oversight from the FDA, patients and health systems are exposed to serious risks.

As AI becomes more popular in health care, clinicians should take the opportunity to learn how these products work and what risks they carry, and to push back on their unregulated use, for the sake of their patients.

“While it is the job of entrepreneurs to think big and take risks, it is the job of doctors to protect their patients,” said Dr. Vikas Saini, quoted in Scientific American.