Medical insurance companies shouldn’t use AI to deny patients’ claims

Michael Dunn, Guest Commentary//March 14, 2025//


Medical insurance is under a microscope nationally, but most people aren’t aware that insurers use artificial intelligence to review and deny patients’ claims. This practice is causing major, sometimes life-threatening, complications in healthcare, but state legislation could offer patients protection.

AI continues to evolve as a powerful tool that can make processes more efficient, reduce costs and simplify tasks across many industries, but when it comes to healthcare, we need to be especially careful. AI’s growing role in insurance claim denials and prior authorizations poses a serious risk to patient safety and trust in a system that should be there for people in their most vulnerable moments.

That’s where HB2175 comes in. This legislation ensures that medical decisions aren’t left hidden inside an AI “black box,” where patients and doctors have no insight into how or why care is denied. Instead, it guarantees that cases involving medical judgment are reviewed by licensed medical professionals who have the training, experience and ethical responsibility to make those calls.

Insurance denials aren’t new — just ask any patient who has struggled to get the care they need. But what’s changing is how these denials happen. I recently reviewed five downcoded claims from a major workers’ compensation insurance carrier. “Downcoded” means they were reimbursed at a lower rate than what was billed. It was clear these decisions were made by AI. After reviewing them carefully, I found that while two could have gone either way, three were billed correctly and should never have been downcoded. We appealed those claims, but they’re still in limbo. This is the frustrating reality doctors and patients face: an algorithm makes an arbitrary decision, and even when it’s clearly wrong, the process to correct it is slow, difficult and expensive for our practice.

Insurance is supposed to be a safety net, a promise that when medical care is necessary, patients won’t be left facing enormous bills alone. But increasingly, AI-driven systems are making these decisions without the insight or compassion that human judgment provides. Instead of trained professionals weighing the specifics of each case, algorithms determine what gets approved and what doesn’t — often without enough transparency or accountability.

HB2175, championed by Rep. Julie Willoughby, is crucial because AI models aren’t perfect, and they’re only as good as the data they’re trained on. If past data includes biases — whether from demographics, historical claim patterns or the complexity of real-life medical cases — AI will carry those biases forward, potentially putting some patients at an unfair disadvantage. Even more concerning, these systems are designed with one primary goal: cutting costs. When reducing spending takes priority over patient care, the result is more delays, more denials and worse outcomes.

Perhaps the biggest concern is how AI removes the human element from medical decision-making. Algorithms can process vast amounts of data, but they can’t understand the nuances of an individual’s health or the judgment calls that doctors make every day. I recently had a conversation with a national physician leader about this issue. He told me the only way to counter AI-driven claim denials is to preemptively scrub our own billing with AI before we send it out. It struck me how absurd this is — we’re essentially in a battle to see who has the best software to get claims through the system. Instead of doctors spending time on patient care, we’re being forced into a digital arms race against insurance algorithms.

That doesn’t mean AI has no place in healthcare — it absolutely does! AI can be a valuable tool for streamlining administrative tasks, improving efficiency and even assisting with decision-making. However, when it comes to determining whether a patient receives care, human oversight is non-negotiable.

Arizona lawmakers have a real opportunity to protect patients by ensuring AI is used ethically in insurance decisions. Technology should work for people, not against them. As AI continues to shape our world, we must remember that no algorithm can replace the experience, empathy and responsibility of a trained medical professional. The stakes are simply too high.

Dr. Michael Dunn is a family medicine physician with over 27 years of medical experience serving patients in the East Valley. He can be reached at flyingdoc1@yahoo.com.
