A Patient’s Perspective on the Future of Medical AI (Part 1)

Ben Lengerich
4 min read · Oct 26, 2024


Part I: A Patient’s Lament

As a professor focused on AI methods in healthcare, I’ve spent years developing models designed to predict and improve clinical outcomes. Yet, despite my professional expertise, I recently found myself navigating the uncertainties of the healthcare system from a different perspective: as a patient’s advocate.

My wife was hospitalized with unexplained complications after pregnancy, and for weeks we watched as doctors reassured us that her blood pressure, sodium levels, and liver function were all “in range.” Despite this, the complications persisted, and each day felt like we were grasping at straws.

What struck me most was that her care seemed driven by what I began calling “vibes-based medicine” — decisions based on gut feelings and unexplainable combinations of factors, rather than interpretable data-driven insights. This experience made me reconsider the role of interpretability in AI. Previously, I viewed interpretability as essential for biomedical research because understanding causation is a prerequisite to meaningful interventions. But now, seeing how AI will inevitably surpass human capabilities, I began asking myself: How do I envision an AI-based medical system working?

As we sat in the hospital waiting for answers, I couldn’t help but wonder: What role should AI play in advancing medical practice? Will AI help bridge the gap between data and decision-making, or will it introduce a new kind of opacity into an already opaque system?

Handcuffs of current practice: Univariate analyses lead to thresholds and reactive decision-making

In today’s healthcare system, clinicians often rely on univariate thinking — where each health metric is considered independently, as though health can be decomposed into independent components, with each component measured by a single number. This approach makes sense given the vast amount of information clinicians must process, but univariate thinking leads to threshold-based medicine, where treatment decisions are tied to specific cutoffs: a blood pressure reading, a glucose level, or an electrolyte measurement.

This univariate focus overlooks latent components of health that could be better measured by interactions of variables. For example, if high serum sodium is attributed to dehydration, shouldn't chloride be concentrated as well? An observation of high sodium alongside low chloride points to a different explanation, one not accounted for by dehydration alone. Such interactions reveal crucial insights that univariate thinking often misses.
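To make the contrast concrete, here is a minimal sketch in Python. The reference ranges, cutoffs, and the dehydration rule are simplified illustrations I chose for this example, not clinical guidance: each lab value can sit comfortably "in range" while the combination is inconsistent with the offered explanation.

```python
# Illustrative sketch: univariate thresholds vs. a simple interaction check.
# All ranges and cutoffs below are hypothetical examples, not clinical values.

def univariate_flags(labs, ranges):
    """Flag each metric independently against its reference range."""
    return {name: not (lo <= labs[name] <= hi)
            for name, (lo, hi) in ranges.items()}

def interaction_flag(labs):
    """A multivariate check: dehydration concentrates both sodium and
    chloride, so high-normal sodium paired with low-normal chloride is
    inconsistent with a dehydration explanation and deserves a second look."""
    return labs["sodium"] > 143 and labs["chloride"] < 100

ranges = {"sodium": (135, 145), "chloride": (98, 107)}  # illustrative only
labs = {"sodium": 144, "chloride": 99}

print(univariate_flags(labs, ranges))  # both "in range" -> no flags raised
print(interaction_flag(labs))         # True: the combination is suspicious
```

Each metric alone passes its threshold check, yet the pair together contradicts the working explanation. That gap between (1) per-metric rules and (2) the holistic story is exactly where interpretable multivariate reasoning would help.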

During my wife’s hospitalization, I saw firsthand how this approach left us feeling uncertain. Her test results all stayed within “normal” ranges, but the overall trend suggested something was wrong. Numbers were moving in different directions, and the explanations we received didn’t add up. By focusing on isolated metrics, the medical team missed the bigger picture — how those metrics interacted and what they signaled about her overall condition.

Doctors as Black Boxes: Vibes-Based Medicine

As a consequence of this disconnect between metrics and overall condition, the decision-making process turned into a black box. As patients, we were presented with two explanations for my wife's health: (1) univariate, threshold-based rules for individual metrics, and (2) a broader, holistic diagnosis or prognosis provided by the doctor. However, the connection between these two perspectives was never made explicit, leaving us to wonder how the holistic assessment (2) emerged from the individual data points (1).

I began calling this “vibes-based medicine” — where decisions had no clear attribution to any specific data point. This was the same type of black box I recognize in many AI systems. Unfortunately, in healthcare, this opaque process prevents patients and their families from contributing new information to the diagnostic process — if you don’t know what information is important, how can you condense and effectively communicate your life experience? The result isn’t just frustrating, but also inefficient, as patients and their advocates, who often possess key information, are unable to meaningfully contribute.

Fixable Crises: Treating the patient or treating the threshold?

Finally, threshold-based medicine is inherently reactive, focusing on isolated numbers until they cross a threshold and become a crisis. It tolerates slow, creeping issues as long as each isolated metric seems fixable, preferring to delay action until there is a definable problem of a measurement being “abnormal”. This reactive approach — whether by AI models or human clinicians — allows “fixable crises,” such as waiting for kidney failure and then treating it with dialysis, rather than preventing it in the first place. This focus on managing the acute rather than preventing the chronic keeps healthcare in a reactive mode.

From Vibes-Based Medicine to a Proactive System

This experience led me to reflect on the role of AI in transforming healthcare. Does AI have to follow the same path of reactive, threshold-based care? Or can we build AI systems that promote a proactive approach — one that values trends over individual metrics, multivariate analysis over univariate thinking, and transparency over opacity?

The future of medical AI shouldn’t just be about providing more data. It should be about making sense of that data in a way that includes patients and their advocates in the decision-making process. By rethinking how AI systems are designed, we have the opportunity to create a healthcare system that anticipates problems before they become emergencies and works collaboratively with patients to manage their health. This collaborative, patient-centric view is not just an ethical goal, but also a necessity for a system of effective care.

Written by Ben Lengerich

Asst Prof @ UW-Madison. Writing about AI, ML, Precision Medicine, and Quant Econ.