Can Generative AI Improve Health Care Relationships?

By MIKE MAGEE

“What exactly does it mean to augment clinical judgement…?”

That’s the question Stanford law professor Michelle Mello asked in the second paragraph of a May 2023 JAMA article exploring the medical-legal boundaries of large language model (LLM) generative AI.

This cogent question triggered unease among the nation’s academic and clinical medical leaders, who live in constant fear of being financially (and, more importantly, psychically) assaulted for harming patients who have entrusted themselves to their care.

That prescient article came out just one month before news leaked about a revolutionary new generative AI offering from Google called Gemini. And that lit a fire.

Mark Minevich, a “highly regarded and trusted Digital Cognitive Strategist,” was knee-deep in the issue, writing in a December issue of Forbes: “Hailed as a potential game-changer across industries, Gemini combines data types like never before to unlock new possibilities in machine learning… Its multimodal nature builds on, yet goes far beyond, predecessors like GPT-3.5 and GPT-4 in its ability to understand our complex world dynamically.”

Health professionals have been negotiating this space (information exchange with their patients) for roughly a half century now. Health consumerism emerged as a force in the late seventies. Within a decade, the patient-physician relationship was rapidly evolving, not just in the United States, but across most democratic societies.

That earlier “doctor says, patient does” relationship moved rapidly toward a mutual partnership fueled by health information empowerment. The best patient was now an educated patient. Paternalism gave way to partnership, teams took precedence over individuals, and decision making became mutual. Emancipation led to empowerment, which meant information engagement.

In the early days of information exchange, patients would literally appear with clippings from magazines and newspapers (and occasionally the National Enquirer) and present them to their doctors with the open-ended question, “What do you think of this?”

But by 2006, when I presented a mega-trend analysis to the AMA President’s Forum, the transformative power of the Internet was fully evident: a globally distributed information system with extraordinary reach and penetration, now armed with the capacity to encourage and facilitate personalized research.

Coincident with these emerging technologies, long hospital stays (and, with them, in-house specialty consults with chart summary reports) had become infrequently used methods of continuing medical staff education. Instead, “reputable clinical practice guidelines represented evidence-based practice,” and these were incorporated into a vast array of “physician-assist” products that made smartphones indispensable to the day-to-day provision of care.

At the same time, a decades-long struggle ensued to define patient-privacy policy and to fund the development of electronic medical records, eventually spawning the bureaucratic HIPAA regulations.

The emergence of generative AI, and of new products like Gemini, whose endpoints are remarkably unclear and disputed even among the specialized engineers unleashing them, has created a reality in which (at best) health professionals are struggling just to keep up with their most motivated (and often most complexly ill) patients. Needless to say, the Covid health crisis, and the human isolation it provoked, have only made matters worse.

Like clinical practice guidelines, ChatGPT is already finding its “day in court.” Lawyers for both plaintiff and defense will ask “whether a reasonable physician would have followed (or departed from) the guideline in the circumstances, and about the reliability of the guideline,” whether that guideline exists on paper or on a smartphone, and whether it was generated by ChatGPT or Gemini.

Large language models (LLMs), like humans, do make mistakes. These factually incorrect offerings have charmingly been labeled “hallucinations.” But in reality, for health professionals they can feel like an “LSD trip gone bad,” because the information is derived from a range of opaque, non-transparent sources of highly variable accuracy.

This is quite different from a physician-directed standard Google search, in which the professional opens only trusted sources. Gemini, by contrast, might give equal weight to a NEJM article and to the modern-day version of the National Enquirer. Generative AI outputs have also been shown to vary with the day and with the syntax of the query.

Supporters of these new applications admit that the tools are currently problematic but expect machine-driven improvement in generative AI to be rapid. The tools can also be tailored to individual patients in decision-support and diagnostic settings, and they can offer real-time treatment advice. Finally, they self-update in real time, eliminating the troubling lags that accompanied the original treatment guidelines.

One thing is certain: the field is attracting outsized funding. Experts like Mello predict that specialized applications will flourish. As she writes, “The problem of nontransparent and indiscriminate information sourcing is tractable, and market innovations are already emerging as companies develop LLM products specifically for clinical settings. These models focus on narrower tasks than systems like ChatGPT, making validation easier to perform. Specialized systems can vet LLM outputs against source articles for hallucination, train on electronic health records, or integrate traditional elements of clinical decision support software.”

One serious question remains. In the six-country study I conducted in 2002 (which has yet to be repeated), patients and physicians agreed that the patient-physician relationship came down to three things: compassion, understanding, and partnership. LLM generative AI products would clearly appear to have a role in informing the last two. What their impact will be on compassion, which has generally been associated with face-to-face, flesh-to-flesh contact, remains to be seen.

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside America’s Medical Industrial Complex (Grove/2020).
