April 10, 2023

Healthcare Needs to Slow-Roll Fast-Moving ChatGPT
By Jay Anders, MD

Jay Anders, MD, MS is chief medical officer of Medicomp Systems of Chantilly, VA.


Now that the initial hype surrounding the chatbot ChatGPT has peaked or perhaps plateaued, its strengths, weaknesses, and applications are being scrutinized.

Perhaps the most visible recent demonstration was the AI tool correctly answering roughly 60% of questions on the United States Medical Licensing Examination (USMLE), a score at or near the exam's passing threshold. This raised a number of concerns about how the technology could, and should, be used in healthcare.

Granted, as an AI language model, ChatGPT has a number of applications in healthcare today, including administrative tasks, triaging patient inquiries, and performing preliminary analysis of medical data. However, ChatGPT is not a trained, certified medical professional and should never be relied upon for clinical guidance or diagnosis. Just like a Google or Bing search, it can provide limited general health information, but it is certainly not a substitute for professional medical advice or treatment.

As a physician, my primary concern with ChatGPT and other large language AI models is that patients accessing the technology will begin to distrust the advice of medical professionals when a disagreement occurs.

Here’s an example of how such a disagreement can go awry. Years ago, a patient came to our practice and told me she wanted to feel like ‘that guy surfing in a wheat field’ in a popular ad for an allergy medication.

When I inquired about her allergy symptoms, she said she had none, but argued that the drug would help her anyway. When I would not write the prescription, she switched to one of my practice colleagues. My colleague asked me why the patient was making the change, and I told her. She later revealed that the same patient had argued with her as well and then moved on to the clinic down the street.

I am a staunch advocate of transparent patient information that is accurate and science based. In this case, a little knowledge could be a dangerous thing. At the time of the dispute, the patient was taking a medication that would interact with this antihistamine and cause a severe reaction.

Although ChatGPT and AI weren’t available at the time of this encounter, the danger is clear. There is a genuine risk that some patients, particularly those without access to primary care or those trying to avoid the inconvenience or expense of an office visit, might rely on AI technology like ChatGPT for medical guidance. This could lead to incorrect self-diagnoses, misinterpretation of symptoms, and any number of potentially harmful consequences. It is essential for consumers and patients to understand the limitations of AI in healthcare and always seek professional medical advice for their health concerns.

AI and the role of the clinician

What is the clinician’s role in this learning curve? Healthcare providers (and naturally, developers of AI solutions) should emphasize the importance of using AI as a supplementary tool, not as a substitute for professional medical care.

The real issue is the lack of reliable, trustworthy information for patients. Patients, especially those in the rare disease community or those with complex conditions, can’t advocate for their own health and care if they don’t know anything about the condition they are battling. Reliable academic medical information isn’t freely or easily available to them, so they often rely on what they find on the internet to supplement what their doctors tell them, for peace of mind and, in some cases, survival. The patient advocacy community calls the burden imposed by this lack of reliable information “information toxicity.”

That said, patients are already using AI to self-triage, so it’s up to the medical and technology communities to establish parameters that prevent people from using the technology in lieu of trained medical professionals, or to educate them on how to use it safely. Ultimately, both communities should be working to make the AI better at the task.

In my experience as a physician, I’ve encountered many patients who consider themselves quasi-medical experts and excellent researchers. Still, some patients don’t particularly care if the information they unearth is accurate. They just don’t want to feel left in the dark about their symptoms. After all, a wrong answer is still an answer.

Overall, patients want and need to be collaborators in their own care, and given the state of available information, they are moving forward in the best way open to them. Unfortunately, the burden of correcting misinformation falls on the physician, and that duty will need to be written into the job descriptions of physicians and nurses going forward. With technologies of questionable, though improving, accuracy on the rise, there is no choice.

The responsibility falls on health systems to educate patients on how to use these technologies, to point them to more reliable resources for research, and to regularly share population health information with their communities to combat disinformation. Additionally, efforts should be made to ensure equitable access to quality healthcare for all, reducing the reliance on AI technologies for primary medical guidance.

Harnessing AI to supplement clinical decision support

Looking back at those USMLE licensing exams, consider this. The exam questions are written discretely: “A patient presents with X, Y, and Z. What is the diagnosis?” Each question is based on a fixed set of facts and is often multiple choice. Humans do not present that way. Consider a 65-year-old with high blood pressure, elevated cholesterol, diabetes, osteoarthritis, and spinal stenosis. That is not a single question; it’s multiple interacting conditions. Physicians are trained to weigh those conditions together, because a treatment for any one of them may exacerbate another. An exam does not approach it this way.

Physicians need to learn how to use AI to augment their practice, knowledge, and skill, not the other way around. Harnessing AI as a supplement to clinical decision support is a promising option.

For now, ChatGPT is out there, and it will be used, sometimes for medical advice. That’s all well and good until it makes a mistake or doesn’t surface something of importance. Meanwhile, there are technologies in use that work with clinicians, in their workflow, and present clinically relevant information regarding conditions in a way that mirrors the way they think and work.

The human element is, by necessity, still very much at the center of healthcare. So, for now, let’s slow the roll on ChatGPT. Let it mature. Crosscheck it. See how it evolves as its models are further trained and deepened. The technology holds tremendous promise, but is still in its infancy.




Currently there are 2 comments on this article:

  1. Medical science is based on a strong foundation of knowledge about human anatomy and how it interacts with the environment. Why a harmless patient who self-diagnoses riles up a board-certified physician always puzzles me. You can’t whip people into believing in medicine; only a strong and ethical practice of it will earn their implicit trust. I hope AI is able to remove the harmful effects of capitalism on medical science. Profits and capitalism eat away at the very ethos of medical science. AI should also help reduce dependency on popping unnecessary drugs.






