
Curbside Consult with Dr. Jayne 12/2/24

December 2, 2024 · Dr. Jayne


This weekend marked the two-year anniversary of the debut of ChatGPT. This seems to be a good time to reflect on where generative AI has taken us during that interval.

When it initially launched, there were quite a few worries about AI becoming sentient and taking over the world, but it seems that we’ve been spared that. I’m not sure you’re capable of taking over the world when you can’t generate pictures of humans that have the correct number of digits per hand, so maybe we can use that as a benchmark for how worried we should be about generative AI coming after us.

Although ChatGPT is the original, there are plenty of competitors in the market. The majority of physicians I encounter cite using ChatGPT, Microsoft Copilot, Google Gemini, Meta Llama, or Perplexity. Perplexity has been on the rise among my colleagues at the virtual water cooler, although I personally think it’s the least capable based on the short list of medical searches that I use to kick the tires on these models over time. The last time I tested Perplexity, it gave me a clinical recommendation that was 180 degrees from the standard of care for a patient with a particular genetic variant, which, if followed, would likely have led to negative outcomes, up to and including a preventable death.
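
For anyone who wants to do that kind of periodic tire-kicking in a more repeatable way, even a small script that runs the same fixed list of questions against each model and saves timestamped answers is enough. The sketch below is only an illustration: the prompts are placeholders, and ask_model() is a hypothetical stand-in for however you actually reach each model (a chat window, a vendor API, or something else), not a real library call.

```python
import json
from datetime import datetime, timezone

# Hypothetical stand-in for however each model is actually queried
# (chat window, vendor API, etc.). Replace with a real call before use.
def ask_model(model_name: str, prompt: str) -> str:
    return f"[{model_name} answer to: {prompt}]"

# A short, fixed list of clinical test questions (placeholders here),
# so the same prompts can be rerun against each model over time.
TEST_PROMPTS = [
    "Placeholder clinical question 1",
    "Placeholder clinical question 2",
]

MODELS = ["ChatGPT", "Copilot", "Gemini", "Llama", "Perplexity"]

def run_eval() -> None:
    results = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "answers": [
            {"model": m, "prompt": p, "answer": ask_model(m, p)}
            for m in MODELS
            for p in TEST_PROMPTS
        ],
    }
    # Append each run to a log so answers can be compared across months.
    with open("model_eval_log.jsonl", "a") as f:
        f.write(json.dumps(results) + "\n")

if __name__ == "__main__":
    run_eval()
```

Judging the saved answers against current guidelines is still a human job; the script just keeps the comparison consistent from one check to the next.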

Healthcare organizations that see the risk AI can bring to our environment are joining together to provide guidelines for the responsible development and use of AI in healthcare. The Coalition for Health AI (CHAI) is looking at safe and equitable implementation of AI in healthcare and has information on model evaluation and standards on its website. Google, Microsoft, and Amazon are among the founding members, as are care delivery organizations, academic centers, professional organizations, retailers, payers, and standards organizations. The Coalition recently hosted CHAI on the Hill Day to educate lawmakers on healthcare AI, although it sounds like the event was heavy on developers and industry folks and light on care delivery organizations.

Care delivery organizations are also doing their own deep dives into AI, including Mass General Brigham, which recently announced its Healthcare AI Challenge Collaborative. Additional members include Emory Healthcare, the University of Wisconsin School of Medicine and Public Health, the American College of Radiology, and the University of Washington School of Medicine. Researchers will have access to an environment that includes AI solutions to “assess for effectiveness on specific medical tasks, such as providing medical image interpretation, in a simulated environment.” Users can provide feedback and the Collaborative is planning to use a crowdsource methodology for healthcare professionals “to create continuous, consistent and reliable expert evaluations of AI solutions in medicine.”

The Challenge will look first at radiology-related use of AI, which makes sense given that AI has been used in varying degrees in that field for years. It’s important to understand that fact, especially given the scare factor behind the use of the AI label since the emergence of ChatGPT. In my conversations, I find that people don’t really understand that there are different types of AI, many of which have been in use for a long time across a variety of industries. It’s only generative AI that is relatively new to the dance, but it has unfortunately triggered the creation of AI policies and AI review committees that have the chance to become cumbersome if they can’t differentiate between established low-risk AI solutions and higher-risk generative ones.

When I have this conversation with people, I point out the kinds of AI that we’ve all grown to depend on as examples of why not all AI is bad. These include spam filters, fraud detection and identification of suspicious transactions, sales forecasting, behavior analysis, and predictive models for a variety of things, including public health.
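
Most of those workhorse examples are classical machine learning rather than generative AI. A spam filter, for instance, can be a simple supervised classifier trained on labeled examples. The toy sketch below uses scikit-learn with made-up training data, purely to illustrate the category of “AI” that has been running quietly in production for years.

```python
# A toy, non-generative "AI" example: a naive Bayes spam filter.
# The training data here is invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "WIN a FREE prize, click now",
    "Limited time offer, act fast",
    "Meeting moved to 3 pm tomorrow",
    "Please review the attached clinic schedule",
]
train_labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features plus a naive Bayes classifier: decades-old,
# well-understood techniques with nothing generative about them.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["Free offer, click to claim your prize"]))   # likely 'spam'
print(model.predict(["Can we move the staff meeting to Friday?"]))  # likely 'ham'
```

The same pattern of fixed inputs, a trained model, and a predictable kind of output covers fraud detection, sales forecasting, and most of the other predictive tools that have been quietly doing their jobs for years.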

In my workplace travels, I’ve seen some of those go awry. One organization that I was consulting for had its email spam filter dialed up so high that anything from an outside address immediately went to junk mail, with no way to add senders to a safe list. I asked for an in-house email address so that I could work effectively, and it took more than a month to get that provisioned. That kind of inertia didn’t make for a productive consulting environment, so my work with them was short lived.

Other health systems have jumped into creating AI centers to test and develop tools. New York’s Mount Sinai has opened the Hamilton and Amabel James Center for Artificial Intelligence and Human Health, which focuses on patient care such as diagnosis and treatment. Vanderbilt University Medical Center is creating the AI Discovery and Vigilance to Accelerate Innovation and Clinical Excellence center. That’s definitely a mouthful and doesn’t appear to be any kind of acronym or initialism, so I wonder if the name will be whittled down to something punchier. Hartford HealthCare is creating a Center for AI Innovation in Healthcare that includes research, development, education, training, and the ethical and regulatory aspects of AI.

As a clinician, the biggest risk I see from AI in healthcare is for frontline clinicians who don’t have a background in clinical informatics or an understanding of the potential pitfalls of generative AI. These folks are highly likely to use non-medical AI solutions for clinical care support, even though those solutions carry plenty of disclaimers saying they shouldn’t be used that way. Get a group of physicians together and they’ll talk openly about what they’re using and how they’re using it, and there’s often little awareness of the risks.

These physicians are the same ones who are likely not to proofread their AI-generated notes before they go out, but then again, they’re also the ones who didn’t read their dictated notes either. They also have a high likelihood of using templated notes without updating them consistently for the patient in front of them, so it all points to a pattern of behavior. Still, there are too many clinicians taking this cavalier attitude for me to be comfortable with their ability to effectively and safely incorporate additional AI solutions into patient care.

I’m not worried about AI taking over my life, at least in the short term. No online presence is going to come into my house and create delightful baked goods such as the dinner rolls that I crafted for Thanksgiving. I appreciate that AI can make some tasks faster so that I have time for things like baking and creating, but there is still plenty of busy work that I’d like to offload to AI sooner rather than later, and I wish developers would get to work on that.

What are your favorite holiday foods? Care to share a recipe? Leave a comment or email me.

Email Dr. Jayne.




There are 3 comments on this article:

  1. I do wonder about possible issues with insurers basing their decisions to approve medical care on erroneous AI documentation. I can see that happening… 🙄

    • Bit of a stretch ignoring the I in innovation just to reverse engineer the predetermined acronym into words. Replace “to Accelerate” with “for” and it could have been ADVICE.






