Curbside Consult with Dr. Jayne 12/15/25
A friend reached out this weekend to ask my opinion about the risks of plugging medical information into ChatGPT and other publicly available AI tools. She wanted to know if I agree with a recent New York Times article about it.
My first concern is with the accuracy of the medical information that is being fed in. My own records have contained a variety of misinformation in the last several years, including documented findings from exams that didn’t occur, incorrect diagnoses, and at least one document that was scanned into the wrong chart.
Smaller errors have also occurred, such as dictation and transcription inaccuracies that weren't caught in editing. Although they don't materially change the content of the record, I wouldn't want them taken out of context.
The article starts with a scenario where a patient receives abnormal test results. She is “too scared to wait to talk to her doctor,” so she pastes the lab report into ChatGPT. It tells her that she might have a pituitary tumor.
This is a prime example of the unintended consequences of giving patients access to their lab results before the ordering physician reviews them. It’s the law, and patients have a right to their information, but it can be harmful to patients in some circumstances. I’m glad to see care delivery organizations giving patients the choice of receiving their results before or after they are interpreted by the care team.
Another scenario involved a patient uploading a half-decade of medical records and asking questions about his current care plan. ChatGPT recommended that the patient ask his physician for a cardiac catheterization.
The procedure was performed and the patient did have a significant blockage. However, it's difficult to know what the outcome might have been had the original care plan been followed. The write-up of the scenario didn't include any discussion of how things went when the patient pushed for the procedure, or whether other ramifications, such as insurance issues, resulted from pursuing a higher level of intervention.
Most of the patients I see don't fully understand HIPAA. They think that any kind of medical information is somehow magically protected. They don't know what a covered entity is or what role it plays in protecting information. They give away tons of personal health information daily through fitness trackers and other apps without knowing how that information is used or where it goes.
I personally wouldn't want to give my entire record to a third party by uploading it to an AI tool. I don't know how the tool handles de-identification, and I'm not about to spend hours reading a detailed Terms and Conditions or End User License Agreement. Based on the number of people who share their information in this way, though, it's clear that many aren't worried about the risks.
One of the professors who was interviewed for the article noted that patients shouldn’t assume that the AI tool personalizes its output based on their uploaded detailed health information. Patients might not be sophisticated enough to create a prompt that would force the model to use that information specifically, or might not be aware of instructions within the model to handle that kind of information in a certain way.
It can be risky to assume that you will receive a response that is tailored specifically to you, especially since much of the medical literature looks at how disease processes occur across populations rather than in an individual.
The comments on the article are interesting. One cautioned users to consider using multiple models, asking the same questions, and having the models evaluate each other in order to make sure the output is valid. I can’t see the average patient spending the time to do that.
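That cross-checking workflow is concrete enough to sketch in code. Here is a minimal illustration of what it might look like, assuming the OpenAI and Anthropic Python SDKs are installed and API keys are set in the environment; the model names, question, and critique prompt are placeholders of mine rather than anything the commenter specified.

```python
# Hypothetical sketch: ask two different models the same question, then have
# each model critique the other's answer. Model names and prompts are placeholders.
from openai import OpenAI
import anthropic

QUESTION = "Explain what a mildly elevated prolactin level on a lab report can mean."

openai_client = OpenAI()               # expects OPENAI_API_KEY in the environment
claude_client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

def ask_openai(prompt: str) -> str:
    resp = openai_client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_claude(prompt: str) -> str:
    resp = claude_client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

# Step 1: pose the same question to both models.
answer_a = ask_openai(QUESTION)
answer_b = ask_claude(QUESTION)

# Step 2: have each model review the other's answer and flag disagreements.
critique_prompt = (
    "Another assistant answered the question below. List any statements you "
    "believe are wrong or unsupported, and say what you would want verified "
    "by a clinician.\n\n"
    f"Question: {QUESTION}\n\nAnswer to review:\n"
)
review_of_a = ask_claude(critique_prompt + answer_a)
review_of_b = ask_openai(critique_prompt + answer_b)

print("Answer A:\n", answer_a, "\n\nCritique of Answer A:\n", review_of_a)
print("Answer B:\n", answer_b, "\n\nCritique of Answer B:\n", review_of_b)
```

Even in this stripped-down form, the approach requires two accounts, two API keys, and some comfort with prompting, which is exactly why the average patient is unlikely to spend the time.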
Others talked about how they’ve used ChatGPT to drive their own care. One commenter mentioned that she also used it to research care for her pet and to make adjustments to the regimen prescribed by her veterinarian.
Concerns were also expressed about the possibility of bias and advertising creeping in, especially in discussions of particular medications that are still under patent.
Several readers shared stories about AI tools giving wildly inappropriate care recommendations that could have been harmful if patients hadn’t done additional research on the suggestions. One specifically mentioned the AI’s “mellow, authoritative reassurance of the answers, in a tone not different from talking to a trusted and smart doctor friend” despite being “flat wrong on several points.”
Another reader mentioned that tools like ChatGPT formulate their answers from materials that they find online. Unless you specifically ask for citations, it's difficult to know whether the information is coming from a medical journal, from an association dedicated to patients with a specific condition, or whether it was simply made up.
Readers also called for certification of models that are being used for medical advice. One noted, “My doctor had to get a degree and be licensed. If he messes up bad enough, he can lose that license. There should be procedures for evaluating the quality of chatbot medical advice and for providing accountability for mistakes. Medical conversations with them aren’t like chatting with your neighbor about your problems.”
I hadn’t thought about it that way. It’s a useful idea that I may use when talking to patients who have been using the tools. The information they receive may or may not be better than what they would get over the fence from a neighbor, but it’s difficult to know.
One comment noted that since physicians are using these tools to do their jobs, it’s only fair that the patients have access as well. A follow-up comment noted that the writer “walked in on new residents Googling a patient’s symptoms.”
It makes one wonder how these tools will impact graduate medical education. Is the next generation of physicians building their internal knowledge and recall skills in the same way as previous generations? If they’re not, it’s going to be a rude shock the first time they have to live through a significant downtime or outage event.
It will also be interesting to see how board exam pass rates for physicians who trained in the post-AI era compare to those of us who didn't have access to those tools.
What do you think about patients feeding their medical information into LLMs? Providers, under what circumstances would you recommend it? Leave a comment or email me.
Email Dr. Jayne.

How do you even know what to believe, LLM or not? Are vaccines still good? No one seems to have a definitive answer these days.
While I'm a big Dr. J fan, I think she's heading a bit into medical paternalism here. We heard very similar things about patients using the internet from the 1990s onwards, but those patients were always ahead of their doctors, and a better-informed patient is a healthier patient. Pretty quickly, patients self-organized and started helping each other. Of course this doesn't mean there isn't disinformation out there, much of it (ahem) led by doctors who should know better, but in general LLMs are incredibly helpful and mostly very accurate. Yes, they require a little skill and common sense to use, but I've used several for my own medical condition and had everything verified by my GP and two specialists.
And of course this piece assumes both access to doctors and responsiveness. In my case, I received an imaging report in MyChart and put it into ChatGPT, Inciteful Med, and Claude that day. (I also messaged some specialists I know, who confirmed what the LLMs told me.)
I am still unclear what the FORMAL response to my results would have been. I never heard from anyone at the imaging center or the related specialty group after my image was taken, other than getting the report in MyChart. I only heard from my GP after I sent him the summary of the report, and I was only able to get an appointment with him several MONTHS later.
Now this wasn't an urgent case (although the condition is serious enough to need follow-up), so maybe the response would have been different if it had been. But consider that many people do not have access at all in this or many other health care systems, and that access to LLMs and other knowledge bases delivers immediate and cheap interpretation. Yes, there's a chance it may be wrong and may be biased. But that's also the case for the health care system as a whole.
Net net they are a huge plus for patients and getting better all the time.