EPtalk by Dr. Jayne 11/6/25
Physicians around the virtual water cooler became excited earlier in the week when we heard that ChatGPT was going to start restricting how it manages medical and legal queries. The headlines were great, including gems like “OpenAI Bans ChatGPT From Giving Medical, Legal, or Financial Advice Over Lawsuit Fears.”
OpenAI clarified its position later in the week, explaining that the system will continue to provide general information on those topics, but it will also refer the user to appropriate professionals. The company also stated that users shouldn’t use the tool for “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
I test drove ChatGPT with several medical questions of my own. I was glad to see that it recommended consultation with a healthcare professional.
Looking at its use from the healthcare provider perspective, however, issues remain. I fed ChatGPT a clinical scenario that was chock-full of Protected Health Information (not from a real patient, of course) and asked it to operate from the persona of a medical resident. It didn’t even blink, giving me a list of initial assessments and interventions to perform. It even offered a more detailed management plan and checklists, and when I asked it to generate those, it included the patient’s name in its response.
ChatGPT isn’t a Covered Entity, so it isn’t subject to HIPAA regulations. Still, the response tells me that the company doesn’t have many physicians on staff guiding its development.
Autumn is upon us, and those of us in the US who observe Daylight Saving Time have shifted our clocks back to standard time. That means some of us will endure weeks of people using the wrong abbreviation when proposing meeting times because they don’t fully understand the difference between “EST” and “EDT” in writing. I tend to take the lazy route and just say Eastern or Mountain for the date in question, which generally avoids the issue.
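For the technically inclined, this is exactly the problem that IANA time zone names solve: a zone like America/New_York resolves to EST or EDT from the date itself, which is what saying “Eastern” accomplishes in an email. A minimal Python sketch of the idea (my own illustration, using only the standard library):

from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

eastern = ZoneInfo("America/New_York")

# The zone picks the correct abbreviation from the date itself.
print(datetime(2025, 7, 10, tzinfo=eastern).tzname())   # EDT (UTC-4)
print(datetime(2025, 11, 6, tzinfo=eastern).tzname())   # EST (UTC-5, after the November 2 fall-back)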
As a side note, given the number of healthcare organizations that operate nationally, include the appropriate time zone when offering meeting times unless you are sure that everyone on the email is in the same one. I wish I had a dollar for every reply I had to send asking, “Are these options Eastern?” rather than being able to simply indicate my availability.
The fall season also brings my annual complaint about the mammogram reminder letters that are sent by the health system where I receive most of my care. Despite spending hundreds of millions of dollars on an upgraded EHR, they still can’t figure out how to run their reminder letters from a report that takes into account whether patients have already scheduled their next study.
In addition to wasting the health system’s money, the letters create anxiety for patients who wonder if their appointment was scheduled incorrectly, inadvertently canceled, or fell victim to some other IT misadventure. I have to log into my patient portal every year to confirm that my appointment is still there, which doesn’t build trust or confidence in the health system.
Speaking of complaints, one of my neighbors reached out for advice on how to handle a negative interaction that she had at a local medical practice. I generally won’t weigh in on the interaction itself or the specific clinical issues, since I know that every story has multiple sides. I’m happy, however, to give advice on how best to provide feedback, since most patients don’t understand the different practice structures in our area (academic practice, private practice, employed practice owned by a health system, employed practice owned by private equity, etc.).
This one threw me for a loop. Although the patient thought she was at a physician-owned private practice, it was actually a private equity operation. The mid-level provider she saw doesn’t have a collaborative relationship with the physician the patient originally asked to see. Even though four physicians were in the office on the day of the visit, the NP’s supervising physician practices in an office 70 miles away and is never physically present at this location.
I’ve seen these kinds of arrangements in rural areas, but not in the city. I recommended feedback to the practice manager and the supervising physician, but the patient still feels like it was a bait-and-switch situation.
I’m familiar with the particular private equity organization that is involved, so I let her know that I’m happy to help when she gets her bill. It will be confusing and will arrive from a name and location that bear no resemblance to the site where she received care. It’s a sad commentary on the complexity of our healthcare system that patients so regularly find it confusing and unsettling.
From Jimmy the Greek: “Re: employees using AI to create fake receipts for expense reports. Companies are using AI to try to catch the fraudsters.” I hadn’t heard about this particular phenomenon, so I quickly went down the search engine rabbit hole to see what kinds of scams people were pulling. We’ve come a long way from the days when taxi drivers handed you a blank paper receipt so you could fill in your own numbers, but dishonesty will always be with us. For most of my career, I’ve reported to other physicians, and it has been interesting to see which ones made a point of commenting on the contents of expense reports. One of my favorite supervisors once mentioned on a team call that too many of us were eating fast food and needed to make some changes to our meal choices.
It sounds like many of the expense report management vendors such as Expensify and Concur are using tools to catch these types of fraud. Coupling those kinds of audits with a company-issued credit card where expenses flow straight to the expense management platform seems like a fairly straightforward way to dramatically reduce the number of incidents.
Traveling employees who like playing the points and miles games don’t like to use a company card, but given the scope of fraud, I can see why organizations might require it. My hospital phased out company credit cards several years ago, but I wouldn’t be surprised if they bring them back based on stories like these. Younger employees missed out on some of the silliness we experienced when filing expense reports, like taping paper receipts to a sheet of copy paper so we could feed them through the fax machine.
From AI Naysayer: “Re: attitudes about peer physicians using AI. Did you see the Johns Hopkins article? I can’t say that I’m surprised. Plenty of people at my institution do dumb things with AI that make them look less competent.” The piece explores the tension between clinicians who are pressured to be early adopters of generative AI technologies and those who are skeptical of its benefits. I found it interesting that the promotional article mentioned the underlying study but didn’t link to it; it’s unclear whether this was intentional or just sloppy writing. Either way, the piece leans toward the conclusion that social stigma may be blocking the growth of AI in healthcare.
It was fairly easy to find the publication in question. It was a small study, with only 276 clinicians participating. They were placed in three groups: one with no AI use, one with AI as the primary decision-making tool, and one using AI for verification only. Participants worked through diabetes care scenarios. The authors found that the verification-only option helped mitigate negative perceptions, but didn’t eliminate them completely. They also noted that the study was simplistic and that more research is needed, including the creation of specific measurement instruments and the examination of behaviors outside of the single participating health system.
Would you be more or less confident in a physician who used generative AI tools to create your plan of care? Leave a comment or email me.
Email Dr. Jayne.
