
EPtalk by Dr. Jayne 1/29/26

January 29, 2026 Dr. Jayne

The Journal of the American Medical Association published a research letter this week that looks at how authors are disclosing their use of AI when preparing submissions to professional journals. The JAMA Network has required such disclosures since August 2023. The authors reviewed the data to better understand how AI is being used and disclosed.

The share of papers in which AI use was declared increased from 1.7% to 6%. Common uses were creating drafts, searching the literature, editing language, developing statistical models, and evaluating data. AI use was more likely in Viewpoints and Letters to the Editor submissions than in Original Investigations.

The paper concludes that without a standard for confirming AI use, it’s difficult to know if authors are underreporting. They add that the results may show a greater need for journals to confirm how authors are using AI and whether it’s appropriate and accurate.

Clinician burnout continues to be a major focus for care delivery and professional organizations. One of the top symptoms that I hear about from colleagues is their inability to disconnect in the digital age. Physicians feel that they need to check their inboxes for patient results and respond to portal messages during off hours to keep them from piling up.

A new article in the Journal of Medical Systems describes REDUCE SCREEN (Reducing Work-Related Screen-Time in Healthcare Workers During Leisure Time), a randomized controlled trial. Researchers used a straightforward intervention to examine whether a link exists between clinician wellbeing and the use of work-related apps on personal devices. A cohort of 800 physicians, residents, and nurses was divided into a control group and an intervention group whose members were instructed to take specific steps to reduce after-hours work, such as using out-of-office notifications and removing work apps from personal devices.

They found that after a scheduled weekend off, those in the intervention group had double the reported reduction in stress compared to those who weren’t instructed to make changes in device use. The intervention group also had an overall reduction in screen time compared to the control group. The study was limited by the fact that one-third of participants failed to complete the post-weekend assessment.

The authors plan additional research to look at interventions that force disengagement from work during non-scheduled hours to see if they are linked not only to less stress, but to improved productivity during working hours.

From Home Care: “Re: AI solutions. My daughter’s college is working on AI solutions that could help individuals with cognitive decline live independently longer. This seems like a much better use of AI than some of the options currently out there.”

The article covers a project that brought computer scientists together with occupational therapists to create an AI assistant to help solve this problem. The team captured videos of patients with and without cognitive decline performing a specific task, then created models to identify cognitive sequencing errors during task completion. The system is cheekily named CHEF (Cognitive Human Error Detection Framework) as it looked at the executive functions needed to prepare oatmeal on a stove.

While a camera captured the subject’s movements, occupational therapy students also provided cues about safety concerns or other errors. The system’s vision-language model integrates videos along with text and images to identify both obvious errors and those that are difficult to detect. The team states, “This is an excellent example of applying the cutting-edge AI to a vital health problem with tremendous public health impact.”
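
The article doesn’t include the team’s code, of course, but the “sequencing error” idea is easy to illustrate. Here’s a toy TypeScript sketch, using hypothetical step labels for the oatmeal task, that flags missing or out-of-order actions against an expected sequence; the real system derives the observed actions from video through its vision-language model rather than from a hand-typed list.

```typescript
// Toy illustration only. Step labels are hypothetical, and CHEF extracts
// observed actions from video rather than from a hard-coded array.
const expectedSteps = [
  "fill pot with water",
  "turn on stove",
  "add oats",
  "stir",
  "turn off stove",
];

function findSequencingErrors(observed: string[]): string[] {
  const errors: string[] = [];
  let lastSeenIndex = -1;
  for (const step of expectedSteps) {
    const idx = observed.indexOf(step);
    if (idx === -1) {
      errors.push(`missing step: ${step}`);
    } else if (idx < lastSeenIndex) {
      errors.push(`out-of-order step: ${step}`);
    } else {
      lastSeenIndex = idx;
    }
  }
  return errors;
}

// The stove never gets turned off, and stirring happens before the oats go in.
console.log(
  findSequencingErrors(["fill pot with water", "turn on stove", "stir", "add oats"])
);
// -> ["out-of-order step: stir", "missing step: turn off stove"]
```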

As a family physician who has had many difficult conversations about aging patients who are struggling to remain independent, this is some of the most exciting AI-related work that I’ve seen in recent memory. I hope these types of solutions are a reality by the time I might need them.


HIMSS has announced that the keynote speaker for the upcoming meeting will be actor Jeremy Renner. The announcement promises “a thoughtful look at the intersection of determination, care, and innovation and the impact they can have when people come together in moments that matter most.” Those who register before Friday, January 30 have a chance to win an opportunity to meet him personally.

I did something that I haven’t done in a very long time today. I wrote a paper check to pay for a medical bill. The entire process was frustrating. I received a patient portal message that told me that I had a bill, but I wasn’t able to log in. I thought it was an expired password, but I could access the portal from a different link.

It turns out that the practice operates as two separate entities. They use the same EHR, but each practice has its own patient portal. Going back to the portal that I could access, I saw the billing statement with the header for the other entity.

Clicking the payment link took me to a “page not found” error, so I typed the link manually, with the same outcome. I repeated this process the next day, thinking that maybe it was a site outage, and had the same result. 

I called the number on the bill, only to be told that they can’t take payments over the phone, so I was off to find the checkbook. If providers want to be paid in a timely manner, they need to make sure that their systems actually make it easy for patients to pay.

I received two separate mailings from that practice today. The first was a check, which I assume was mailed by their billing service, that refunded me for an overage for the patient co-insurance portion of a procedure that I had last month. The second was a letter from the practice of the physician who performed the procedure featuring red “Second Notice” stickers to remind me that I was overdue to have the procedure and that they would make no further attempts to schedule it. This right here is US healthcare at its finest.

The American Academy of Pediatrics released its own childhood vaccination schedule this week, breaking with the Centers for Disease Control and Prevention on vaccine guidance. States are also issuing their own guidance or joining coalitions to discuss common recommendations.

The EHR where I practice most often continues to display legacy recommendations, and I haven’t heard of any plans to update them. I’m not sure if that’s because the work to do so wasn’t slotted into the IT build budget or if facility leadership is making a statement. Some days it’s refreshing to be outside the circle of decision-making, after having done it for so long.

How is your organization approaching the task of updating vaccine recommendations in your EHR? Leave a comment or email me.


Curbside Consult with Dr. Jayne 1/26/26

January 26, 2026 Dr. Jayne


Significant portions of the US are experiencing arctic temperatures and heavy snowfall this weekend. As the storm approached my area, I touched base with nursing staff at several hospitals to see how they were ensuring adequate staffing despite deteriorating road conditions.

They generally offered options for staff to sleep on campus, but approached the situation in drastically different ways. One hospital enticed nurses to sleep on campus to guarantee attendance, paid a retention bonus for the time between shifts, and provided meals. Another sent a text message that was less than welcoming, treating those who planned to stay at the hospital as a burden by telling them to bring their own bed linens and towels. I’m betting that employee satisfaction differs between those facilities.


Speaking of things that didn’t resonate well during the storm, the marketing folks at Starbucks should reconsider their tactics during winter storms. While the National Weather Service was issuing advisories and our city and state public safety officials were urging people to stay off the roads, Starbucks was blowing up my phone with discount drink offers.

It seems like it would be easy to suppress those promotions in area codes where people shouldn’t be on the roads, whether they’re customers or employees. People who have storm-belt area codes might live elsewhere in the US, but I would guess that they are in the minority. Better yet, come up with a promo code that people can enable that becomes active in three or four days, when they start to tunnel out and are looking for a treat. My city is still focusing on clearing interstates and critical roads, so I will be staying put for a while.

We became skilled at pivoting to virtual meetings during the COVID pandemic, so I was surprised to see some meetings on my schedule canceled outright even though they could have been held as web meetings or even as old-school conference calls. I could understand this for small organizations that might have let their virtual meeting subscriptions lapse, but these cancellations involved larger organizations that routinely have at least one or two people on video due to travel constraints.

Childcare issues could be at play due to school closures, but one of the only bright spots of the pandemic was getting to virtually meet the families and pets of my co-workers.

In last week’s Healthcare AI News, Mr. H mentioned the growing concerns that we are on the cusp of seeing AI-related malpractice lawsuits. Frankly, I’m surprised that we’re not already there, given how I see some of my colleagues using AI tools.

Quite a few knowledgeable clinicians, including clinical informaticists and AI researchers, understand the limits of AI. But large numbers of people are overly trusting of the content they see coming out of LLMs.

I’ve seen people cut and paste content containing obvious errors directly from a non-clinical AI tool into the EHR. I’ve also seen people operate wildly outside their scope of practice based on the ability to quickly access information that may or may not be accurate. Unfortunately, these are the situations where people don’t know what they don’t know, and LLMs can be extremely convincing even when they are wrong.

As an example, I recently saw a patient who was accompanied by a physician family member. The family member had a predetermined outcome that they wanted to achieve during the visit. They apparently thought that paying an $80 co-pay entitled them to see a physician who would suspend their professional knowledge and judgment and do the electronic equivalent of whipping out a prescription pad and ordering what they wanted.

I explained the clinical situation, the evidence-based recommendations, what I saw on the patient’s exam, what I had gathered from their history, and why I believed that the requested medication wasn’t appropriate in that scenario. The family member began arguing with me, showing me his phone with his previous searches on the topic as a way to prove his point. Given that his specialty training wasn’t even close to the body system in question, he wasn’t aware that the articles being cited were only tangentially related to the diagnosis.

Fortunately, I’ve spent the last couple of decades working with patients who bring their internet research to the visit. I’m pretty good at educating while arriving at a plan of care that is mutually acceptable. However, I don’t have a lot of experience arguing with a peer who is putting blind trust in the output of a generative AI tool, so it was new territory.

I used my emergency department-mandated de-escalation training, and we managed to make it through the visit once one of the other family members in the room made the physician family member leave. With situations like this happening on the daily, it’s no wonder that clinicians have lost the joy in medicine. Having to argue with AI-generated errors when a patient’s health is at stake is something that none of us signed up for.

Mr. H also mentioned ECRI’s annual list of technology hazards, and I was gratified to see one of my soapbox issues in the number two position. “Unpreparedness for a ‘Digital Darkness’ Event” is a fancy way to say that an organization isn’t ready for an unplanned downtime. Maybe making it sound more exciting will convince people that they need to do something to get ready.

We should all know that cyberattacks are a “when” situation rather than an “if” these days, and that network or vendor outages are entirely possible. For clinicians who have always been dependent on the tools and safeguards that are built into the EHR, having to work without those can be frightening. It’s one thing to not have calculators or references at your disposal, but not being able to see the overall picture of what’s going on in the intensive care unit at full capacity is something else entirely.

Those of us who practiced in the olden days remember the large paper ICU progress notes that were the size of a poster board, but could fold up to fit in a standard medical chart. With just a glance, we could quickly figure out what was going on with a patient and formulate the best questions to ask during shift change.

The availability of electronic dashboards and monitoring suites has rewired those parts of my brain, but I bet that mental model is still in there somewhere and I could access it in a pinch. We need to remember that soon there will be more clinicians who have never seen that kind of paper documentation than those who have, and adjust our downtime preparations accordingly.

Are you prepared for a digital darkness event? Have you experienced any outages due to snowmageddon? Is your hospital treating staff who have to stay overnight in the facility like a blessing or a burden? Leave a comment or email me.


EPtalk by Dr. Jayne 1/22/26

January 22, 2026 Dr. Jayne


The American Board of Preventive Medicine is notifying candidates that they have successfully passed the Clinical Informatics board certification exam. The certifications are retroactive to January 1, 2026. Congratulations to all the new Diplomates, and welcome to yet another continuing certification process that will have you asking yourself why you decided to become double-boarded.

From Straight A Student: “Re: online registration form for a training course that I completed recently. Prompts were in an ‘are there any’ format that asked about mobility restrictions or food allergies. A dropdown choice list appeared to be pre-populated with ‘none.’ My answer was ‘none’ for all of them, so I tried to just submit the form, which popped me back to the top with no feedback. The course vendor responded to my help desk ticket to say that the dropdown requires choosing ‘none’ and people miss that all the time.”

These sorts of Process Improvement 101 issues drive end users batty. The time wasted by users and the help desk adds up.

I wonder if user acceptance testing was done, since this issue should have been caught. Sometimes teams give users detailed testing instructions outside of the application, such as “click here, then choose that,” which makes it impossible to determine how they will actually interact with the workflow. I also wonder if they are analyzing call volume to identify ongoing issues. And has the help desk team reported the issue to development and asked for an update?

It feels like it would be more efficient to change the default to “please select from the list” or “choose a response.” Or, to add a page instruction telling users what to do.
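
For what it’s worth, the browser’s built-in constraint validation makes the better behavior cheap to implement. Here’s a minimal TypeScript sketch, with a hypothetical field name and assumed markup, that uses an empty-valued placeholder option and surfaces an inline message instead of silently bouncing the user back to the top of the page.

```typescript
// Minimal sketch. Assumed markup:
// <form> <select id="allergies" required>
//   <option value="">Please select from the list</option>
//   <option value="none">None</option> ...
// </select> </form>
const select = document.querySelector<HTMLSelectElement>("#allergies")!;

function validateDropdown(field: HTMLSelectElement): boolean {
  if (field.value === "") {
    field.setCustomValidity("Choose an answer, even if it is 'None.'");
    field.reportValidity(); // shows the browser's inline error next to the field
    return false;
  }
  field.setCustomValidity(""); // clear any previous error
  return true;
}

document.querySelector("form")?.addEventListener("submit", (event) => {
  if (!validateDropdown(select)) {
    event.preventDefault(); // block submission, but with visible feedback
  }
});
```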

I have been in countless conversations about the safety of healthcare AI solutions. I’m always interested in how the risks and benefits are portrayed to patients and other non-clinical, non-tech individuals. Mr. H mentioned a preliminary report by the VA Office of Inspector General that found that the Veterans Health Administration had some gaps in AI chatbot oversight. The story was also picked up by military-focused Task & Purpose, which ran its own version.

Risks that were highlighted for the general audience included “producing misinformation, privacy violations, and bias, and that the systems had been put in place without review by the VA’s own patient safety experts.” I didn’t see mention of concerns that were noted by other publications, such as whether lags exist in providing current information for the LLMs to use.

An article commenter shared their physician assistant’s thoughts that “the AI is egregiously wrong 90% of the time, so he doesn’t bother with it.” Based on my own experiences with clinical-focused and consumer-focused AI solutions, that’s probably a significant exaggeration. I wonder if the user would benefit from additional education on prompt construction or effective use of AI tools.

The VA providers who I’ve talked to locally are happy with the AI solutions that are available to them. They are looking forward to continued expansion of their capabilities, such as helping craft more readable medical information for patients. If you’re a VA user, feel free to chime in. We can keep your comments anonymous.

I’m still in my New Year’s inbox cleanup extravaganza, and found an article about Hackensack Meridian Health’s canine-powered cancer detection program. The health system partnered with startup SpotitEarly for a clinical trial that examines the ability of trained dogs to detect cancer via patients’ breath samples. The goal is to validate the technique as a noninvasive cancer detection approach that might be more attractive to patients who are unwilling or unable to complete traditional screening recommendations.

The test is conducted by having patients breathe into a mask-like device for several minutes, followed by the dogs sniffing the devices. The dogs are trained to recognize odor signatures in the exhaled volatile organic compounds that can be associated with cancer. The dogs indicate detection by sitting next to a sample.

We know AI has to be involved somehow, and indeed it is. The company is using AI tools to document and analyze the behaviors of the dogs based on behavioral and physiological data.

SpotitEarly has been in the US market since May 2025, although it was founded in 2020. Previous studies of the technique found that the test was 94% accurate for detecting lung, colorectal, breast, and prostate cancers. If any readers are involved in the study, I’d love to hear about the “best boys” and “good girls” that are doing the sniffing and whether they prefer belly scratches or having their ears rubbed. My medical school had some public-facing research animals and they were the most amazing companions when they retired, resulting in a years-long waitlist for adoption opportunities.

Based on some of the other email traffic in my inbox, quite a few physicians made a New Year’s resolution to look for different employment. Several of them seem to think that informatics is something that you can just jump into because you are “techy” without any formal training or experience.

Some startups will hire clinicians in this situation, but I always encourage people to consider formal coursework to better understand the informatics landscape. I’m a big fan of the courses offered by the American Medical Informatics Association. The virtual courses are convenient, and the in-person ones are great for networking with colleagues working in the field.

A number of highly qualified clinical informatics physicians have recently been displaced from EHR vendors and health systems, so it seems that as long as mergers continue, the job market will remain challenging.

Are you looking to make a career change in 2026, and if so, how are you approaching it? Leave a comment or email me.


Curbside Consult with Dr. Jayne 1/19/26

January 19, 2026 Dr. Jayne


Based on the contents of my inbox, it feels like everyone is talking about recent research from Michigan Medicine on emoji use in the electronic health record. The research letter was published in JAMA Network Open last week. It examined 218 million notes belonging to 1.6 million patients. Researchers found that emoji use was higher than reported in previous studies that looked at clinical texting tools.

The authors identified 372 emojis within 4,162 notes that were created during the last five years. Of those, 35% were patient portal messages to patients, followed by telephone messages at 28%, encounter summaries at 15%, progress notes at 14%, and patient instructions at 6%. The University of Michigan patient portal doesn’t support patients adding emojis to communications.

The smiling face with smiling eyes was used 1,772 times, with communications emojis such as the telephone receiver and calendar appearing 544 and 429 times, respectively.

The article contains an illustration of the 50 most commonly used emojis along with their official names. Just skimming through them, I’m not sure that I would come up with names or descriptions that matched their official titles.

Take the “briefcase,” for example. Could people be using it because it looks like an old-timey doctor’s bag? I can’t remember the last time I saw someone carrying a briefcase that looked like the emoji. Even looking at the most used one, the smiling face with smiling eyes, I would describe that one more as blushing than having smiling eyes. I also would not have correctly described “beaming face with smiling eyes.”

Some of them were new to me, including “busts in silhouette” and “bar of soap.” The latter got me thinking about how many people actually see or use bar soap these days, given the popularity of liquid soap and body wash products. Similarly, how long will it be before people no longer identify a “telephone receiver” as such?

I wondered about the context for some of the emojis that were used, such as the “P button,” the “small blue diamond,” and the “round pushpin,” and how they might be used in medical communications. The most concerning to me was actually the least used, the “police car light.”

Researchers note the risk of confusion in using emoji to communicate, especially in older patients. While most emoji use occurred among tweens and teens, patients in their 70s had the second highest usage. The authors call for organizations to develop guidelines to promote clear communication and professionalism in clinical communications. I once encountered someone who used a particular emoji extensively before discovering that it wasn’t a Hershey’s Kiss, so I agree with the concern.

The authors go on to note that measuring emoji use is just the beginning, and that future investigation should look at how emoji “might affect patient understanding, trust, and outcomes – and explore whether these playful digital symbols offer new opportunities or pose unintended challenges in electronic health record communication.”

One of my close physician friends sent me a link to a Facebook post about the article. It had some pretty funny comments about which commonly used emojis were missing from the study, along with those questioning whether the AI tools clinicians are using to write messages were responsible for the addition of emojis. A couple of commenters thought the research was frivolous, but those sentiments were countered by others who were clearly concerned with the potential impact on patients.

Another colleague with ties to Michigan Medicine said that emoji use in the medical record was prohibited, although he wasn’t able to find the specific policy. He said that he remembered a conversation with risk management where it was discussed, however, and that there were significant concerns about the meaning of symbols within the context of the legal medical record. Although the policy could have been changed, I’m wondering whether some clinicians still haven’t fully internalized that the patient portal is part of the legal medical record.

He said he’s not opposed to their use, especially with pediatric or teen patients with whom clinicians are trying to build rapport. Still, he advises residents that if deleting the emoji changes the meaning of the message, either the emoji shouldn’t be used, or it should be supplemented by actual words.

I was curious about the previous research that looked at clinical text messages. In 2023, clinicians from Indiana University School of Medicine looked at the content of messages that were sent by hospitalists who used a secure messaging platform during 2020 and 2021. Messages with emojis were identified, as well as those with more old-school emoticons.

The authors found that the majority of the emojis and emoticons “functioned emotively, that is, conveyed the internal state of the sender,” while others “served to open, maintain, or close communication.” The authors also noted that “no evidence was identified that they caused confusion or were seen as inappropriate.” They concluded that “these results suggest that concerns about the professionalism of emoji and emoticon use may be unwarranted.”

I believe that differences exist in how clinicians communicate with each other compared to how we communicate with patients. In the former, we are more likely to use medical abbreviations or jargon. With the latter, we should be using terms that are more clearly understood by patients. In my experience with peer review, communications with patients are typically held to a higher standard.

It will be interesting to see what kinds of guidelines or policies organizations come up with as far as regulating the use of emojis in patient communications and charting. I reached out to medical staff leadership at the facilities where I’m affiliated, and none of them recalled this topic coming up previously.

I found citations for a half dozen other articles that looked at the content of clinical text messages among hospitalists and other members of the clinical team, as well as norms for emoji use. I didn’t have time to go down that particular rabbit hole this weekend, but I would be interested to hear from readers who have strong opinions on emoji use or who have been involved in this type of research.

Do you use emojis in patient-facing communications? If so, how do you use them? If not, what do you think about the practice? Leave a comment or email me.


EPtalk by Dr. Jayne 1/15/26

January 15, 2026 Dr. Jayne

Plenty of people have been asking me for my thoughts about last week’s announcement of OpenAI for Healthcare.

Models that are tuned to physician needs and that have been through robust clinical testing certainly offer advantages. The incorporation of the organization’s internal documents via SharePoint and other platforms is also attractive.

I recently chatted with a friend who is both a physician and an attorney about the impacts of such integrated solutions on the medicolegal landscape.

In the current state, with many physicians playing the “bring your own AI” game and using various solutions on their phones, no connection exists between those queries and the legal medical record. However, an enterprise platform that ties it all together and specifically encourages the use of patient data and PHI adds an additional layer of complexity to medicolegal investigations.

It won’t just be about the EHR and its audit log. It will involve all the potentially related queries that may have been entered and acted upon by the care team. We’re starting to see some legal activity around physicians who based their decisions on inappropriate AI-generated information. This is an area to watch.

I also wonder about the potential for hospital policies to negatively influence clinicians’ access to information. For example, if you work in a hospital that restricts certain procedures or medications for religious reasons, how will those limitations shape the responses when those prohibited treatments might be the right answer for a patient?

This could evolve to include a bedside component that lets patients ask questions about their care plan while hospitalized. However, they might learn that their care is limited by their choice of facility.

My conference BFF Craig Joseph, MD recently wrote that healthcare is betting on the wrong AI instead of looking at solutions that actually improve clinical outcomes. He cites a study from the University of Southern California that found that physical robots outperformed chatbots in reducing psychiatric distress. He goes on to talk about how the brain perceives interactions when there is a physical presence compared to a virtual one and about the benefits of emotional experience in delivering care.

It made me think of my own experiences with physical therapy. It’s an advantage having your friendly (or not so friendly) physical therapist right there urging you to push yourself compared to a therapy bot at home that is less perceptive when you’re slacking off.

The robots used in the study looked fairly low-tech and had crochet covers, reminding me a bit of the cats in Disney’s “Lady and the Tramp.” For a tech industry that focuses on flashy products, these wouldn’t even be on the radar. I agree with Dr. Joseph that sometimes low tech is best. Maybe we’ll have to make that the focus of our next conference booth crawl.

Speaking of low tech, I was talking with a couple of physician friends recently about the Oura ring as a potential adjunct to addressing sleep issues. One colleague swears by his, although the actions that he has taken based on the ring’s sleep data are the things that every family physician recommends for sleep issues: a consistent sleep routine and schedule, adjusting environmental conditions, appropriate timing of meals, and keeping a basic sleep diary to identify triggers.

My other colleague proposed a decidedly low-tech approach: sleeping with a stuffed animal. He pulled out his phone to share a Wirecutter blog from last year that addressed the tactic. It cites several scholars and their comments on the practice, including notes on how it might help adults shift from a state of cognitive arousal to the more relaxed mindset required for sleep.

The blog notes the lack of literature on adults sleeping with stuffed animals, but I bet if we threw some AI into the mix, people would be eager to study it. Maybe those crochet cats can work the night shift as well as having a day job.


From Night Nurse: “Re: my annual refresher training. Passing pre-tests exempted us from that section. This was one of the questions. What kind of world are we in where this is considered an appropriate question?”

I have unfortunately seen some bad behavior from healthcare providers during my career, so I agree that we should be screening for people who have thoughts like this. I don’t think a bold annual training question is the way to pick them up. Even in a written survey, I would probably recommend a more subtle approach to identify those who have such sentiments. I’ve done a fair amount of work writing test questions and I wonder what the hospital’s item writers were thinking with this one.

From Tech Traveler: “Re: swearing. I’m a medical device representative and read your blog to keep up with healthcare tech topics so I can commiserate with the physicians I call on. I’m in and out of operating rooms and physician lounges all day and notice that there’s a certain amount of swearing that goes on among physicians, but it seems to vary by specialty and age as well as by topic. I’ve joked about doing a research project to explain the phenomenon, but it looks like researchers beat me to the punch.”

The article notes that although swearing is “often dismissed as socially inappropriate,” it has been linked to increased physical performance through state disinhibition, a psychological state in which individuals are less likely to restrain their behavior. The authors propose that this leads to flow, confidence, and focus, with those who swore performing better on strength and endurance tasks than those who used neutral words.

They note that “these effects have potential implications for athletic performance, rehabilitation, and contexts requiring courage or assertiveness. As such, swearing may represent a low-cost, widely accessible psychological intervention to help individuals ‘not hold back’ when peak performance is needed.”

Another one of the practices where I receive care has finally given in to the private equity company that has been pursuing it for the past couple of years. The physician mentioned this at a recent visit and shared the behind-the-scenes story. She has been struggling since she opened a second location, but has been keeping her head above water through the availability of same-day dermatology appointments, which turned local primary care doctors into a loyal referral base.

We’ve all been impressed by her ability to fit people in. Who doesn’t love being able to have a patient’s suspicious lesion removed in a timely fashion? Before she opened, patients often waited months for appointments.

Although she offers some cosmetic dermatology services, the practice is heavily skewed towards medical dermatology. She shared that automatic payer downcoding has been financially devastating. Her attempts to promote the more lucrative cosmetic treatments, which are typically cash pay, couldn’t compete with local med spas that run coupon specials. She decided to give in with five years to retirement. We’ll see how well that same-day availability holds up with private equity operations leaders at the helm.

If your care providers have been acquired by private equity, what changes have you noticed? Leave a comment or email me.


Curbside Consult with Dr. Jayne 1/12/26

January 12, 2026 Dr. Jayne

The New York Times ran a piece this week about “The Tech That Will Invade Our Lives in 2026.” The author aims to sort out which innovations will be impactful and which are fads that can be ignored.

Item number one on the list is, “We’ll finally be talking to our computers.” It’s more focused on having AI chatbots represent themselves with humanlike voices than on having them be able to better interpret conversational prompts, unfortunately. If we can get to the place where AI assistants act more like the computer in “Star Trek” and less like a recalcitrant middle schooler, I’ll be pleased.

Another item on the list refers to the search for “a successor to the smartphone” and offers smart glasses as an option. I don’t necessarily need a successor to the smartphone, but what I’d like to see is the ability to broadly operate smartphone apps on my laptop.

As an example, many of the hotels I frequent have begun providing menus of services via a QR code in the room. That’s great, but I would rather not read those documents on my phone when I have a perfectly good laptop right there on the desk. My workaround is to scan the code and send the link to myself so I can open it on the laptop, but that’s a nuisance.

I don’t know why the hotel can’t display that information from a link on its website. That would be ideal not only to enable guests to use their devices of choice, but also to allow travelers to get the information they need before they reach the hotel room.

I have my own personal list of tech I wish would invade the workplace.

  • Let’s start with the ability to ask Microsoft Windows to find a setting for you that used to be easy to find prior to Windows 11 and now is in some obscure place with an obscure name.
  • I would also like to be able to ask an AI assistant to do things like, “Find me that email that was sent by a member of the training team within the last three weeks that was talking about some weirdness with one of the clinical alert popups” when I accidentally file something in the wrong folder and can’t remember who sent it.
  • Maybe we can get the ability to set up an automatic reply to emails where people ask you about meeting at a specific time and neglect to mention which time zone is in play.
  • Just as a nice-to-have, I’d like a rule to highlight meetings in a particular color based on whether there are external attendees on the invite list, rather than having to do it manually as meetings come in or as a retrospective exercise (see the sketch after this list for the underlying check).
  • Last but not least, at the top of my wish list are upgrades that don’t break user workflows. I know that’s a lot to ask for, but a girl can dream.
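
On that external-attendee wish: I haven’t seen a calendar client expose this as a simple rule, but the underlying check is trivial. A hedged TypeScript sketch, assuming a hypothetical internal domain and that an add-in can read the attendee email addresses:

```typescript
// Hypothetical sketch. A meeting counts as "external" if any attendee's
// email domain differs from the organization's (assumed) domain.
const INTERNAL_DOMAIN = "myhealthsystem.org"; // assumption for illustration

function isExternalMeeting(attendeeEmails: string[]): boolean {
  return attendeeEmails.some(
    (email) => !email.trim().toLowerCase().endsWith(`@${INTERNAL_DOMAIN}`)
  );
}

// A calendar add-in could then apply the highlight color to matching meetings.
console.log(isExternalMeeting(["me@myhealthsystem.org", "rep@vendor.com"])); // true
console.log(isExternalMeeting(["me@myhealthsystem.org"])); // false
```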

What are others looking for in an AI tool? I did some casual investigation and found strong sentiment for pushing AI to handle mundane or data-heavy tasks rather than creative pursuits. “I want AI to balance my checkbook and categorize all my expenses, finding the problem when things don’t match up. That will give me more time for my hobby of photography. I don’t want AI making pictures for me.”

One person I spoke with wanted to be able to adjust the AI behind social media algorithms. She wants to stop seeing things that she doesn’t want to see and see more of those she is missing. That led to a conversation about why algorithms work the way they do.

I was surprised by this person’s lack of understanding of how social media platforms make money. It made me wonder how many other people out there have the same knowledge gaps. 

One person I spoke to was excited about self-driving cars, especially for individuals as compared to the taxi-style use case. “I was in Europe earlier this year and made good use of their robust rail infrastructure. Now that I’m back in the US, I realize how pathetic the long-distance options are if you’re not on the east coast. We have several major cities in my state that are all about 90 miles apart, but there is no easy way to get to them other than driving your own car.”

One of my snarkier colleagues commented, “If it’s so easy to use AI to write code, why can’t Microsoft figure out how to get feature parity between new and classic Outlook, or between either of the desktop versions and the web version?”

Another noted that he wasn’t against AI innovation, but felt that advancements were coming so quickly that there wasn’t enough time to process how they might be useful in the workplace or at home. He said he was reluctant to get excited about anything because once you do, it’s already been surpassed and you have to adjust to something new. That’s a valid point.

I was surprised at the response from one of my junior colleagues who said he felt that he was late to the game for actually caring about or using AI, and that, “It’s getting added into everything but not necessarily for good reason.” He uses it to help summarize documents, write letters of recommendation, and build patient education content for his niche specialty. He hasn’t found many other good uses for it.

One of my IT colleagues said that he wishes it was better at manipulating data, along the lines of “Find the data in spreadsheet A that corresponds to spreadsheet B, and append spreadsheet A with the values for X, Y, and Z.” He also had me chuckling with his request for calendar management tools that will automatically reject meetings that are sent without agendas.
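
That spreadsheet request is essentially a keyed join, which is exactly the kind of mundane task an AI assistant should hand off to deterministic code. A rough TypeScript sketch, with hypothetical column names, treating each sheet as an array of rows:

```typescript
// Hypothetical row shapes; assumes both sheets share an "id" column and that
// the wished-for X, Y, and Z values live in sheet B.
type RowA = Record<string, string | number> & { id: string };
type RowB = { id: string; x: number; y: number; z: number };

// Index sheet B by id, then append X, Y, and Z to each matching row of A.
function appendColumns(sheetA: RowA[], sheetB: RowB[]): RowA[] {
  const byId = new Map(sheetB.map((row) => [row.id, row] as const));
  return sheetA.map((row) => {
    const match = byId.get(row.id);
    return match ? { ...row, x: match.x, y: match.y, z: match.z } : row;
  });
}
```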

One of my foodie friends had an item on her wish list. “I’d like AI to keep track of everything that’s in my pantry, refrigerator, and freezer and cross index it with my recipe files and a list of what I’ve cooked recently so I can ask questions like ‘I’m in the mood for pasta, what can I make with ingredients that are on hand that isn’t similar to anything I’ve made in the last 30 days?’” In addition to helping people reduce waste from outdated ingredients, it might contribute to harmony in households where staring at each other and asking what to have for dinner is the norm.

I’m sure we have all heard that adage that today’s AI is the worst it’s ever going to be. Although blips exist, it will continue to evolve.

What do you wish AI would do for your workplace or in your personal life? Leave a comment or email me.


EPtalk by Dr. Jayne 1/8/26

January 8, 2026 Dr. Jayne

Mr. H’s mention of a recent article caught my attention. It says that 40 million people are using ChatGPT for health-related questions every day.

I agree with the article’s statement that people are “turning to AI tools to navigate the notoriously complex and opaque US healthcare system.” They mention patients using it to decipher billing statements, appeal insurance denials, and answer clinical questions “when access to doctors is limited.”

Another statistic that caught my attention: more than 5% of ChatGPT questions are about healthcare, and 1.6 million questions per week are asked about health insurance.

Clinicians certainly can’t fault patients for using AI tools when they are doing the same. I see physicians every day using AI to write insurance appeals and create patient-facing communications, not to mention all the AI-powered documentation. The risk of hallucinations remains a major concern. Some care delivery organizations have applied their “we can’t control it so we’ll just ignore it” philosophy. 

I would instead encourage organizations to make better use of their existing tools in providing accurate and vetted information to patients. Those institutions that offer robust patient education and engagement solutions should feature that information prominently on their websites and within their patient portals. Patients would be able to self-serve with reputable information.

Clinicians need to look at patient education less as a check-the-box exercise and more as a key part of patient care. In my experience, educated patients who have access to resources that they can consult down the road are less likely to send patient portal messages or call the office with basic questions. They feel more confident about their care and their ability to manage at home.

Another juicy tidbit from the report: 70% of health-related ChatGPT queries occur outside of normal medical office hours. Most medical offices are open for about eight hours per day, overlapping the work hours of patients who keep traditional schedules. It’s difficult for many patients and caregivers to get the information they need during the hours that they are available. Patient portals and secure messaging have helped somewhat, but gaps still exist.

In addition to making sure that patients know how to access trustworthy patient education materials, care delivery organizations should do a better job promoting other patient-facing resources, such as after-hours nurse triage lines or on-call services. Organizations that are actively managing risk do a better job with this, because they are incentivized to keep patients from going to the emergency department.

It would be interesting to compare after-hours use of generative AI solutions by patients who have access to after-hours services and those who don’t. Anyone up for some research?

From Midwest Gal: “Re: portal messages. You mentioned waiting for test results, received a patient portal notification that you had a message from the physician, and it turned out it was a general message about holiday hours. The same thing happened to me right before the Christmas holiday. Instead of getting my mammogram results, it was a reminder that the office would be closed.”

I reached out to some folks who are experts in the EHR that the reader’s site uses. They said that using the patient portal in this manner is not a best practice. For the love of all things, if you’re on a patient portal team, please work with the operations teams that are sending these messages to help them understand the anxiety that they are causing.

Speaking of anxiety, the clinical trial in which I am a participant published some of its results recently. However, the research team didn’t bother to notify participants that this would be happening. Those of us who are clinicians saw it in the journals first, which was bad enough. To make things worse, the team released new recommendations to patients several days later, some of which provided guidance that runs counter to the standard of care. That was accompanied by no explanation.

This occurred the week of December 18, when many people are frazzled by year-end work responsibilities or holiday preparations. I can’t imagine a worse time to release that kind of information.

I reached out to the study coordinator with my questions. I didn’t receive a reply within the published service level, so I reached out again via a different method. Guess what? They were experiencing a high volume of calls and were short staffed due to the holidays. The local physician who had referred me to the study wasn’t aware of either the published article or the communication to patients. You really cannot make this stuff up.

From Burned Out CMIO: “Re: help desk. My large health system outsourced its help desk functions at the beginning of December with the assurance that we would see no degradation in service levels. I had complaints from my ED physicians, who said that their tickets had been closed due to lack of customer response. Help desk staff were emailing the physicians about their tickets, then closing them as unresponsive if they didn’t hear back within a few hours. We’ve been having some serious conversations with the vendor about how that’s not how it’s supposed to work, especially for shift-based physicians who might not be able to respond quickly and then might not be working the next day. Ambulatory physicians ran into issues during Christmas week when offices were closed some days, then came back on Monday to find their tickets closed due to ‘no response from customer.’ Everything blew up over the New Year’s holiday, when tickets were closed in bulk on the 31st to meet end-of-year service level metrics. I feel awful because people who I had worked with for years were laid off in favor of the allegedly cheaper outsource firm.”

In situations like this, you can’t put a price on the knowledge of former help desk staffers who understood user and office work schedules around the holidays. I wonder if this outsource firm has any healthcare experience. This falls into the category of “you get what you pay for.”

I hope that a robust review of service level expectations happens and that ticket closure goals are moved out a bit to accommodate the behaviors of real users in the healthcare setting. I can just imagine people trying to slam tickets shut to meet the metrics, not realizing that users have valid reasons for not responding quickly.
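
As a sketch of what “moved out a bit” might look like in the ticketing rules, here’s a hedged TypeScript example (not any vendor’s actual logic) that auto-closes only after a number of business days without a response, skipping weekends and a hypothetical holiday list. A real implementation would also need to account for shift schedules.

```typescript
// Hedged sketch, not a real ticketing system's API. Close a ticket only after
// N business days of silence, skipping weekends and listed holidays.
const HOLIDAYS = new Set(["2025-12-25", "2026-01-01"]); // assumed holiday list
const AUTO_CLOSE_AFTER_BUSINESS_DAYS = 5; // assumed policy

function businessDaysBetween(start: Date, end: Date): number {
  let days = 0;
  const cursor = new Date(start);
  while (cursor < end) {
    cursor.setDate(cursor.getDate() + 1);
    const weekend = cursor.getDay() === 0 || cursor.getDay() === 6;
    const holiday = HOLIDAYS.has(cursor.toISOString().slice(0, 10));
    if (!weekend && !holiday) days += 1;
  }
  return days;
}

function shouldAutoClose(lastUserResponse: Date, now: Date): boolean {
  return businessDaysBetween(lastUserResponse, now) >= AUTO_CLOSE_AFTER_BUSINESS_DAYS;
}
```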

What’s the most foolish outsource maneuver your organization has made? Leave a comment or email me.


Curbside Consult with Dr. Jayne 1/5/26

January 5, 2026 Dr. Jayne


People embrace many traditions to ring in the New Year. My extended family enjoys Hoppin’ John, but my personal ritual is to skip straight to dessert.

I started at midnight by toasting 2026 with an assortment of delightful tarts. I then kept my energy up on New Year’s Day with Fluffy Frosted Orange Rolls, a delightful alternative to cinnamon rolls. Fortunately, the sugar boost helped because I was working clinically later that day.

Nearly every patient I treated had influenza. If the “flu-pocalypse” has not made it to your area yet, chances are it is on the way. If you are at high risk for influenza complications or simply want to avoid forced downtime, I recommend masking up in crowded places.

I had the opportunity over the weekend to chat with several physician executive colleagues. Each shared ideas about what to expect in the coming year.

  • Hospitals will focus on cost control, especially those that have high numbers of Medicaid and uninsured patients. For organizations that have not outsourced functions such as food service or human resources, doing so may look more attractive. One local hospital has dramatically cut non-patient food service, making it difficult for night-shift workers to get a hot meal. Overnight options are limited to self-service, with only a couple of microwaves available in the cafeteria. Since the hospital is already outsourcing, may I suggest a third-party food truck? Staff would love it, although the food service vendor might not.
  • Hospitals will continue to scrutinize pricing for everything from software to patient care supplies to landscape maintenance. Organizations that are not already doing this need to start. One health system is trying to trim several million dollars from its technology budget and is taking steps it would normally avoid, such as asking vendors for discounts mid-contract. Its EHR teams have not attended conferences or user group meetings for the past three years due to budget constraints, and they do not expect that to change. As an interesting side note, leadership teams are also skipping these events, so at least they are showing solidarity.
  • Primary care physicians are extremely worried about patients who have let their insurance coverage lapse due to rising costs. A major concern is that those patients, along with those who still have insurance but now face high deductibles, will avoid seeking care. That avoidance could lead to poorer outcomes and higher costs overall. The old adage about an ounce of prevention being worth a pound of cure does not resonate with people who cannot afford preventive services. A gastroenterologist in the group noted that a cash-pay colonoscopy costs $2,200 at her surgery center, which limits demand. Some patients instead choose cheaper screening tests that may not be appropriate for their individual risk profiles.
  • Many suspect that mergers and acquisitions will increase as organizations try to scale for contracting leverage with vendors and payers. Smaller community hospitals will face greater challenges, particularly if they lack natural partners. The group universally agreed that more practices will sell to private equity firms.
  • Medicare Advantage plans will continue their efforts to grow market share. One group I know is expanding into new markets that are not traditional retiree destinations, such as Wisconsin and Missouri. Physicians are intrigued by promises of employment and robust care team models, but they should perform due diligence. Speaking with former colleagues who had poor experiences could be particularly informative.
  • Organizations will keep adopting AI solutions, especially for ambient documentation and revenue cycle management. Leaders still express concern about AI use in research and treatment planning, which is driving tougher questions about hallucination risk and patient safety. One leader whose organization has gone all-in on AI-based revenue cycle tools said the results are no better than human performance, but the tools are far cheaper than even offshore labor.
  • Regarding the EHR market, the group agreed that Oracle Health / Cerner will continue to struggle and will lose customers to Epic. Sentiment was cautiously optimistic that smaller platforms, such as Meditech and Altera, will hold their ground. Informatics leaders wonder when consolidation will begin in the ambient documentation space, given that a few clear leaders have emerged.
  • One leader is especially excited about 2026. He oversees a relatively new primary care residency program that has been approved to expand its class size in the next match cycle. The program is based at a community hospital rather than a major academic center, and competition for the July start slots was intense. He expects applications to rise further as the program builds a reputation for training strong community-based generalists rather than subspecialists. Kudos to him and his team. I look forward to seeing how the next year unfolds.

During the discussion, I learned a new term: job hugging. It describes people who dislike their current roles but stay put because they fear that moving elsewhere could be worse. At least two participants admitted to this mindset. They worry that other environments may be just as toxic, if not more so, and that mid-career physician leadership roles are increasingly vulnerable to downsizing.

One person noted, “If I’m at risk for a layoff, I would rather stay where I have been for 15 years so I might receive a severance. If I start somewhere new and similar cuts occur, recent hires will not get anything.” Another said he would consider consulting but is too concerned about the cost of health insurance to make the leap.

How did you ring in the New Year, and what are your predictions for 2026? Leave a comment or email me.


Curbside Consult with Dr. Jayne 12/29/25

December 29, 2025 Dr. Jayne


As we approach the end of the year, many of us are reflecting on our accomplishments. Maybe we’re proud of the work that we’ve done, or perhaps we are forced to reflect because of end-of-year performance reviews. I enjoy thinking through how I spent my time and how it might have impacted patients.

I asked some of my CMIO colleagues what they are most proud of this year. Many of the projects were predictable, but at least one was surprising.

The first CMIO who weighed in was a little embarrassed about his accomplishment. Apparently his organization never got the memo about the benefits of having proximity cards or other non-password technology to help reduce the burden of multiple logins for its clinicians. Mandatory EHR upgrades or replacing a solution that was about to be sunset always took precedence. A couple of recent cybersecurity events had also consumed a good chunk of the budget and pushed other needs and wants aside. I certainly understand having to spend money on that.

Regardless, the clinicians are happier not having to log in repeatedly while going back and forth to the workstations in patient rooms, so that’s a win for the year.

The next physician leader was passionate about expanding virtual physician services in the emergency department. His organization’s busiest hospitals put a physician assistant in the triage bay. They worked closely with nursing staff to perform workups on patients who were still in the waiting room. The PA examined the patient and entered orders. 

When wait times were at their worst due to bed shortages elsewhere in the hospital, some patients were actually discharged from the waiting room without ever making it to a regular emergency department bed.

The twist this year was using virtual technology to expand that to hospitals that didn’t have the volumes to support the provider-in-triage concept. He felt that it was a win all around. Patients were happier to get their care started more quickly, emergency department staff members were happier because they had fewer patient complaints, and emergency providers were happier because they could opt in to the remote shifts for a break from the ED’s physical grind.

This is a great strategy. I am surprised to see so few facilities creating programs like this. It improves key metrics like the door-to-doctor time, addresses bed turnover issues, improves satisfaction, and provides options to keep physicians in the game when they might be ready to retire. The physician workforce crisis isn’t going away anytime soon, and anything that we can do to maintain those folks and their expertise is good.

I know of another system that has implemented this paradigm. Remote shifts are staffed by people who might otherwise be on medical leave due to orthopedic issues or pregnancy complications, or who need to travel to another part of the country to support family members.

It’s inexpensive since the major investment is a workstation and cameras. Even if you have to do a little rearranging to accommodate a gurney in the triage area, it’s cheaper than building more emergency beds. Another significant factor is probably that hospitals can make a lot of money billing the provider portion of the visit rather than having patients leave without being seen.

Multiple CMIOs said that ambient documentation was the best solution that they implemented all year. Most of them had pilot cohorts that tested the technology first, and at least a couple of them went through a bake-off process where they trialed solutions from different vendors before making their final selection.

One CMIO said, “This is one of two things that I’ve ever implemented that my physicians thanked me for.” Most of them are implementing the technology in ambulatory environments. Only one who I spoke with had a significant project for inpatient wards, and that is in a facility that has 100% private rooms for its patients.

I loved the idea that one correspondent shared about how her facility trained the ambient documentation tools. They created a curriculum called “Caring Out Loud” that addressed how physicians needed to change their history-taking and examination skills for the best outcomes with the technology. Some physicians felt like “talking to themselves” made them seem less professional, but only two of them chose to go back to traditional documentation.

Virtual nursing was also a big win for one CMIO who responded. In a plot twist, this CMIO is a nurse practitioner. Although I’ve seen people in similar roles elsewhere in the industry, she’s the first non-physician CMIO who I’ve gotten to know personally.

Her facility has been able to move approximately half of the steps involved in the nursing admissions process into a virtual workflow, which has been helpful as they continue to have staffing challenges. At their facility, all nurses work at least one virtual shift per month so that everyone is cross-trained. All of the virtual nursing work happens on site, which is different from other models where virtual nursing is used to retain staff who otherwise might be ready to leave bedside nursing.

One respondent’s biggest project was a deterioration prevention system that identifies patients who might be heading towards a crisis. I was surprised to learn that one of the major challenges in that effort was the change management piece. It was not designed to bypass human intervention, but people felt that its use might discourage them from raising an alarm if they suspected that patients were having issues.

The hospital held listening sessions so that staff understood what the system was designed to do and what it was not. They were made aware that they still needed to rely on their internal “Spidey sense” if they felt that a patient was at risk.
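For those who haven’t worked with these systems, most deterioration tools are descendants of simple rules-based early-warning scores such as MEWS. Here’s a minimal illustrative sketch in Python; the thresholds are rough approximations for demonstration only, not any vendor’s actual model, and certainly nothing to use clinically:

```python
# Illustrative only: a simplified early-warning score in the spirit of MEWS.
# Real deterioration systems are far more sophisticated (and often proprietary);
# these thresholds are approximations for demonstration, not clinical use.

def early_warning_score(heart_rate: int, resp_rate: int, systolic_bp: int,
                        temp_c: float, alert: bool) -> int:
    """Return a crude risk score; higher values suggest possible deterioration."""
    score = 0
    if heart_rate < 40 or heart_rate > 130:
        score += 3
    elif heart_rate > 110:
        score += 2
    if resp_rate < 9 or resp_rate > 29:
        score += 3
    elif resp_rate > 20:
        score += 2
    if systolic_bp < 90:
        score += 3
    elif systolic_bp < 100:
        score += 2
    if temp_c < 35.0 or temp_c > 38.5:
        score += 2
    if not alert:  # reduced level of consciousness
        score += 3
    return score

# A score above a configured threshold would page the rapid response team.
# Note that nothing here replaces a nurse who escalates on judgment alone.
if early_warning_score(118, 24, 95, 38.9, True) >= 5:
    print("Flag for rapid response review")
```

The point of those listening sessions maps directly to that last comment: the score is an additional trigger, not a replacement for the bedside escalation pathway.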

I was surprised that AI projects, other than ambient documentation, were far down the list for many of the people I spoke with. That could be an artifact of budgeting processes, where priorities for 2025 may have been set in the summer of 2024. Or, perhaps skepticism remains around AI and how it should fit into the bigger picture of patient care.

I also think that many facilities are playing catch-up around operational and quality debt and therefore have less time to spend on shiny new things. I’m glad to see those institutions focusing on the basics, because if you don’t have a good foundation, everything else is just window dressing.

What are you most proud of in your work during 2025? Do you have a focus you’re excited about for 2026? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 12/22/25

December 22, 2025 Dr. Jayne 1 Comment

I’ve been on LinkedIn almost since its creation. When I joined, it seemed like a great way to keep track of people I met in the course of my work.

Over the past couple of years, I feel like it has lost its usefulness. My main feed seems to be full of vendor ads, punctuated by individual posts that are annoyingly self-promoting and contain way too many emoji. I have to weed through all of that to find things that are genuine or feel like something more than an attention grab. When I look at the messages feature, it seems that most of the people reaching out are trying to sell me something.

Looking through the last couple of months of messages (which I rarely check, ignoring the notifications that come into my inbox as well), I saw a half dozen solicitations from financial advisors. Based on the content of those messages, they are clearly targeting physicians, particularly those who are on the downhill slope toward retirement.

A couple were looking for people to invest in various new ventures. At least for me, if you have something like “turning income into legacy” as your headline, your message is guaranteed to go straight to the trash. You’re also going to be ignored if your outreach looks like multilevel marketing.

I also tend to get quite a few messages from people trying to sell services to physician offices: revenue cycle management, bad debt management, collections, phone services, call centers, and the like. If they spent more than two seconds reading my profile, they would see that I haven’t been in traditional practice in a long time and don’t need any of their services. Their messages are also routed to the discard zone.

You’re also likely to wind up in that place if your “personalized” message is addressed to someone other than me, as happened this week when someone started his message with “Dear Correen, It was great to meet you last week.”

Then there are the entrepreneurs who are trying to connect with “like-minded individuals” and who are “interested to hear your opinions” or something similar. One said he was “having conversations with several of my colleagues and would love to hear how you’re navigating the current landscape.”

Based on reading this person’s profile, I can’t even begin to figure out what specific landscape he might be thinking about, let alone how I might contribute. In the past, messages like this have felt like attempts to score some free consulting.

I got an entertaining spam message this week for a free brow waxing session at a business that plans to open in 2026. The sender is trying to generate Instagram likes by contacting random people on LinkedIn and requesting that they follow him and/or his business on that platform. The message was from someone listed as a “verified recruiter” with a corporate license. For entertainment, I clicked on his profile and found that in addition to owning the waxing business, he also owns a burrito restaurant, a carpet cleaning company, and a hair salon. Needless to say, that was a quick delete as well.

I also get a kick out of seeing the reports of how many people viewed my profile. Quite a few recruiters made the list. Normally if a recruiter reaches out and asks to connect, I will accept the request just to see if they have interesting roles available. Not that I’m looking, but I have plenty of friends and colleagues who are, and I’m happy to help them out if I see something that’s a good fit.

Most of the time, there is some brief back and forth. I let them know that I’ll be sharing their opportunities with others, and then that’s the end of it. This week, however, I had a plot twist with a recruiter that I hadn’t seen before.

I accepted the recruiter’s connection request, so they could see my email information. They apparently used that, as well as the information in my profile, to enter me into their organization’s “Talent Community” as if I were job hunting. They also created referral links for several specific jobs and invited me to apply, as if we had discussed those jobs and I had voiced interest.

I know from my own experiences in large organizations that usually if you’re trying to score a bonus by referring someone, you have to at least attest to the fact that they were aware you’re referring them and agreed to it, so it felt a little odd. Maybe this particular organization plays fast and loose with their referral process.

The roles for which they created referral links were highly specific. It was clear that they had read my profile in detail and were targeting particular skills and certifications that I list.

I know that this particular organization is going through an EHR change. Several of the roles were related to that project, although one role was for a position with a title that was identical to my current role.

This is certainly the first time I’ve experienced this kind of recruiting flow. I’m wondering if it is unusual, or if this is a new way that organizations are trying to source people. Since it’s the end of the year, maybe it’s just someone trying to hit a quota, but who knows. If you’re in the human resources or recruiting realms, I’d be interested to hear what you think of this approach and if it’s common or more of an outlier.


I’m glad Mr. H mentioned celebrating Yalda, which marks the passing of the longest night of the year and the return of light as days gradually grow longer. For the last couple of years, I’ve noticed that the shortening days have played havoc with my sleep schedule, to the point where I’ve tried to spend as much time in more southern latitudes as my work allows, and it’s been helpful.

This year, I was invited to a celebration. I wasn’t able to stay until dawn, but I really enjoyed the opportunity. Although I do like a good New Year’s Eve party, Yalda Night was more cozy than blingy and felt like a better way to reset in preparation for the new year.

This year has been a tough one for me personally so I’m all about celebrating hope and renewal as we head towards 2026. Given the way the US health system works, however, I’m not looking forward to the resetting of my health insurance deductible, but there’s not much I can do about that.

What is your favorite way to mark the passing of the years? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 12/18/25

December 18, 2025 Dr. Jayne 2 Comments

I interact with medical students and residents from different institutions. I have learned that the education that they receive about AI and its role in healthcare is highly variable.

The American Medical Association is taking a run at addressing this problem. I’m glad to see someone calling it out, but unfortunately, AI tools are already deeply ingrained in user workflows. Like anything in life, it’s difficult to undo bad habits, especially when they are perceived as creating value. 

A resolution was introduced at the AMA Interim Meeting to create a policy that supports the development of “model AI learning objectives and curricular toolkits.” These would be aligned not only with AMA policies, but also the principles of the Association of American Medical Colleges. The AMA also plans to work with medical organizations to identify AI literacy elements, support CME offerings on the topic, and advocate for funding and resources to promote AI training initiatives.

From Jimmy the Greek: “Re: the holiday gift that keeps on giving. My employer just dropped its new in-office requirements for those who live within a certain radius of one of our locations – four days per week, eight hours per day in the same office. People leaders must be on site for at least one week per month, meaning that our boss will travel 12 hours to the mother ship. It’s going to be a huge waste of money. They are trying to sell it by promising contests and celebrations. It also appears that part of their ‘enhanced office experience’ includes setting the paper towel dispensers in the restrooms to give you about three inches of paper towel per wave with an eight-second timeout. How about letting me enhance my workday by allowing me to effectively wash and dry my hands during cold and flu season?”

I theorize that this organization is trying to lose people through attrition by tightening its control over work locations. I’ve seen companies use this strategy when they’re trying to unload late-career remote employees who don’t want to do the travel and who are likely to be higher on the pay scale than others.

The talk of expanded benefits to being in the office seems like a standard corporate attempt to justify imposing a policy that doesn’t make sense for everyone. I’ve worked in-person, hybrid, and fully remote. All of them have pros and cons depending on the company’s structure. For teams that work closely together, physical proximity can be an advantage. However, making someone go to an office four days a week when none of their team members work there is just silly, as is policing the restroom supplies.

A colleague clued me in to a New York Times article about a writer who tried to spend 48 hours without using any AI technologies. He was surprised at the breadth of AI’s penetration into daily activities, including weather forecasting, environmental monitoring, and supply chain management. It must be noted that the definition of AI used in the experiment included both generative technologies and machine learning.

In addition to forgoing social media, the author also avoided podcasts (due to the potential for AI editing) and most news outlets as well as email services. The article jumped the shark a bit, however, when it discussed not using electricity or municipal water sources because they use AI demand prediction or monitoring. The author instead planned to drink collected rainwater.

Other out-of-bounds services included municipal trash service, because it uses robotic sorting machines and machine learning that streamlines collection routes. Cars were out, as were many modes of public transportation.

I chuckled at his description of trying to get to a meeting using a bicycle and a paper map, then foraging a meal in Central Park to avoid the influence of AI on the food chain. He also reverted to a landline telephone for communications and typed the article on a manual typewriter before discovering that the ribbon was dry and switching to pencil and paper.

The author admits that early on in his experiment, he ranked tools and services from 1 to 10 to represent how much AI was present. He then went forward with using low-ranked tools. I think we can all agree that asking ChatGPT to create random graphics for entertainment is different from using a municipal trash service, but the space in between is grounds for conversation about the impact of AI on daily life.


I don’t follow as much global news as I would like, so I was delayed in learning that Australia has instituted a social media ban for children under age 16. The effort is hailed as a way of putting control in the hands of families rather than with social media tech companies, although as expected, young people are trying to figure out how to get around the ban.

Social media platforms can be fined $30 million if they don’t remove the accounts of children. They are also required to describe how they implemented the restriction. Australia’s eSafety Commissioner will report publicly on how well things are working before the end of the month.

Regulators know that savvy youth will use VPNs to make it appear that they are outside of Australia. However, one of them noted that the platforms have the power to identify those who skirt the rules by analyzing their posts.

I ran across another article that addresses the under-16 point of view. It featured comments from a teen who lives in the Outback, who worries about how he will stay connected with his friends who live far away.

I would hazard a guess that young people who are smart enough to set up international VPNs are also smart enough to solve the problem by embracing older technology with a twist. Radio was used in the Outback for years as a way for students to attend school, and amateur radio has become much fancier in the last few years with digital, text, and data modes. Where there’s a will, there’s a way. I’ll have to ask my favorite ham radio operators if they are seeing an uptick in activity in the land down under.

The law is being challenged by teens who claim that they have a right to freedom of political communication, so we’ll have to see what happens next.

What do you think of social media bans for young people? Will they result in greater health and safety for that segment of the population? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 12/15/25

December 15, 2025 Dr. Jayne 3 Comments

A friend reached out this weekend to ask my opinion about the risks of plugging medical information into ChatGPT and other publicly available AI tools. She wanted to know if I agree with a recent New York Times article about it.

My first concern is with the accuracy of the medical information that is being fed in. My own records have contained a variety of misinformation in the last several years, including documented findings from exams that didn’t occur, incorrect diagnoses, and at least one document that was scanned into the wrong chart.

Smaller errors also occurred, such as inaccuracies in dictation/transcription that weren’t caught in editing. Although they don’t materially change the content of the record, I wouldn’t want them taken out of context.

The article starts with a scenario where a patient receives abnormal test results. She is “too scared to wait to talk to her doctor,” so she pastes the lab report into ChatGPT. It tells her that she might have a pituitary tumor.

This is a prime example of the unintended consequences of giving patients access to their lab results before the ordering physician reviews them. It’s the law, and patients have a right to their information, but it can be harmful to patients in some circumstances. I’m glad to see care delivery organizations giving patients the choice of receiving their results before or after they are interpreted by the care team.

Another scenario involved a patient uploading a half-decade of medical records and asking questions about his current care plan. ChatGPT recommended that the patient ask his physician for a cardiac catheterization.

The procedure was performed and the patient did have a significant blockage. However, it’s difficult to know what the outcome might have been had the original care plan been followed. The write-up of the scenario didn’t include any discussion of how things went when the patient pushed for a procedure, or if other ramifications, such as insurance issues, resulted from the pursuit of a higher level of intervention.

Most of the patients I see don’t fully understand HIPAA. They think that any kind of medical information is somehow magically protected. They don’t know what a covered entity is or what role it plays in protecting information. They give away tons of personal health information daily through fitness trackers and other apps without knowing how that information is used or where it goes.

I personally wouldn’t want to give my entire record to a third party by uploading it to an AI tool. I don’t know how the tool handles de-identification and I’m not about to spend hours reading a detailed Terms and Conditions or End User Licensing Agreement. Based on the number of people who share their information in this way, it’s clear that many aren’t worried about the risks.

One of the professors who was interviewed for the article noted that patients shouldn’t assume that the AI tool personalizes its output based on their uploaded detailed health information. Patients might not be sophisticated enough to create a prompt that would force the model to use that information specifically, or might not be aware of instructions within the model to handle that kind of information in a certain way.

Assuming that you will receive a response that is tailored specifically to you is risky, especially since much of the medical literature looks at how disease processes occur across populations rather than in an individual.

The comments on the article are interesting. One cautioned users to consider using multiple models, asking the same questions, and having the models evaluate each other in order to make sure the output is valid. I can’t see the average patient spending the time to do that.
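For the technically curious, here’s a minimal sketch of what that cross-checking workflow might look like. The ask_model() helper is a hypothetical stand-in for whichever chat-completion API a user actually has access to:

```python
# A sketch of the commenter's cross-checking idea: pose the same question to
# two models, then ask each model to critique the other's answer.

def ask_model(model_name: str, prompt: str) -> str:
    # Hypothetical stand-in: wire this to a real chat-completion API.
    return f"[{model_name} response to: {prompt[:40]}...]"

def cross_check(question: str, model_a: str, model_b: str) -> dict:
    answer_a = ask_model(model_a, question)
    answer_b = ask_model(model_b, question)
    critique = ("Evaluate this answer to the question '{q}' for factual "
                "accuracy and flag anything you believe is wrong:\n\n{a}")
    return {
        "answers": {model_a: answer_a, model_b: answer_b},
        "critiques": {
            model_a: ask_model(model_a, critique.format(q=question, a=answer_b)),
            model_b: ask_model(model_b, critique.format(q=question, a=answer_a)),
        },
    }

# Example question; any medical query would follow the same flow.
result = cross_check("What does a TSH of 8.2 mIU/L suggest?", "model-a", "model-b")
print(result["critiques"]["model-a"])
```

Of course, when the critiques disagree, the patient is right back where they started: deciding which model to believe.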

Others talked about how they’ve used ChatGPT to drive their own care. One commenter mentioned that she also used it to research care for her pet and to make adjustments to the regimen prescribed by her veterinarian.

Concerns were also expressed about the possibility of bias and advertisements creeping in, especially with the discussion of particular medications that are still under patent.

Several readers shared stories about AI tools giving wildly inappropriate care recommendations that could have been harmful if patients hadn’t done additional research on the suggestions. One specifically mentioned the AI’s “mellow, authoritative reassurance of the answers, in a tone not different from talking to a trusted and smart doctor friend” despite being “flat wrong on several points.”

Another reader mentioned that tools like ChatGPT formulate their answers from materials that they find online. Unless you specifically ask for citations, it’s difficult to know whether the information comes from a medical journal, from an association dedicated to patients with a specific condition, or was simply made up.

Readers also called for certification of models that are being used for medical advice. One noted, “My doctor had to get a degree and be licensed. If he messes up bad enough, he can lose that license. There should be procedures for evaluating the quality of chatbot medical advice and for providing accountability for mistakes. Medical conversations with them aren’t like chatting with your neighbor about your problems.”

I hadn’t thought about it that way. It’s a useful idea that I may use when talking to patients who have been using the tools. The information they receive may or may not be better than what they would get over the fence from a neighbor, but it’s difficult to know.

One comment noted that since physicians are using these tools to do their jobs, it’s only fair that the patients have access as well. A follow-up comment noted that the writer “walked in on new residents Googling a patient’s symptoms.”

It makes one wonder how these tools will impact graduate medical education. Is the next generation of physicians building their internal knowledge and recall skills in the same way as previous generations? If they’re not, it’s going to be a rude shock the first time they have to live through a significant downtime or outage event.

It will also be interesting to see how board exam pass rates change for physicians who trained in the post-AI era compared to those of us who didn’t have access to those tools.

What do you think about patients feeding their medical information into LLMs? Providers, under what circumstances would you recommend it? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 12/11/25

December 11, 2025 Dr. Jayne 1 Comment

The Association of American Medical Colleges has released new data showing that medical school enrollment has hit an all-time high. Total enrollment for 2025 is 100,723 students. It’s also the largest first-year class in history at 23,440 students.

A few stats stood out to me:

  • Incoming students range in age from 18 to 60.
  • The students logged 16.8 million hours of community service before applying, which averages 717 hours per person.
  • The cohort includes 163 military veterans.

Medical training can be a long, winding road, so congratulations to the entering class. For those on a semester schedule, go crush those finals.

I also saw an article about CMS contracting with Clear for identity verification as part of its quest to “kill the clipboard.” Eliminating manual and paper-based processes is a worthy goal, although technology alone never solves the problem. In my experience with process improvement work, the real challenge lies in understanding operations, culture, and history. Those often determine why a workflow looks the way it does.

My mammography center is a perfect example. It finally retired its wonky and duplicative paper questionnaire this year. I briefly celebrated not being handed a clipboard, but then was asked all the same questions verbally, regardless of whether or not the information was already in the chart.

The technician was rushed and misread my chart more than once. That led to a longer discussion than I cared to have while standing in a gown with half my body exposed.

I noted on my Press Ganey survey that these questions should be asked before patients disrobe. Whether anyone reads those comments is another story. Progress in healthcare tends to arrive as two steps forward, one step back.


As we coast toward year’s end, I’m watching healthcare IT projects nearly grind to a halt as team members take time off. Some absences were planned well in advance, especially for parents whose children are out of school, so those projects are only mildly affected. Others are chaotic as people realize, often too late, that their PTO is “use it or lose it.” The result is patchy staffing and sudden bottlenecks across teams.

I have worked under nearly every time-off model imaginable, from “unlimited” time away, subject to manager approval, to miserly accrual programs that make it hard for people to take more than a day or two off early in the year. Some employers allow a modest PTO bank before triggering “use it or lose it” rules. Others shut off rollover entirely. As a manager, I’ve always tried to explain the details to my team, including subtleties for remote employees who live in different states. I encourage people to spread their time off throughout the year unless they have a specific reason to save it.

Not everyone tracks their PTO or understands the fine print, and that can lead to scrambling at the end depending on organizational policies. I’m working on a multi-entity project in which time-off rules vary widely within the same metropolitan area. The most flexible arrangement allows employees four weeks of paid time off per year. Employees are required to take a minimum of two weeks away from the office, but can choose to have the other two paid out as wages. For those who don’t feel they need time away from work, that might be a good option.

A nearby organization uses what I call a “use it or else” policy. Employees cannot bank their PTO and cannot simply forfeit it. They must take all remaining days before December 31, even if doing so leaves co-workers hanging. Leadership announced the change over the summer, but many employees did not grasp the consequences, which is creating December chaos. Managers have been tasked to hold individual conversations to make sure everyone burns through their time. The official explanation is to avoid claims that workers aren’t allowed to use their time off. I’m sure there’s more to the story, but I don’t think the policy is working out as planned.

This year, I’m also seeing more people taking time off in December for health-related visits because they have already met their insurance deductibles. Hip and knee replacements seem to dominate. When I asked an orthopedic friend about it, she said her practice is running at full throttle to accommodate demand. The bigger problem, she said, is physical therapy. Local PT programs cannot keep pace with procedure volume, so her staff spends an extraordinary amount of time coordinating care to ensure patients are seen immediately after surgery. I don’t think that the folks who make healthcare policy and decide on our country’s patchwork of misaligned incentives understand these patient realities.

What is the atmosphere like in your workplace this holiday season? Are you racing to complete projects or taking a leisurely stroll towards the new year? Is it a ghost town due to last-minute PTO use? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 12/8/25

December 8, 2025 Dr. Jayne 3 Comments

Sometimes it’s hard to keep up with everything that is going on in healthcare IT. Regardless of how many unread newsletters and blog notifications are in my inbox, I know I can count on HIStalk to deliver the curated content that helps me identify the topics that I need to dig into and those that I can let slide for a while.

This week, I appreciated Mr. H posting a link to an article about the strategy that the Department of Health and Human Services (HHS) plans to use as it expands the role of AI in healthcare. The document is 20 pages long and reads like an ode to the wonders of AI, with less attention to the documented risks and benefits.

As someone who has spent a good chunk of her career doing process improvement work, where evidence and outcomes are key, and who is firmly committed to evidence-based practice where patients are concerned, I’m all about the details. It isn’t enough to just say that you have a cool technology that’s going to be revolutionary. We had enough of that with Theranos and the pharma bros. Now we are in an era where people want to see results and understand fully how care might be impacted and how patients will be protected.

There are five key pillars in the document: creating a governance structure that manages risk, designing a suite of AI resources for use across the department, empowering employees to use AI tools, funding programs to set standards for the use of AI in research and development, and incorporating AI in public health and patient care.

It sounds like the “empowering employees to use AI tools” piece is well underway since HHS has made ChatGPT available to all its employees. Based on my own experiences, I initially hoped that they were not using it to look up health-related content, because I’ve seen some wild inaccuracies over the last year even with non-controversial queries, such as asking it to summarize a movie plot.

Unfortunately, plenty of media reports say that HHS leaders are planning to use it to “deliver personalized, context-aware health guidance to patients by securely accessing and interpreting their medical records in real time.” Unsurprisingly, that idea raises concerns about having third-party vendors accessing patient medical records and how that data might be protected.

HHS has already given the protected health information of Medicaid enrollees, including birth dates and Social Security numbers, to Immigration and Customs Enforcement, which is cringeworthy for those of us who have had to sit through decades of HIPAA training courses. Although it appears that HHS will prioritize risk mitigation, the clinical experts who I have spoken with have serious concerns about the organization’s ability to prioritize patient protection over political requests.

Those of us who are following the evolution of vaccine policy in the US have seen a disregard for the scientific method and the removal of world-renowned experts from the process. We have no reason to think that things will be different with AI. Given that we have decades of experience with vaccine efficacy and little experience with the impacts of AI, clinicians are understandably concerned.

A comment on the document noted that although safety measures are in place for individual patient information, no similar safeguards are listed for aggregated information that is being used by AI tools.

As I began to dig into it, I was surprised at how it differed from previous HHS publications over the last few decades. A glossy cover page was followed by a full-page photo of the secretary of Health and Human Services with a superimposed quote saying, “We are making HHS the template for the Utilization of AI.” When I’ve seen splashy graphics pages like that in the past, it’s been in the context of a major discovery or a noteworthy quote, but this just felt weird, for lack of a better word. 

The document continues with introductory letters from the deputy secretary and the HHS chief AI officer. In the first letter, HHS Deputy Secretary Jim O’Neill notes that “By guiding innovation toward patient-focused outcomes, this administration has the potential to deliver historic wins for the public – wins that lead to longer healthier lives.”

What does he think that all of us healthcare and health tech people have been doing for the last two decades? We’ve been patient-focused and outcomes-driven for a long time. Maybe he thinks it’s something new or unique to this leadership.

My favorite statement is in the second letter. HHS Chief Information Officer and Acting Chief AI Officer Clark Minor states, “This paradigm shift will unleash a new era of well-being for a healthier America.” I was reading this in a room with a dozen family physicians, so I asked them, “What one thing do you think will unleash a new era of well-being for a healthier America?” None of the answers included AI.

What they did include were concepts such as universal healthcare, eliminating healthcare inequity, increasing social services that directly impact health, mitigating the impact of food deserts, investing in preschool and early childhood education, strengthening nutrition education in the public schools, and increasing the primary care workforce through additional residency training spots and low-interest loans for those who pursue careers in primary care.

The ensuing discussion made me wonder how much the folks at HHS are actually talking to those who are on the front lines of public health and primary care. What do they need to help promote health and prevent disease? What are their pain points? Which solutions have they tried, and can they share an inventory of what worked and what failed?

I’m certainly not part of the policy-making apparatus in the US, but I know how I solve workflow problems in a hospital. It doesn’t involve putting all of my eggs in the AI basket. We use a rigorous methodology to analyze dysfunction and to propose solutions, and it actually works.

This idea of assuming that AI will solve all our problems and then taking action based on that hypothesis makes me feel like we’re all part of a giant unregulated experiment that wouldn’t pass the basic rigors of a middle school science fair, let alone the Institutional Review Board of a research institution.

I have to admit that I haven’t finished reading the document yet, largely because the level of rhetoric present was giving me a headache. I also have a time-consuming personal project that I’m trying to complete, so I decided to switch gears. I’m eager to hear from anyone who has read the whole thing.

What are your thoughts on how expanded AI at HHS will impact the greater US healthcare ecosystem? Do you think AI is going to be a major driver of change, or is it just another distraction from the difficult and often messy work that needs to be done to improve the health of a large and diverse population? Leave a comment or email me. 

Email Dr. Jayne.

EPtalk by Dr. Jayne 12/4/25

December 4, 2025 Dr. Jayne 8 Comments


This week’s encounter with Big Health System brought additional frustrations, along with a profound desire to sell them consulting services.

My appointment was scheduled with a nurse practitioner. It was supposed to be set up with a link to an imaging service. The plan was to see the provider first, then have the imaging, then go back to the provider.

When I stepped off the elevator, I had my choice of two check-in desks, one for the provider and one for the imaging department. Since my appointment was with the provider, I went there first. I was told that I needed to go to the imaging desk, where they checked me in and then sent me back to my original stop.

I had to check in again even though I had already done an online check-in. They sent me to a high-tech waiting room that has an electronic board that displays the names of providers who are in clinic that day.

I thought it was odd that my provider wasn’t on the board, but I’ve seen an electronic glitch or two in my career, so I didn’t give it much thought. I realized when I was taken back to the care area that they were going to take my vital signs in a centralized vital station that was right across from the checkout desk and also adjacent to the door. Everyone can see what is going on with everyone else.

Many of us Midwesterners dress in layers because of snow. I was glad that I was wearing a short-sleeved T-shirt under my sweater instead of a long-sleeved version. Otherwise, I guess I would have been wrestling half my body out of my shirt for all the world to see. At no point did the medical assistant ask if I had a suitable garment underneath before asking me to expose my arm, which would have been considerate from a patient experience standpoint.

Medication reconciliation was performed in the open in front of two other patients. That is a patient dissatisfier in my book.

I was taken back to an exam room. I was told to gown up and that “the physician assistant will be right in.” I asked if they had the right provider on the chart since I was scheduled to see a nurse practitioner who I had seen previously. They told me that she wasn’t there that day.

You can bet that as soon as the assistant stepped out, I checked the patient portal. Sure enough, the appointment was still listed as being with the nurse practitioner.

When the physician assistant arrived, she didn’t mention the scheduling change. She seemed surprised to hear that I was scheduled to see someone else. Knowing what I know about electronic health records, this shouldn’t have been a mystery to anyone, because schedules don’t just spontaneously morph. Regardless, with a day off work and a long commute to the center, we forged ahead.

Afterward, I was told to go to a check-out desk, where no one was present. I could see through a pass-through to the other side, where a staffer had her back to me. She didn’t acknowledge me when she finished with her patient. I walked through, only to find three people in a line that I couldn’t see from where I was told to wait.

I didn’t know if they were ahead of me or behind me in line, so I headed to the back. That side of the office was a mirror-image layout of where my intake occurred. Everyone could see and hear everyone else’s business as patients were brought in, had vitals taken and medication reconciliation performed, and were checked out.

One bright spot in the visit was that while I was waiting, one of the medical assistants walking by said, “Is that you, Dr. J?” She turned out to be a former member of my team from the urgent care trenches. I enjoyed seeing the photos of her children that she had on the back of her badge and catching up while I waited.

Ultimately I made it to the check-out desk. The staffer was hidden behind dual monitors with no ability to make eye contact with the patient. She proceeded to schedule follow-up appointments without confirming whether or not they worked for my schedule. I suppose they assume everyone just drops everything for an appointment at that esteemed institution.

She also let me know that they were in the process of implementing “ticket scheduling” via the EHR. She said that I would receive a notice to schedule follow-up imaging, but advised me to ignore it because it would be automatically scheduled as a linked visit with my next provider appointment.

My read on that is that the EHR team doesn’t quite have everything as buttoned up as it needs to be. Or, whoever designed the scheduling protocol doesn’t understand that some clinics have linked imaging needs that aren’t suitable for patient self-scheduling.

I have multiple EHR certifications, I am knowledgeable about ticket scheduling, and I understood the context of being told to ignore the notice. Otherwise, I likely would have been confused to see the scheduling request in my patient portal, which I checked from the elevator to confirm my follow-up dates.

Another bright spot occurred as I logged in. A popup asked me to set a communication preference about seeing my results before they are reviewed by the care team. I hadn’t seen that before, and it’s a great patient experience feature.

From there, I was off to the parking garage. One of the two exit gates was malfunctioning, causing dangerous reverse maneuvers and a total traffic jam that was preventing anyone from exiting their spaces. The clinic that I was in sees up to 100 patients a day, each floor has multiple clinics, and the building has multiple floors. I’m thinking that the parking situation might be a little undersized.

After driving home in a general state of frustration, I was glad to see a notification that my visit note was ready for review. Although I’m an avid reader and enjoy a good work of fiction, I don’t enjoy it when that fiction is masquerading as a medical record note. The list of errors included:

  • It listed an additional genetic mutation that I do not carry.
  • It instructed me to continue the medications that were supposed to have been inactivated during medication reconciliation.
  • Ages in the family history were altered from what I entered during online check-in and are now incorrect.
  • It documented history taking that wasn’t done.
  • A “comprehensive review of systems” was documented as negative, but they hadn’t asked me any review of systems questions.
  • It contained fictitious exam elements, including head, eye, ears, nose, throat, neck, extremity, and neurological findings.
  • It documented counseling that did not occur.
  • It listed shared decision-making that didn’t happen, which was based on the alleged counseling.
  • It documented answering my questions when I hadn’t asked any.

A note in the chart said that the contents of the visit were dictated using voice recognition software, but didn’t include any indication of AI usage. Actually, an ambient documentation solution might have yielded a better result since it probably wouldn’t hallucinate as many elements as the provider did.

It is possible that I have entered my curmudgeon era, but I simply don’t believe that this kind of provider behavior is appropriate. I also don’t think that patients deserve to be treated this way. When I hear people say that the US has the best healthcare system, I always think of situations like this and it makes my blood boil. What’s worse is that these things didn’t happen at a rural or underserved facility, but at a major academic medical center that has a top reputation.

While I was in the patient portal, I saw a message for a relative for whom I’m a proxy. It recommended that she have a mammogram despite being 97 years old and having had a mastectomy. I was happy to clear it out before she saw it, because she would have been incensed. Given the configurability of EHRs and individualization of care gaps, we shouldn’t be seeing things like that. Given that day’s experience, it was just one more layer of icing on the proverbial cake.

I know that healthcare providers are constantly being asked to do more with less. I live that situation on the regular. Plenty of corners can be cut when people are just trying to get through the day, but I draw the line at putting fraudulent documentation in a patient chart, or doing a bait-and-switch with providers who serve a vulnerable patient population.

I’ll be sending excerpts of this write-up to the powers that be, but I’m not at all confident that they will care.

Do you see these kinds of occurrences at your institution? If so, what are the solutions? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 12/1/25

December 1, 2025 Dr. Jayne 2 Comments


It’s been a bumpy couple of weeks. I have spent more time than I generally prefer in the patient, family, and caregiver role.

I hate to say that I saw mostly the bad and the ugly of the processes I have encountered, with barely any of the good. A solution is available for each of these issues, but when organizations fail to see problems with their processes, it’s unlikely that patients will see any change.

The first situation I ran into was with an elderly family member who was having an upcoming procedure. I’m essentially her healthcare proxy and receive her written communications. I also manage her phone calls because of her hearing impairment.

I received a voice mail a week prior to her procedure. It said that they had sent a financial responsibility letter and just wanted to make sure that I received it. The message went on to say that if I had indeed received it and didn’t have any questions, I didn’t need to call the office.

Although I hadn’t seen the letter yet, I looked at my Informed Delivery digest from the US Postal Service and saw that it would be in that day’s mail. I read the letter and had no questions, so I did as instructed and didn’t call back. I thought that was the end of it.

I had received written materials about the procedure six weeks before it was scheduled. They stated that I would receive a pre-registration call three days before the procedure. The call arrived as scheduled, but I was seeing patients, so I called back as soon as possible. I then learned that the department manages pre-registrations only between 1:00 p.m. and 4:00 p.m. and was now closed.

I called back the next day at 1:00 p.m. I was given the option to leave a voice mail, which wasn’t going to work because I was again seeing patients. I dutifully hit 0 to speak to an operator, who told me that the nurses are “still tied up with today’s patients because we’re running behind” and to “call back in a half hour or so.”

I gave it a full hour just to be safe. I was directed to voice mail again and was asked to leave a number where I could be reached from 1:00 to 3:00 p.m. I did so and didn’t hear back, so I called back at 3:45 since I knew that they close at 4:00. I was told, “If they don’t reach you, they will just do her pre-registration when she gets here. But that’s not ideal, so we really need a number where we can reach you and have you answer.”

I received a call at 4:15 p.m. I just about broke my ankle trying to answer it, only to find that it was the financial office calling to see if I had any questions about the financial letter since they hadn’t heard from me. I let them know that the original message said not to call unless I had questions. The representative acted like she had no idea why the original message contained that information.

By this point, my read on the procedure center was that they have zero respect for people who have work or life situations where they can’t just drop everything and take a phone call during a narrow window of time, and that they don’t have their act together in making sure that the messages they leave are accurate. It didn’t make me feel respected as a potential patient or a caregiver.

I wasn’t seeing patients the day before the procedure, so I called in at 1:30 p.m. and finally reached a nurse. She went down a list of questions asking for information that was already on the chart. None of the questions was a curveball or tricky, so all of them could have been managed through an electronic check-in via the patient portal or through a secure messaging platform.

The nurse then read me all the pre-procedure instructions that had been mailed. That explains why the registration process takes so long and why the nurses aren’t easily available when patients call in as instructed.

In addition, the nurse paused periodically during our conversation to say goodbye to people in the office who were leaving. That seems unprofessional.

On procedure day, we arrived to find that the guarantor name on the insurance that was correct in the pre-registration conversation was now wrong. The check-in person also failed to collect the patient co-pay, which meant having an elderly person with a walker get up and down a couple of times rather than just once. The check-in desk was tall and didn’t have the option for a patient to sit, which was also a negative in my book.

The nurse was trying to ask rooming questions while we were walking to the dressing room. That isn’t ideal for an elderly person who is hard of hearing and who is focused on using her walker. I had to ask the nurse to stop asking questions until we were in a situation where she could directly address the patient without distractions.

Fortunately, the procedure went without a hitch. I returned her to her home and another family member tagged in.

Meanwhile, the second situation found me waiting for my own important test results. Their arrival was dragging into the holiday weekend. Physicians don’t always make the best patients. We are as anxious as anyone when we’re waiting to learn what is going on with our health.

I had been waiting a couple of days when I received a text telling me that a message was available in the patient portal. I was driving at the time, so I psyched myself up as I returned home and woke up my laptop so I could learn my fate.

It was a blast message from the surgeon’s office to let me know their office hours for the Thanksgiving holiday. Also, to remind me to call 911 if I had an immediate medical emergency.

I initially questioned whether this is a limitation of the patient portal. A quick chat with one of my favorite experts confirmed that the practice isn’t using the tool as designed. They could have used other options to convey the information that wouldn’t potentially trigger the hundreds of patients who are awaiting pathology results.

I know the EHR leaders at the institution in question. I wonder if they are aware of how various departments are using the available tools and how deviation from published best practices can have a negative impact on their patients. This is the same practice that failed to notify patients that the office had moved, which caused quite a bit of hardship. This workflow adds insult to injury.

Does your organization consider patient preferences and impact when creating patient-facing workflows? Do you leverage patient and family advisors to help you review new features? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 11/24/25

November 24, 2025 Dr. Jayne 2 Comments

I wrote earlier this month about an article that examined whether physicians think their peers who use AI are less competent. I brought this up in a recent conversation with other clinical informaticists to see what they had to say.

The responses were interesting. Although the general answer was “it depends,” opinions differed depending on the type of AI.

Many of the individuals who were part of the conversation are knee-deep in AI solutions as part of their work. They have a different level of understanding of exactly what constitutes AI compared to others who aren’t as engaged with the technology.

For most non-generative AI solutions, the group had a level of comfort that was commensurate with the time that the solutions have been in use. No one questioned the utility of AI in situations where pattern recognition is key, such as in the review of cervical cytology specimens or in diagnostic imaging. No concerns were voiced about AI-powered search tools that help clinicians dig into large data sets and provide verbatim answers.

Peers also raised no concerns about AI being used for natural language processing tasks, as long as the systems are non-generative. These can be used for analyzing the output of interviews or feedback sessions and have been used for years. One colleague specifically called out spam filters, challenging people who are afraid of AI to go a couple of days without one to see how they like it.

Another colleague mentioned a “smart buoy” that is located on a lake near his home. It determines if it’s safe to swim by monitoring temperature, wind, water pH, and turbidity while analyzing the correlation of those elements to bacterial counts.
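Conceptually, that buoy is straightforward supervised learning: fit a classifier on historical sensor readings paired with lab-confirmed bacterial counts, then score today’s readings. A toy sketch with fabricated numbers, purely for illustration:

```python
# Toy illustration of how a "smart buoy" might work: fit a classifier that
# maps sensor readings to past swim-safety outcomes. The data is fabricated;
# a real system would train on lab-confirmed bacterial counts over time.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: water temp (C), wind speed (km/h), pH, turbidity (NTU)
readings = np.array([
    [22.0,  5.0, 7.2,  3.0],
    [26.5, 18.0, 6.8, 12.0],
    [24.0,  8.0, 7.4,  4.5],
    [27.0, 22.0, 6.6, 15.0],
])
unsafe = np.array([0, 1, 0, 1])  # 1 = bacterial counts exceeded the threshold

model = LogisticRegression().fit(readings, unsafe)
today = np.array([[25.5, 16.0, 6.9, 10.0]])
print("Probability unsafe to swim:", model.predict_proba(today)[0][1])
```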

As far as generative AI, people were generally positive about AI-assisted responses to patient portal messages, as long as the system requires a clinician to click the send button to indicate that they read the response and agree with it.
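That send-button requirement is a classic human-in-the-loop guard: the model drafts, but nothing reaches the patient until a named clinician signs off. A minimal sketch of the pattern, with hypothetical function names rather than any vendor’s actual workflow:

```python
# Human-in-the-loop sketch: an AI draft is never transmitted automatically.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftReply:
    patient_message: str
    ai_draft: str
    approved_by: Optional[str] = None  # set only when a clinician signs off

def send_to_patient(text: str) -> None:
    # Hypothetical portal call; prints for demonstration.
    print(f"Sent to patient: {text}")

def review_and_send(draft: DraftReply, clinician: str, final_text: str) -> None:
    """The clinician must take ownership (and may edit) before anything is sent."""
    if not clinician:
        raise ValueError("A named clinician must approve the reply")
    draft.approved_by = clinician
    send_to_patient(final_text)

draft = DraftReply("Are my biopsy results back?",
                   "Your results are still pending; we will call you this week.")
review_and_send(draft, clinician="Dr. Jayne", final_text=draft.ai_draft)
```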

The group was less confident about AI-assisted chart summarization tools because of the potential liability if data elements are missing or incorrect.

Some good discussion arose around the fact that it’s a trade-off since humans might miss or misinterpret something when reviewing bulky charts. Studies of this are not widely known in some clinician circles. Everyone agreed that we need better data that compares the performance of AI versus humans for specific tasks to better understand the risk-benefit equation.

The conversation drifted away from patient-facing generative AI to the tools that clinicians are using as they complete their Maintenance of Certification (MOC) activities. In response to the question of whether peers perceive physicians who use AI tools as less competent, one person noted, “If you’re not using AI to do your MOC, you’re crazy.” Maintenance of Certification questions often take the form of a block of questions that must be answered quarterly, or annually in some circumstances, and many physicians feel that it’s a make-work activity that doesn’t necessarily reflect the realities of their practice or expertise.

For example, in family medicine, the questions cover the whole scope of the specialty, even though most family physicians tailor their practices to include or exclude certain procedures or populations. The majority of us don’t provide obstetric care. Those who practice in student health clinics likely don’t see patients in the geriatric demographic. Some don’t see infants and young children. Some practice exclusively in emergency or urgent care settings.

Some who are in full-time clinical informatics had to give up clinical care due to lack of access to appropriate part-time opportunities. They are required to maintain their primary certification to retain board certification in clinical informatics. That creates a significant burden for those who aren’t still seeing patients.

For those who have stopped seeing patients, MOC is a “check the box” activity. Most boards allow users to answer the questions in an open-book format, so using AI tools is a natural evolution. They help physicians get to their answers faster, just like they would in the clinical world, although in this case they’re helping reduce an administrative burden.

No one in the conversation had seen any specific prohibition on using AI tools to answer the questions. The only limitations are that you can’t discuss the questions with another person and you must answer them within the provided time limit.

All agreed that a pathway is needed for those who boarded in clinical informatics to allow their primary board certification to lapse after some amount of time. However, they also agreed such a change is unlikely before their anticipated retirement.

When asked specifically about using AI to create notes, such as with an ambient documentation solution, no one admitted to thinking badly of clinicians who do so. There was a general consensus that ambient documentation is one of the few things that CMIOs have rolled out that generates thank-you notes rather than emails of complaint, and that the technology isn’t going away anytime soon. The concerns were more about the cost of the solution.

Some spirited discussion arose about whether these tools will have a negative impact on physicians in training. Some firmly asserted that learning to write a good note is essential for physicians and that the note-writing process serves as a reasoning exercise. One residency program director noted that several applicants have asked him if residents are allowed to use the technology, so it may become a differentiator as candidates assess potential programs.

Anecdotally, I don’t think patients think worse of physicians who use AI solutions. A friend recently reached out with his experience. “I just got back from my annual visit with my PCP. He’s using some new AI tool that transcribes the entire conversation during the visit, then cobbles the important parts together in the after-visit summary. It was done cranking that out in the time it took him to listen to my lungs and look in my ears and down my throat, and everything was correct. It even transcribed non-traditional words like ‘voluntold’ correctly.”

As a patient who has had inaccurate notes created by physicians who were in a hurry while charting, I would prefer AI if it meant not having imaginary exam elements added to my chart.

It’s always gratifying to meet with others who are doing work in my field and to learn how those from different institutions approach a problem differently or have different outcomes. I wish I could have those kinds of robust conversations more often, but I’ll have to settle for only having the opportunity a couple of times a year.

If you had a group of clinical informaticists captive for an hour, what topic would you want to see them discuss? Leave a comment or email me.

Email Dr. Jayne.
