
Curbside Consult with Dr. Jayne 8/18/25

August 18, 2025 Dr. Jayne

As a clinician, I often have difficult conversations with parents about how to reduce the amount of time that their children spend using screen-based devices every day. Many of the parents I encounter are unwilling to limit their children’s screen time because of a perception that children who don’t have devices will be “left behind” or potentially ostracized by their peers.

I see a fair number of folks who use devices to entertain their children rather than interacting with them, which I find sad. When I walk into an exam room and see a kid poking away at a tablet while their parent sits in a heads-down position with their own phone, it makes me wonder what happens when they are not at the physician office. Ultimately, kids become dependent on devices for interaction and this can be a problem when they reach school age, when teachers spend a good chunk of time policing phone-related behavior.

As of the start of this school year, more than half of US states have passed legislation or created policies regarding the use of cell phones in K-12 classrooms. These range from requirements that school districts create guidelines of their own to outright bans. Among the reasons for such bans, lawmakers cite the need to create a distraction-free learning environment, a desire to curtail social media use, and a hope that such strategies will have a positive influence on youth mental health.

My own local district had a well-researched plan that had been created after stakeholder listening sessions with students, parents, and teachers. It was pre-empted by a maneuver at the state level that is significantly stricter. When my district was creating its policy, it used its health advisory committee to comment on the potential risks and benefits of restricting cell phone use.

Physicians raised the issue of the use of cell phones for medical reasons, including students and faculty who use apps to manage health conditions like diabetes. It’s clear from looking at some of the state laws that these kinds of needs might not have been considered by legislators. Needless to say, people aren’t happy about it, and I’m sure there will be some settling in once school starts.

With that in mind, I ran across this article that covers the topic from the youth point of view. Although it mentions the fact that devices have addictive properties, it also digs into the ways in which childhood in the US is changing. It reviews a Harris Poll survey of 500 children ages 8-12, with the majority saying they had smartphones and half of the older members of the cohort saying that social media use was common in their peer groups.

One of my favorite quotes from the piece states that, “This digital technology has given kids access to virtual worlds, where they’re allowed to roam far more freely than in the real one.” As a proud member of Gen X who had the stereotypical “come home when the streetlights come on” childhood, this resonated with me. The article notes that many children haven’t so much as gone down a grocery store aisle alone and that a good number aren’t able to play unsupervised in their own yards.

The authors note that children expressed a desire to socialize in person with minimal supervision, but due to restrictions by their parents, they instead use their phones to socialize unsupervised. Of course, there are reasons that parents have become more restrictive with their children, including fear of injury or abduction, but one of the statistics mentioned in the article is that “a child would have to be outside unsupervised for, on average, 750,000 years before being snatched by a stranger.”

It goes on to say: “Without real-world freedom, children don’t get the chance to develop competence, confidence, and the ability to solve everyday problems. Indeed, independence and unsupervised play are associated with positive mental-health outcomes.”

The authors mention the creation of parenting networks where kids are encouraged to get together for unsupervised play and community organizations that are promoting screen-free time. The deeper I got into the article, the more I wondered what tech companies think about these efforts and whether they feel that such advocacy for unstructured device-free play might ever be a threat to their respective bottom lines.

I’ve been a volunteer in youth-serving organizations for over 20 years, and I would say that any threat wouldn’t be a serious one. To get kids to put down their phones, we would likely need to see parents doing it first. On second thought, though, maybe if there was a TikTok influencer that started telling parents it was cool to let their kids run around the neighborhood and dig holes in the yard as some of us did once upon a time, we might see a change.

I recently read the book “Klara and the Sun” by Kazuo Ishiguro. It’s a complex novel told from the point of view of Klara, who is an Artificial Friend purchased to serve as a companion to a child with a chronic illness. I won’t throw out any spoilers as to the nature of that illness, but it was an interesting read.

There are already enough ways that technology is impacting childhood, so I hope we don’t get to the point where life starts imitating the novel. On the other hand, there are some scenes in the book where the main human character is allowed to go outside to play with only the supervision of the Artificial Friend. It made me think a bit that if parents won’t let their kids explore the world alone, maybe there just might be a role for technology.

It will be interesting to see if there is any research published in the next couple of years with respect to these cell phone limitations and bans and whether they do have a positive impact on youth mental health. It’s estimated that mental health is impacting the US economy to the tune of $282 billion annually, so we can’t afford not to study how these interventions play out.

What do you think about the role of government in limiting the use of technology for individuals, whether they’re children or adults? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 8/14/25

August 14, 2025 Dr. Jayne


Perplexity made an unsolicited offer to buy the Chrome browser from Google for $34.5 billion. Several people I spoke with agreed with the Axios take that it’s a great marketing play, but one that Google is unlikely to accept.

I’ve seen friends and colleagues move away from Google in the months since it added its AI overview feature. I’ve been back and forth with it. I had three significant hallucinations in the same day recently, and all were related to simple fact-based searches that shouldn’t have been problematic. Perplexity claims to have financing in place for the deal, but we’ll likely never know who agreed to back it.


JAMA Network Open has become one of my go-to journals for relevant research that addresses hot topics in healthcare information technology at a level that is accessible to frontline clinicians, as opposed to a journal targeted at clinical informaticists. An article this week addressed a great question: “Can a patient portal message with either a physician-created video or an infographic with a physician photograph increase end-of-season influenza vaccination rates?” The study was done at UCLA Health with 22,000 patients from 21 practices. Neither approach raised overall vaccination rates, but both increased immunization rates for children, with the video message scoring slightly higher.

There’s a lot of vaccine hesitancy in the US, and the Health and Human Services secretary’s recent approval of influenza recommendations received little press coverage. Here’s to hoping that messages from trusted physicians can help drive the needle.

Another feature in the same issue looked at whether physicians made more edits to hospital course summary documents that were generated by large language models (LLMs) compared to those written by physicians. The study was small, looking at only 100 inpatient admissions to the general medical service. The authors found that a smaller percentage of LLM-generated summaries required edits than physician-generated ones. The summaries were evaluated against a quality standard, with the authors concluding that since the LLM-generated documents needed fewer edits, they were of higher quality than those created by physicians.

I found the study design particularly interesting on this one. The hospital course summaries were randomly assigned to one of 10 internal medicine residents. They had three minutes to review each pair of summaries and edit them for quality purposes. The output of those editing steps was then reviewed and scored for quality by an attending hospitalist physician.

The authors controlled for document length by using a “percentage edited” score and also looked at how much the meaning of the original summary was altered. The authors noted that while the LLM-generated summaries required less editing and may have been “comparably or more complete, concise, and cohesive” they also “contained more confabulations.” They noted that the artificial time constraints may have influenced the result. The study overall supports the idea that using LLMs to help complete this task could be of value.

OpenAI has been trumpeting the release of its GPT-5 model, saying it does a better job with medical questions than its predecessor, but users have been clamoring for an option to return to the previous model. The majority of complaints are around system speed and increased errors. Others took issue with the fact that the new model was rolled out without notice, leading CEO Sam Altman to admit that “suddenly deprecating old models that users depended on in their workflows was a mistake.”

Those of us who have been in the healthcare IT trenches for years understand the value of adequate change management and communication strategies, so I was surprised to learn that the company thought it would be no big deal to just hot-swap the models. If they’re looking for a change management sensei, I might know a girl. Another great quote from Altman: “the autoswitcher broke and was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber.” Something to ponder for all the folks relying on these technologies. Sounds like they may need a testing advisor as well.

One of my favorite colleagues from residency was in town the other day, doing college visits with one of her children. Her family is going through additional challenges in the college hunt as they evaluate the medical and support resources available to help students manage chronic health conditions in their first few months away from their families. My friend is a brilliant physician who has worked in environments from academic to military to rural health, so she has seen it all.

One of her concerns was the sheer number of communications she receives from her child’s care team: “Seriously, I think I got 15 reminders and a survey, I don’t want to have this kind of a relationship. I already replied, so why are we still having this conversation?” She’s worried that when her child is on their own, all of those reminders and messages will cause anxiety, which is certainly valid.

Props to health organizations who allow patients to customize reminders and communications. I personally just need one reminder three days out and that’s all. My dentist sends a reminder at 10 days, seven days, three days, one day, and then hourly until you arrive. They claim they can’t adjust it. I’m not sure I’m buying that, but I’m not well versed in dental platforms.


Dr. Nick van Terheyden reached out to let me know that the Lown Institute is accepting nominations for its annual Shkreli Awards, named after notorious “pharma bro” Martin Shkreli. The awards are given “to perpetrators of the ten most egregious examples of profiteering and dysfunction in healthcare.” Previous winners have done such things as: selling the body parts of the deceased without notifying the next of kin, defrauding Medicare by submitting claims on behalf of patients who never received services, and bankrupting community hospitals while living a lavish lifestyle.

What’s the most egregious thing you’ve seen lately in healthcare, regardless of whether it’s worthy of a Shkreli award? Leave a comment or email me.


Curbside Consult with Dr. Jayne 8/11/25

August 11, 2025 Dr. Jayne


I was intrigued by Mr. H’s mention last week of the Mass General Brigham FaceAge AI tool that can estimate age from facial photos. Researchers found that patients with cancer appeared older than their stated age. The older they looked, the lower their odds of survival.

Although physicians have historically used visual assessments to predict potential outcomes, the tool uses face feature extraction to estimate a user’s biological age based on their photo. An article describing the tool was recently published in The Lancet Digital Health if you’re interested in all the details.

This item, as many things that Mr. H mentions, got me thinking. I found a couple of sites that host biological age calculators and completed the relevant surveys to get a couple of results. Some of them were more specific, asking for various lab values. Fortunately, I had results for all of the requested lab values and even some of the exercise performance measures that were included on one of the questionnaires. I also found a tool that is very similar to FaceAge, although not the exact one used in the study, and snapped my selfie.

The survey-based calculators estimated my biological age as anywhere from 4.6 to nine years below my actual age. The facial photo tool thought that I was more than 10 years younger. I suppose my liberal use of sunscreen and hats is paying off, since my facial wrinkles were scored as 2 out of a possible 100 points. I also did well on the “undereye” measure, although I admit that my photo was taken when I was well rested. I’m sure it would not have scored as well had it been taken after a shift in the emergency department.

I don’t look at a lot of high-resolution pictures of my face, and when I received my score report with a full-screen of my face right in front of me, I was somewhat surprised that you can still see some artifacts from years of wearing an N95 mask while seeing patients. I’m guessing that when I look in the mirror my brain somewhat processes that out, so it was a little startling.

I’d be interested to see how I would score on a medical-grade tool such as the one mentioned in the article. Although it was a fun exercise to complete the different surveys and see where I stand, none of the recommendations provided alongside the results of any of the tools were different from what I usually hear during my primary care preventive visits: keep moving, eat as healthy as possible, and watch out for the rogue genes you’re carrying around.

I would be interested to hear others’ experiences with similar tools and whether they have motivated you to do anything different from a lifestyle perspective.


Mr. H also recently mentioned efforts by NASA and Google to develop a proof-of-concept AI-powered “Crew Medical Officer Digital Assistant” (CMO-DA) to support astronauts on long space missions. As a Star Trek devotee, I couldn’t help but think of the Emergency Medical Hologram from “Star Trek: Voyager.”

The project is using Google Cloud’s Vertex AI environment and has been used to run three scenarios: an ankle injury, flank pain, and ear pain. The TechCrunch article noted that “a trio of physicians, one being an astronaut, graded the assistant’s performance across the initial evaluation, history-taking, clinical reasoning, and treatment.” A particular astronaut/physician came to mind when I read that, and if there’s a hologram to be created, I’m sure other space fangirls out there would find him an acceptable model.

The reviewers found the model to have a 74% likelihood of correctness for the flank pain scenario, 80% for ear pain, and 88% for the ankle injury. I’m not sure what the numbers are like for human physicians in aggregate, but I’m fairly certain I’ve had a higher accuracy rate for those conditions since they’re common in the urgent and emergency care space. However, NASA notes that they hope to tune the model to be “situationally aware” for space-specific elements, including microgravity. I would hazard a guess that most physicians, except for those with aerospace certifications, don’t have a lot of knowledge on that or other extraterrestrial factors.

The article links out to a NASA slide deck. Since I do love a good NASA presentation I had to check it out. I was excited to see that there is a set of “NASA Trustworthy AI Principles” that address some key factors that are sometimes lacking in the systems I encounter. The principles address accountable management of AI systems, privacy, safety, and the importance of having humans in the loop to “monitor and guide machine learning processes.” They note that “AI system risk tradeoffs must be considered when determining benefit of use.” I see a lot of organizations choosing AI solutions just for the sake of “doing AI” and not really considering the impacts of those systems, so that one in particular resonated with me.

Another principle that resonated with this former bioethics student was that of beneficence, specifically that trustworthy AI should be inclusive, advance equity, and protect privacy while minimizing biases and supporting “the wellbeing of the environment and persons present and future.” Prevention of bias and discrimination, prevention of covert manipulation, and scientific rigor are also addressed in the principles as is the idea that there must be transparency in “design, development, deployment, and functioning, especially regarding personal data use.” I wish there were more organizations out there willing to adopt a set of AI principles like this, but given the commercial nature of most AI efforts, I can understand why these ideals might be pushed to the side.

In addition to the CMO-DA project, three other projects are in the works: a Clinical Finding Form (CliFF), Mission Control Central (MCC) Flight Surgeon Emergency Procedures, and a collaboration with UpToDate. I love a catchy acronym and “CliFF” certainly fits the bill.

I recently finished the novel “Atmosphere” by Taylor Jenkins Reid. If you are curious about the emergency procedures that a mission control flight surgeon might need to have at their fingertips, the book does not disappoint.

The deck goes on to discuss the evolution of large language models, retrieval-augmented generation, and prompt engineering within the context of the greater NASA project. It specifically notes that any solution must be on-premise, a requirement that makes sense given the communications blackouts that are inherent in space travel.
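For readers unfamiliar with retrieval-augmented generation, the core idea fits in a few lines: fetch a relevant passage from a local reference library, then ground the model’s prompt in it. The sketch below is purely illustrative. The passage text, function names, and naive term-overlap retrieval are my own assumptions, not NASA’s implementation, but it shows why the approach can run entirely on-premise with no network connection.

```python
# Minimal on-premise RAG sketch (illustrative only): retrieve the most
# relevant local reference passage by shared terms, then build a prompt
# that grounds a locally hosted model in that passage.
def retrieve(query: str, passages: list[str]) -> str:
    q_terms = set(query.lower().split())
    # Pick the passage sharing the most terms with the query
    return max(passages, key=lambda p: len(q_terms & set(p.lower().split())))

def build_prompt(query: str, passages: list[str]) -> str:
    context = retrieve(query, passages)
    return f"Using only this reference:\n{context}\nAnswer: {query}"

# Hypothetical onboard medical reference library
library = [
    "Flight surgeon procedure: secure the patient to the exam table before abdominal palpation in microgravity.",
    "Ear pain protocol: examine the tympanic membrane and document findings.",
]
print(build_prompt("How should abdominal palpation be performed?", library))
```

Real systems swap the term-overlap step for vector embeddings, but the shape is the same: because both the library and the model live on local hardware, a communications blackout doesn’t break the workflow.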

There are more details in the deck about the specific AI approach and the scenarios. I particularly enjoyed learning about “abdominal palpation in microgravity” and the need to make sure that the patient is secured to the examination table to prevent floating away. I also learned that “due to the microgravity environment, the patient’s abdominal contents may shift,” which got me wondering exactly how many organs were subject to shifting since many of them are fairly well-anchored by blood vessels and other not-so-stretchy structures.

The deck listed the three physician personas who scored the scenarios. Based on physician specialty, it’s likely that my favorite astronaut wasn’t one of them, but I was happy to see that an obstetrician / gynecologist was included.

Apparently there was a live demonstration of the CMO-DA at the meeting for which the presentation deck was created, so if anyone has connections at NASA, I know of at least one clinical informaticist that would love to see it. I’ll definitely be setting up some online alerts for some of these topics and following closely as the tools evolve.

Did you ever dream of being an astronaut, and what ultimately sidelined you from that career? Leave a comment or email me.


EPtalk by Dr. Jayne 8/7/25

August 7, 2025 Dr. Jayne

One of the hot topics around the virtual physician lounge this week was the opening of the Alice L. Walton School of Medicine in Bentonville, Arkansas. The school is named after its founder, who is an heir to the Walmart fortune.

The initial class of 48 students will be trained in a curriculum that is based on preventive care and a whole-health philosophy. The school is located on Walton family property and borders the Crystal Bridges Museum of American Art, which should provide an excellent diversion when students need time away from studying. Apparently the curriculum also includes a course that incorporates art appreciation as a way of encouraging observational skills and empathy.

Students are expected to perform community service as a way of better understanding those in their care. Other ways the curriculum differs from the standard include a focus on nutrition education, including cooking classes with teach-back sessions to patients, and time spent gardening and working on a teaching farm.

Tuition for the first five graduating classes will be covered by Mrs. Walton, who hopes that graduates will consider practicing in underserved areas. There are certainly some opportunities for service in Arkansas, which has some of the poorest health outcomes in the US.

The lure of free tuition is strong, but students are taking a bit of a gamble attending a school that does not yet have a track record for residency placements or a broad alumni network. Still, the school received over 2,000 applications for the class. Best wishes to these new students, and I look forward to seeing how the curriculum is implemented as the inaugural class progresses.

Another hot topic was a recent JAMA op-ed piece that is titled “When Patients Arrive With Answers.” It covers the evolution from patients arriving with newspaper clippings to bringing in printed results of internet searches and now arriving with AI-generated materials to discuss with their physicians.

One of my colleagues focused on a line in the piece about tools like ChatGPT: “Their confidence implies confidence.” This led to a discussion of the hallucinations that we have encountered using AI solutions, even in situations where simple fact-based questions are being posed. The author notes that they are now “explaining concepts like overdiagnosis, false positives, or other risks of unnecessary testing.”

That comment resonated with my colleagues. One noted that she feels that AI is worsening the burnout problem in her primary care practice. She must regularly defend her recommendations against AI-generated suggestions, as well as misinformation that is being provided by TikTok influencers. The author recognizes this, and notes that explaining evidence-based recommendations in contrast with patient requests isn’t a new phenomenon and encourages physicians to “meet them with patience and curiosity.” Given the tight schedules that most physicians face, I’m not sure that’s realistic.

Keeping with the theme of AI, I enjoyed this JAMA Editor’s Note on “Can AI Improve the Cost-Effectiveness of 3D Total-Body Photography?” As someone who has had entirely too many skin biopsies, this immediately caught my attention.

The authors specifically address the idea of photography for patients who are at high risk for melanoma, citing a recent randomized clinical trial published in JAMA Dermatology. The study found that although the intervention resulted in more biopsies, it didn’t increase the number of melanomas that were identified.

Another study that was also published in JAMA Dermatology looked specifically at whether 3D total-body photography is cost-effective. It found that it wasn’t, but posed the idea that with AI enhancements, it could become more financially feasible. For patients who need regular monitoring, however, I guess we’ll just have to stick with “usual care.”

I used a non-medical AI tool this week to help address a question that a family friend posed. When you’re a primary care physician, everyone assumes you know about all facets of medicine. I’m constantly getting questions about radiology reports or lab results because people “don’t want to bother the doctor.” I still find it strange that they’d rather expose their protected health information to someone they don’t know well, who is merely the daughter of a friend, but that’s often how it goes.

I was curious what the patient would have seen had they decided to just use Google or any of the AI tools out there. In this case, both Google and Copilot did a great job explaining what “pleural-based opacity” means, giving answers that were similar to my own.

The primary difference between the human answer and the AI-generated ones was in the follow-up. Where I said that the patient should follow up with the ordering physician to understand what the term means in the context of their clinical picture, both sources recommended further investigation, which most patients would interpret as needing additional testing.

I wasn’t as patient with another person who reached out for medical advice. Someone who I hadn’t seen since high school decided it was a great time to message me via Facebook and ask about various medications versus injections versus surgery for back pain. I have to admit that I took the easy way out by saying “so many factors play into the choice of treatments and it really depends on the patient,” which was as empathetic as I could get at the time.

A few days later, I plugged it into Google to see what it would provide. It did an exhaustive review of the different options and closed with this: “Important note: The choice of treatment depends on the specific nature and severity of the herniated disc, as well as individual patient factors and preferences. It’s crucial to consult with a doctor or pain specialist to determine the most appropriate course of action for your situation.” At least in this situation, I agree 100% with the Google. 

Are you a clinician who has to field medical questions from people who are not your patients? Have you considered outsourcing your advice to AI, especially if it’s outside of your typical scope of practice? Leave a comment or email me.


Curbside Consult with Dr. Jayne 8/4/25

August 4, 2025 Dr. Jayne

I recently had the opportunity to spend some time with a computer engineering student who was looking to learn about healthcare information technology. Specifically, he was curious about the role that clinicians play in the field.

We had some great conversations and the experience was very enjoyable, in large part because few of the discussions centered on AI. He has a particular interest in cybersecurity, so our initial conversations had some fairly deep coverage of the topic. He was interested in learning more about how hospitals and health systems handle the backup and recovery process, particularly when a security incident might have occurred. Based on a couple of his comments, I think I surprised him by being able to provide a deeper discussion of the topic than he expected to hear from a physician. 

It was a good opportunity to explain the field of clinical informatics and how many types of roles we fill. I’m unusual in how much experience I’ve had with infrastructure, architecture, and the nuts and bolts of interoperability. I’ve been fortunate to work with some great engineering and development teams throughout my career, picking up some interesting and unique knowledge along the way. I never thought I’d be able to have conversations about Citrix load balancing or be able to explain the role of transaction log shipping as part of a disaster recovery solution, but you never know where your career is going to take you.

In large part, I learned about those things not because I necessarily wanted to, but because I had to. The first EHR project I was involved in did not go well. A lot of IT folks were techsplaining, which didn’t help me solve the problems that were interfering with my ability to deliver high-quality care.

Although I think that many of them were just talking in their everyday language — similar to how physicians talk among themselves, without trying to leave me out of the conversation — I experienced more than one situation where an IT staff member was treating me in a way that was equivalent to patting me on the head and saying, “Don’t worry about this, little lady.”

After one of those encounters, I decided that I would need to hold my own, so I started doing a lot of reading. I figured if I could learn biochemistry and the complexities of the human nervous system I could certainly learn some of this new language and how all the technology was supposed to be working compared to how it was actually performing in the field.

Thinking about how information access has changed, learning about those domains would be a lot easier now than back in the days when only 5% of physicians were using electronic health records. You couldn’t just pop into your web browser and find articles about implementing systems in hospitals, because we were just getting started. Meaningful Use wasn’t yet a thing, and those of us that were trying to bring up systems were doing it because we thought we could revolutionize patient care, not because someone was making us do it.

Hospitals had electronic laboratory and monitoring systems and of course billing, but computerized order entry wasn’t even on the radar of physicians. Heck, we couldn’t even print patient labels from the computer system at one of my hospitals. They were still using Addressograph cards to add patient information to the paper used for writing daily progress notes.

We went down the internet rabbit hole as I was trying to explain that piece of equipment to my student. I wish I had a picture of the look on his face when I explained how a similar technology was once used to process credit cards at businesses. Apparently you can buy a vintage credit card imprinter machine via various online resale sites, for those of you who miss the very specific noise made when the charge card was pressed under the carbon paper.

That led to a good conversation around the idea that 40 years ago, we had no frame of reference for the technologies that we would be using today. No one would have guessed that we could simply tap our credit cards on a machine to pay, let alone load that credit card information into a palm-sized phone and use it to pay as well. I can’t even imagine how things will work in 40 years, and I hope that when he’s later in his career, he will have the experience of being able to share stories of how things used to be with someone who is just starting out.

We also had some interesting conversations about healthcare in general, and particularly around healthcare finance and how the revenue cycle works. In my opinion, it’s one of the messier aspects of the US healthcare system, and opportunities exist to make it better.

We had a good conversation around how claim adjudication works and why it’s rare in our area to see an organization that is doing real-time claims adjudication. Some of the practices that I go to don’t even collect your co-pay during the office visit, so I can’t imagine what a big shock it would be to use a system like that.

I also ended up teaching him how to read an Explanation of Benefits statement, which I think was an eye-opener, especially for someone who doesn’t have a lot of patient-side experience in his relatively brief adulthood.

I enjoyed learning about some of the non-healthcare work that the engineering student has done as he works toward his degree, as well as the supplemental activities that are available to students today that didn't exist when I was in school. His school has competitive rocketry, drone, and Mars rover teams where students can apply what they're learning as early as their first semester. We had to wait until our junior year to really have experiential learning opportunities, and they certainly weren't as cool as any of those.

Although I tried to bring healthcare and healthcare technology to life, I’m not sure it’s going to be as cool as some of the other career options that will undoubtedly be available to him, especially if he’s leaning towards cybersecurity and cryptography. He’ll be back next week, and I plan to cover topics including robotics, prosthetics, and human-computer interaction. I might still be able to convince him that healthcare can be cool.

What do you think are the coolest technologies we’re using in healthcare, beyond AI? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 7/31/25

July 31, 2025 Dr. Jayne


There was some good discussion around the virtual physician lounge this week as one of my colleagues shared a recent article in Nature Scientific Reports about using AI to diagnose autism spectrum disorder and attention-deficit / hyperactivity disorder in children and adolescents.

Diagnosing these conditions can be challenging for primary care physicians who have limited time with patients and for parents who might wait months for their child to receive an appropriate assessment. In my city, the wait for a non-urgent assessment by a child and adolescent psychiatrist can be up to a year. Delayed diagnosis leads to delays in care.

The study still needs refinement, but preliminary results show that a sensor-based tool can suggest a diagnosis in under 15 minutes with up to 70% accuracy. The researchers began with a hypothesis that diagnostic clues can be identified in patients’ movements that are not perceptible to human observers, but can be detected by high-definition sensors. The authors catalogued movement among neurotypical subjects and those with neurodevelopmental disorders to inform a deep learning model. The movements were tracked by having the subjects wear a sensor-embedded glove while interacting with a target on a touch screen. The sensors collected movement variables such as pitch, yaw, and roll as well as linear acceleration and angular velocity.
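For readers curious about the mechanics, the glove's movement variables would typically be condensed into summary features before reaching a model. Here's a minimal, hypothetical sketch of that kind of feature extraction (the study's actual pipeline isn't described here; the function and data are purely illustrative):

```python
from statistics import mean, stdev

def summarize_movement(samples):
    """Reduce a time series of glove readings -- each a
    (pitch, yaw, roll, linear_acceleration, angular_velocity)
    tuple -- to per-channel mean and standard deviation features."""
    channels = list(zip(*samples))  # transpose to one stream per variable
    return [mean(c) for c in channels] + [stdev(c) for c in channels]

# Hypothetical stream: 4 time steps x 5 movement variables.
stream = [
    (0.1, 0.0, 0.2, 9.8, 0.5),
    (0.2, 0.1, 0.1, 9.7, 0.6),
    (0.0, 0.1, 0.3, 9.9, 0.4),
    (0.1, 0.2, 0.2, 9.8, 0.5),
]
features = summarize_movement(stream)
print(len(features))  # 10 features: 5 means + 5 standard deviations
```

A real deep learning approach would likely consume the raw time series directly rather than hand-built summaries, but the idea of turning imperceptible movement differences into numbers a model can compare is the same.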

I admit I was having flashbacks to some of my physics coursework as I read the paper, but it still kept my attention. The authors plan to continue validating the model in other settings, such as schools and clinics, and over longer time periods. The study has some limitations, chief among them its size: it had only 109 participants, and some of those had to be excluded from the final analysis for reasons including inability to complete the exercise, motor disabilities, or problems with the sensors.

The participants were also a bit older than the typical age when diagnosis occurs, which could limit its broad applicability. Still, the ability to detect condition-related markers in an objective way, as opposed to having to use behavioral observations, would be a big step forward, especially if the study can be powered to significantly increase the sensitivity and specificity of the model.


Quite a bit of conversation occurred around a recent meta-analysis that looked at the number of steps adults should take in a day. Most of the patient-facing clinicians I know don't have trouble getting their steps in on regular workdays, although some specialties, such as anesthesiology and pathology, have a fair amount of seated time. A couple of folks I know are obsessed with getting a minimum of 10,000 steps each day, however, a threshold that the recent article suggests is less important than commonly believed.

The authors looked at studies published since 2014 and concluded that individuals who got between 5,000 and 7,000 steps per day had a significant risk reduction for cardiovascular disease, dementia, and falls as well as all-cause mortality.

That’s not to say there’s a downside to getting 10,000 steps a day, but no clear evidence supports that specific number across the board. That’s good news for those of us on the IT side of the house who might spend less time ambulating than we’d like.


While we’re at it with our virtual Journal Club, another study that caught my eye this week looked at the benefits of the four-day work week. The authors looked at 141 companies that allowed employees to reduce workdays without a corresponding change in pay and found that the practice decreased employee fatigue, reduced burnout, increased job satisfaction, and improved efficiency compared to 12 control companies.

The process wasn’t as simple as just trimming days, however. Companies had to commit to some level of reorganization beforehand, focusing on efforts to build efficiency and collaboration prior to embarking on the six-month trial. There were 2,896 employees involved across companies in the US, UK, Australia, Canada, Ireland, and New Zealand.

I’ve worked with a couple of vendors that have instituted this practice, and their employees seem satisfied with it. I enjoyed living vicariously through the account reps who used their long weekends for camping and backpacking.

One of the companies sold a patient-facing technology with 24×7 support, so extra coordination was involved to ensure that those workers had adequate days off even though the rest of the company was closed on Fridays. I’ve also seen some healthcare organizations do this with their management teams, although it doesn’t seem that big of a stretch when the organizations already had hundreds of workers whose routine schedules involved three 12-hour shifts and leaders were already used to providing management coverage 24×7.

From Yes, Chef: “Re: this week’s Morbidity and Mortality Weekly Report. I would have loved to have been part of the public health informatics team crunching that data.” The report details an incident that involved a pizza restaurant not far from Madison, WI last October. Apparently 85 people experienced THC intoxication after eating from the restaurant, which shared kitchen space with a state-licensed vendor that produces THC edibles. When the pizza makers ran out of oil, they used some from the shared kitchen, unknowingly putting some “special sauce” into their dough. Public health informatics is one of my favorite subdisciplines of clinical informatics, so here’s a shout-out to all the disease detectives out there who solve mysteries like this one every day.


I’m trying to slow the volume of emails hitting my inbox, and HLTH seems to be one of the biggest offenders. The organization has been averaging three emails a day over the last month and attempting to manage my preferences hasn’t seemed to make a difference. Before clicking delete, I looked at the registration options for this year’s conference. It looks like it’s $2,995 and goes up to $4,100 next week.

I get that it’s an all-inclusive registration and includes two meals on most days, but it’s still a large amount to ask companies to spend on top of travel and lodging. For the average consulting CMIO, unless I can get some good meetings scheduled, the price isn’t worth it. Of course, media and influencers can apply to attend for free, but that’s hard to do when one is an anonymous blogger.

If you’re experiencing an overloaded inbox, who is the biggest offender? Have you found unsubscribing helpful or do you have other strategies to share? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 7/28/25

July 28, 2025 Dr. Jayne 5 Comments


Several people have asked for my opinion about Bee, which Amazon is acquiring. The company makes the Pioneer, a wearable that records and transcribes your day. It captures not only what you say, but also the conversations of those around you. It tries to entice users by providing summaries of the day, reminders, and other suggestions from within its companion app.

Unsurprisingly, the solution also requests access to all of the user’s information, including email, contacts, location services, reminders, photos, and calendars, in an attempt to create “insights” as well as a history of the user’s activities.

The device costs $50 (a cost that can be avoided by using the Apple Watch app), plus a $19 per month subscription. The solution uses a mix of large language models to operate, including ChatGPT and Gemini.

A quick visit to my favorite search engine pulled up a number of pages that mention the device. Some reports say that it isn’t able to differentiate between the wearer’s conversations and what they were watching on TV or listening to on the radio.

I wasn’t surprised at all to hear that significant privacy concerns have been expressed. The company keeps transcripts of user data, although it doesn’t store the audio. I laughed out loud when I read a quote from an Amazon spokesperson who said that Amazon “cares deeply” about user privacy and plans to give users more control over how their data is used after acquiring the startup.

Along with anyone who has had to go through multiple levels of annoying menus (that seem to change regularly) while trying to rein in their Alexa device, I’m not buying it. Although Amazon claims to not sell customer data to third parties, they have plenty of uses for it in-house. Anyone who visits Amazon can see how their targeted marketing winds up in different places.

Putting on my end user hat, I have to say this is one of the more ridiculous tools, offerings, or solutions that I’ve seen. However, there must be a huge number of people who disagree with me, because if it weren’t a potential moneymaker, I don’t think Amazon would be acquiring it.

What if the user is located in a two-party consent state and is now recording conversations without notifying the other parties? I found a funny video about the device, where Wall Street Journal reporter Joanna Stern said it “turns you into a walking wiretap.” She also asked the device to do an analysis of her use of swear words over the course of the month and shared her statistics in a funny recap.

The company’s website plays a pretty mean game of buzzword bingo. Examples: it “turns your moments into meaning” and “learns and grows with you” as it “sits quietly in the background, learning your patterns, preferences and relationships over time, building a deeper understanding of your world without demanding your attention.”

The website shows an example of a user and their team “discussing ideas for the next product release.” That’s right, you can wear it to the workplace and have it collect all the company’s intellectual property over the course of the business day. I’m betting that most companies’ employee handbooks don’t have language that addresses this. If I were in the corporate compliance department at any company with employees, I’d be sending out a memo ASAP.

The website also gives examples of how the device and its app can dispense parenting advice and manage issues such as “dealing with resistance to potty training and handling emotional outbursts.” I’m sure that pediatricians and family physicians will be thrilled to review the device’s recommendations at well-child visits (sarcasm intended) along with everything else they need to cover.

The website also had the device’s terms and conditions, which were 10 printed pages long. Here are some of my favorite highlights:

  • By accessing the device, you agree that you have read, understood, and agree to be bound by all the terms, which can be unilaterally altered at any time and for any reason. The company will alert users simply by updating the “last updated” date on the terms page, and users “waive any right to receive specific notice of each such change” and accept the “responsibility to periodically review these Legal Terms to stay informed of the updates.”
  • Bee specifically calls out in the second paragraph that it offers no HIPAA protection.
  • The user accepts the responsibility to be compliant with any applicable laws or regulations and agrees to terms regarding the collection of data with respect to minors.
  • Users are prevented from disparaging the company or its services.
  • Users agree not to use information obtained “in order to harass, abuse, or harm another person.”
  • Users agree not to “harass, annoy, intimidate, or threaten any of our employees or agents engaged in providing any portion of the Services to you.” The use of the word “annoy” caught my attention, since I can’t imagine an employee engaged in customer service or support who doesn’t find at least some percentage of the users with whom they interact to be annoying.

I found some user comments on Reddit and the following phrases were some of my favorites:

  • I made the mistake of using the app to retrain my voice, and since then it doesn’t think I’m EVER talking, everything I say is recorded as “unknown”. So instead of thinking other people were me, now I’m not even me.
  • While the little convo summaries are often amusing, now I am trying to figure out how this thing is supposed to be useful.
  • Users accused the system of “trying to gaslight me.” Some of us get enough of that in our daily lives, so we don’t need an AI tool to contribute as well.

The website says the device is sold out, although the company is taking back orders and plans to ship new units by September. That means either their marketing team is trying to create some FOMO (fear of missing out) or that lots of people are ready to take the plunge, privacy be damned.

What do you think about the Bee Pioneer? Would you consider wearing one? Are you taking steps to specifically ban it and similar devices and applications from your workplace? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 7/24/25

July 24, 2025 Dr. Jayne


JAMA Network Open recently published an Original Investigation titled “Patient Care Technology Disruptions Associated With the CrowdStrike Outage.” The UCSD authors found disruptions at 759 of 2,200 hospitals during the July 19, 2024 outage, 239 of which involved internet-based services that support direct patient care. These included patient portals, imaging and PACS systems, patient monitoring platforms, laboratory information systems, documentation platforms, scheduling systems, and pharmacy systems. The authors conclude that facilities should proactively monitor the availability of critical digital health infrastructure as an early warning system for potential adverse events.
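The proactive monitoring the authors recommend can start as something as simple as a periodic availability probe against each critical service. A minimal sketch, with hypothetical placeholder endpoints rather than real hospital systems:

```python
import urllib.request

# Hypothetical endpoints; a real deployment would list the
# organization's actual portal, LIS, PACS, and pharmacy services.
SERVICES = {
    "patient_portal": "https://portal.example-hospital.org/health",
    "lab_system": "https://lis.example-hospital.org/health",
}

def check(url, timeout=5):
    """Return True if the endpoint answers with HTTP 200 in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # DNS failure, timeout, connection refused, non-200
        return False

for name, url in SERVICES.items():
    print(f"{name}: {'UP' if check(url) else 'DOWN'}")
```

In practice, this kind of probe would run on a schedule and page someone when a service flips to DOWN, which is exactly the early-warning posture the study argues for.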

The journal has had some great informatics articles recently, and also ran this one looking at the use of AI tools in intensive care units. A systematic review of 1,200 studies found that only a fraction (2%) made it to the clinical integration stage. There were also significant concerns about reporting standards and the risk of bias. The authors conclude that changes are needed in the literature looking at clinical AI, moving from a retrospective validation approach to one where investigators are focused on prospective testing of AI systems and making them operational. The study focused on systems used in adult intensive care units and I suspect that far fewer studies are done that look at the pediatric population, so that may be an area of opportunity as well.

From Savannah Banana: “Re: stadium naming rights. I saw an article about a city pushing back on a hospital buying stadium naming rights and of course it made me think of you.” Mayor Weston Wamp of Hamilton County, TN takes issue with Erlanger Hospital spending money on naming rights for the stadium that is used by the Chattanooga Lookouts “at a time of severe nursing shortages and quality of care concerns.” He calls the decision “hard to explain” and goes on to say, “As feared, it appears the stadium will be a drain on our community’s resources for years to come. Before I was elected, the Lookouts convinced city leaders to give the team all revenue from naming rights on this publicly owned facility. Now, in a sad twist, our local safety net hospital will be footing the bill for the Lookouts’ $1 million annual lease payment.”

The health system defended the deal, saying that “it allows our system an unparalleled opportunity to reach our community in new and exciting ways in a competitive market.” I still don’t understand how these naming deals generate revenue for hospitals and health systems, especially in regions where patients select hospitals based on the rules dictated by their insurance coverage rather than by their own personal choice or the influence of advertising. If some of our readers have insight, feel free to educate me.

Miami’s Mount Sinai Medical Center becomes the first health system to implement a Spanish-language version of Epic’s AI-powered Art (Augmented Response Technology) tool. Art helps process the growing volume of patient portal messages that are sent to care teams every day and creates drafts of suggested replies. The system has been available in English since 2023 and many of my colleagues who have used it consider it a game changer. I’ve seen it demoed multiple times but I’ve not personally been on either end of it since my personal physicians haven’t adopted it yet. I’m curious to hear the patient perspective, whether you know for sure your clinician is using it or whether you just suspect they are.


People are talking about Doximity’s free GPT. I tried it once a while back, but I can’t remember whether I was impressed by it. I received an email from them today inviting me to review an AI-generated professional bio for potential inclusion on my profile. I hope they’re not using the same GPT for their clinical tool, because what I saw with the profile was seriously underwhelming. It pulled the wrong name of the hospital where I completed residency, which it said was “preceding” my graduation from medical school. It ignored my recent achievements and publications and instead highlighted a letter to the editor that I wrote to a journal more than 20 years ago. I clicked the “don’t add” button on the entire thing. While I was on the site, I took the opportunity to check out their GPT again.

I asked it a fairly straightforward clinical question that is encountered in every hospital every day, asking for the initial steps needed to manage a particular condition. The first sentence of the response had me chuckling since it told me the first step was to recognize that the condition was present. Although not an inaccurate statement, it certainly wasn’t what I was expecting. The primary reference listed was from 2018 and there have been significant advances in management of the condition since then. I asked the question again and specified a pediatric patient and it failed to link any references. Based on those factors, I can say that I’m officially underwhelmed.


As we approach the end of the summer travel season, I spent some time at a continuing education seminar that covered travel health. As one would expect, a lot of the content covered vaccinations and other forms of prevention, as well as a review of the most common diseases. As someone who focuses primarily on clinical informatics these days, I admit I wasn’t current on the status of some of the longer-known diseases, but I held my own in the discussions of those that have appeared more recently. Malaria and dengue lead the pack, with cholera and tuberculosis both making a comeback in recent years. Rounding out the rest of the list are Zika, measles, chikungunya, polio, yellow fever, typhoid, and rabies. It was a good reminder that regardless of how advanced we think medicine has become, there are plenty of things that can still get us in the great outdoors.

Have you ever had a travel medicine consultation prior to a trip? Did you find it valuable? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 7/21/25

July 21, 2025 Dr. Jayne 1 Comment

Mr. H is running a poll that asks, “Is it ethical for doctors to prescribe the drugs of their pharma sponsors to people who seek specific treatments?” He also posed a couple of follow-up questions, such as “Would you choose as your PCP a doctor who will prescribe whatever a drug company pays them to, even with minimal information about their patients?” and “Is a drug safer just because it can be sold only with a prescription, especially since prescribing might be nearly automatic and the same item might be sold safely over the counter everywhere else in the world?”

I like the response choices that Mr. H included in the poll. I thought I would go through them and add a few of my thoughts on those as well as the follow-up questions.

No. The patient should see their regular doctor. As a primary care physician, I agree with this one in my heart. Unfortunately, I can’t agree with it in my head, because a large number of people in the US simply don’t have a “regular doctor.”

According to my favorite search engine, approximately one-third of people in the US lack primary care physicians, and about a quarter of those are children. Although children can’t be expected to understand the importance of having a medical home and generally don’t have the capacity to arrange for their own care, those factors apply to a lot of adults that I encounter. Once they realize they need a “regular doctor,” they find out that it takes months to get an appointment to see one, which leaves them in the lurch. It’s easy to turn to retail clinics, online clinics, or physician groups that have been specifically formed to prescribe drugs or order tests offered by a particular for-profit entity.

No, unless they review the patient’s medical records. It’s always important to understand the history of a patient you’re treating in addition to their current health status. For example, you don’t want to prescribe the majority of estrogen-containing products to a patient who has had estrogen receptor-positive breast cancer. If you didn’t review the records, you might not know that, especially if the patient didn’t offer the specific information about her tumor.

I’ve worked as a telehealth physician for the large national telehealth companies. Most of the time in those situations, you don’t have the patient’s records. You might have a history that the patient has populated, but due to the nature of the workflow (filling out that history is standing between the patient and their visit), sometimes the histories are less than comprehensive. Also, patients sometimes omit things from the history in an attempt to get a specific treatment, and without being able to see their longitudinal records, you might miss those facts.

No. It drives costs up for everyone. This response is currently scoring rather low, but it’s an important one. Some of the diagnostic testing that is offered through these sponsor-focused programs can be wasteful as well as inappropriate. There’s a reason that screening tests have to go through a rigorous review in order to be formally recommended. Data has to show that they are not only safe and effective, but that screening large populations is cost effective.

In looking at some of the drug-related telehealth programs, available generic drugs are often just as effective as those manufactured by the program sponsor. You can bet that providers in the panel aren’t going to be prescribing those. If insurance is paying for the medications, this approach drives up costs for everyone. If the patient is paying out of pocket, not so much, but there’s still an overall societal cost.

No. It’s a prescriber lawsuit waiting to happen. I’m a little on the fence about this one. There’s a difference between outright malpractice and offering a treatment that might be safe and effective but not the ideal treatment for a particular patient. One of the things that physicians are encouraged to do is to take the personal preferences and cultural beliefs of our patients into account before entering into shared decision-making with them.

If that sounds like a mouthful, that’s because it is. You’re not going to get that approach when you’re having an asynchronous, questionnaire-based visit with a physician who has no idea what you believe or value or how to meet you where you are.

Yes. It’s legal and what patients want. I’m going to channel millions of parents of teenagers here. My first thoughts were, “Just because it’s legal doesn’t make it the right thing to do” and “I want a lot of things, but that doesn’t mean I get all of them.”

I’ve treated many patients who think they want something. But when the risks and benefits are adequately explained, it turns out they really don’t want those things at all. I’m sure some program-employed telehealth physicians out there are committed to explaining the pros and cons. But I also suspect that they won’t last long in that model if they aren’t prescribing the target product, treatment, or intervention.

Of course, this happens during in-person visits as well. I once worked for an urgent care with an in-house pharmacy, and we were strongly encouraged to write lots of scripts to treat patient symptoms. Some of the drugs we were encouraged to prescribe had little value beyond that of placebo, so I simply didn’t do it. Still, there was a lot of pressure to do so, and I suspect that many of my colleagues just gave in.

Not sure, but it’s puzzling that doctors do this. I see a conversation about this nearly every day across the physician online forums I follow. A lot of reasons are cited for working in these models. Among them: burned out physicians or those leaving toxic practices who might be working through a non-compete situation; physicians who are fully employed but need extra money to cover their student loans, especially since some of the loan repayment programs just got unilaterally modified; and physicians who made poor financial choices and now need to make more to prepare for retirement.

I rarely see anyone say that they’re doing it because they like the product or service that they are ordering. Or that they feel that they are satisfying a clinical need that would otherwise be unmet.

As for Mr. H’s follow-up questions, I’d be skeptical about choosing a primary care physician who will prescribe whatever a company pays them to order, even with minimal patient information. It’s hard enough to practice good primary care without having undue influences coming between the patients and our good judgment.

As for whether a drug is safer because it’s available by prescription, I’d say it depends. Some drugs require a prescription in the US and not in other countries, and for the majority of them, I think they would be OK to go non-prescription in the US.

However, it’s important to understand the environment in which those drugs are sold without a prescription. Patients in those countries may have higher health literacy and a greater sense of personal responsibility, and I’ve experienced pharmacists there who are more accessible to counsel patients about these selections.

Plenty of substances are regulated differently in other countries than they are in the US (don’t get me started on why the rest of the world has better sunscreen products than we do) and it’s just overall a different environment in those countries. Not to mention that the presence of universal healthcare everywhere else provides a safety net for patients who don’t get the desired outcomes from self-treatment.

It will be interesting to see the final poll results when they come in. Feel free to leave a comment when you vote on the poll, and as always, you are welcome to leave a comment here or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 7/17/25

July 17, 2025 Dr. Jayne 2 Comments


It’s been one of those weeks where I’m pulled in so many directions I’m not sure which way I’m supposed to be going. Just when I think I’ve finished something, another obstacle turns up in my path and I have to swerve.

I’ve attended enough workplace resilience seminars over the years that pivoting from crisis to crisis seems second nature, even if it’s not fully in my comfort zone. Still, there’s something to be said for the excitement of doing the corporate equivalent of a “cattle guard jump” from time to time, so I’m happy to keep on keeping on.

From Universal Soldier: “Re: LLMs replacing physicians. What’s your take on projects like this?” Headlines abound for this kind of work, especially when the media talk about models achieving “diagnostic accuracy” or outperforming the average generalist physician.

The Microsoft AI Diagnostic Orchestrator (MAI-DxO) is claimed to deliver greater accuracy at lower diagnostic cost than physicians evaluating the same cases. I don’t disagree that we need to figure out how to do workups more efficiently and less expensively, but I wonder about the ability to translate this work to bedside realities. Let’s inject some of the realities of the current state of medical practice into the model and see if it can come up with solutions.

We can add a medical assistant who is stuck in traffic and doesn’t arrive in time to room the first patient, increasing everyone’s anxiety level as the office tries to kick off a busy clinic session when they’re already behind before they even start. As the model suggests tests to order, let’s throw in some cost pressures when those interventions aren’t covered by insurance or the patient doesn’t have any sick time to cover their absence from work. Add in a narrow network that makes it nearly impossible to refer to a subspecialist even when it’s needed. Let’s add an influencer or two worth of medical misinformation to the mix. Now we’re getting closer to what it’s really like to be in practice.

It’s great to do tabletop exercises to see if we can make clinical reasoning better. But unless we’re also addressing all the other parts and pieces that make healthcare so messy, we’re not going to be able to make a tremendous difference. I would love to see an investigation of whether physicians can improve their clinical reasoning simply by having more time with the patient, or fewer interruptions when delivering care, reviewing test results, and formulating care plans.

I would also like people to start talking more seriously about how care is delivered in other countries, where better clinical outcomes are achieved while spending less money. Maybe it’s just easier to talk about AI.


An informatics colleague asked me what I thought of the Sonu Band, a therapeutic wearable that promises “clinically proven, drug-free vibrational sound therapy” for the symptoms of nasal allergies. The band is used in conjunction with the Sonu app, which uses the user’s smartphone to scan their face and combines that with voice analysis and a symptom report to personalize the therapy.

The company says that the facial scan produces skeletal data that is used to create a digital map of the sinuses. It then uses proprietary AI to calculate optimal resonant frequencies for treatment.

Having spent most of my life in the Midwest, I can attest that allergy and sinus symptoms seem to be nearly universal. I reached out to my favorite otolaryngologist for an opinion, and although he pronounced it “fascinating,” he hadn’t heard of it. If it works as well as the promotional materials say, I could imagine it flying off virtual shelves. If you’ve given it a whirl or seen it prescribed in your organization, we would love to learn more.

The American Academy of Family Physicians and its Family Practice Management journal recently reviewed some AI-enhanced mobile apps that target primary care physicians. This was the first time I had seen their SPPACES review criteria:

  • S – Source or developer of app.
  • P – Platforms available.
  • P – Pertinence to primary care practice.
  • A – Authoritativeness, accuracy, and currency of information.
  • C – Cost.
  • E – Ease of use.
  • S – Sponsor(s).

Even if you’re not in primary care (in which case you can feel free to omit the second “P”), this is a good way to encourage physicians to think about the sources of information they use in daily practice.

Although it’s not part of the review criteria, the author also encourages physicians to be aware of whether their tools are HIPAA compliant and whether they’re entering protected health information into third-party apps. He also notes that none of the apps reviewed is a substitute for physician judgment.

I would also consider adding an element to the “cost” criteria that encourages users to think about how the app is making money. People seem quick to overlook third parties that are monetizing user information, if they’re even aware of it happening at all.

I will use this as a teaching tool with students and residents, especially since they’re quick to download new apps without doing a critical review first.

I’m not sure how I missed this one, but OpenEvidence filed a complaint against Doximity last month, alleging that Doximity’s executives impersonated various physicians and used their NPI numbers to gain access beyond what they should have as lay people. Such activities are prevented by the OpenEvidence terms of use, assuming anyone actually reads them (they’re included in the complaint as Exhibit A if you’re interested).

The complaint alleges “brazen corporate espionage” and points out that Doximity “has built its brand on physician trust and privacy protection.” The defendants are alleged to have used prompt injection and prompt stealing to try to get at proprietary OpenEvidence code.

Pages 3 and 4 of the complaint describe a few examples of attacks in detail. The complaint notes that “this case presents the rare situation where defendants’ illicit motives and objectives are captured in their own words.” I always love reading a good court document and this one did not disappoint.

What do you think about corporate espionage? Can companies truly protect their intellectual property anymore? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 7/14/25

July 14, 2025 Dr. Jayne 3 Comments

There’s always a lot of buzz around wearables. The majority of US adults have a smartphone in their pocket or purse, so a treasure trove of data can be collected without adding a secondary device.

Most of the people I talk to have no idea how much information is being captured by the apps on their phones, let alone the types of entities to which vendors are selling their personal data. Nearly everyone I know leaves their location services on 24×7. About half the people I interact with, along with their families, use tracking apps to keep up with each other’s location.

An article in JAMA Network Open this week caught my eye with its title, “Passive Smartphone Sensors for Detecting Psychopathology.” The authors analyzed two weeks of smartphone data from 550 adult users to see if “passively-sensed behavior” could identify particular psychopathology domains. They noted that this is important work because smartphones can continuously detect behavioral data in a relatively unobtrusive way.

They had two main objectives: first, to determine which domains of psychopathology can be identified using smartphone sensors, and second, to look for markers of general impairment and of specific transdiagnostic dimensions such as internalizing, detachment, disinhibition, antagonism, and thought disorder.

Data were pulled from global positioning systems, accelerometers, motion detection, battery status, call logs, and whether the screen was on or off.

The authors were able to link nearly all the domains with specific sensor-captured behaviors, creating “behavioral signatures” by measuring things like call volume, mobility, bedtime, and time at home. Specifically, they were able to link disinhibition with battery charge level and antagonism with call volume.
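For readers curious about what a “behavioral signature” might look like computationally, here is a minimal sketch. The event names, log format, and the single feature shown are my own illustrative assumptions, not the study’s actual data schema or methods:

```python
from datetime import datetime

# Hypothetical passive-sensing log: (ISO timestamp, event type).
# Event names here are illustrative assumptions only.
log = [
    ("2025-07-01T08:15:00", "call"),
    ("2025-07-01T09:00:00", "screen_on"),
    ("2025-07-01T23:30:00", "screen_off"),
    ("2025-07-02T10:05:00", "call"),
    ("2025-07-02T10:45:00", "call"),
]

def behavioral_signature(entries):
    """Summarize simple per-day behaviors, such as daily call volume."""
    calls_per_day = {}
    for ts, event in entries:
        day = datetime.fromisoformat(ts).date()
        if event == "call":
            calls_per_day[day] = calls_per_day.get(day, 0) + 1
    days_observed = {datetime.fromisoformat(ts).date() for ts, _ in entries}
    return {
        "mean_daily_calls": sum(calls_per_day.values()) / max(len(days_observed), 1),
    }

print(behavioral_signature(log))  # prints {'mean_daily_calls': 1.5}
```

A real study pipeline would obviously add many more features (mobility radius, time at home, estimated bedtime) and link them statistically to assessment scores, but the basic idea of reducing raw sensor events to per-person behavioral summaries is the same.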

Based on the phone-related behaviors I observe, it would be interesting to see if my gut feeling about a user’s psychopathologic situation is accurate. I would also be curious to know if there is a difference in the data looking at other age groups that weren’t studied, such as teens or the elderly. Although the study was done on adults, the mean age was 38 with a standard deviation of 8.8, so there is certainly some opportunity to look in detail at other groups.

I was recently with a large group of individuals in their 70s. Their visible phone behavior would rank them right up there with the teenagers I know.

Reading about this made me think about all the data that companies are collecting now that they’re focusing on potentially eliminating remote work and ensuring high levels of productivity. There are plenty of stories out there about people using so-called “mouse jigglers” to make it look like they’re working so that their computers don’t go to sleep. Of course, companies that restrict what kinds of USB devices can be plugged in might be attuned to that, and there are more sophisticated monitoring tools that look at keyboard usage patterns and can detect if something shady is going on.

Remote work isn’t the only place people might be slacking off. I see plenty of people who have in-person jobs who constantly use their phones for potentially non-work activities. Many apps might be adjunctive to job roles and responsibilities, but I see a lot of online shopping and social media use as well.

I’d love to see some robust research that looks at communication and collaboration strategies within an organization to see which workers might thrive with one style more than another. I’ve worked in organizations that have documented communication plans that make it clear what kinds of work should be conducted using meetings, phone calls, email, instant messaging, and texting, but those kinds of policies are few and far between these days.

Even without a written policy, workplace culture defines how things get done, but when you’re a new person, a consultant, or a contractor, it can be difficult to try to figure that out unless someone clearly explains the rules of engagement.

I worked in one organization that basically used Slack as the connective tissue of the organization. I have to admit that I struggled there. Every time I asked where to find a resource, the answer was, “It’s in Slack,” but it didn’t seem like there was any rhyme or reason to how things were organized. More often than not, important documents were accessed through links within a message thread rather than being in a “files” area or in specific channels that made sense to those of us who were new.

A tremendous amount of work seemed to get done via direct messages rather than channels, making it even more difficult to find things. At one point, during a critical issue with a release, I had a separate cheat sheet of which conversations to look through when I needed certain kinds of information, since I had an endless list of direct message conversations with various combinations of the same group of people.

When I asked if there was any team- or company-level documentation on how it was all supposed to work, I felt like I was revealing myself as someone who simply couldn’t keep up. As a consultant, I had multiple conversations with leaders at the company about how this was working and how I had seen it contribute to process defect rates and rework. I also knew of plenty of examples at that company where people downloaded documents to their own hard drives so they could find things later, but who then ended up working off of outdated specifications since they were using local copies rather than shared ones. Not to mention that if people can’t find clear information, they are more likely to improvise or otherwise wing it, which is generally a bad idea when you’re building healthcare software.

If you could use data to find scenarios where someone was working on a deliverable – say, a slide deck or a document — and then spent 10 minutes rapidly flicking through various file structures or messaging platforms, opening and closing multiple documents, and doing web searches before finally returning to the document, it could be an indicator of disordered work patterns that might benefit from some kind of intervention.

If you see multiple people on a team with these work habits, that may be indicative of the need for a different kind of organizational structure for work product and other materials. I think those patterns are much more important to explore than knowing whether someone’s mouse is moving.

What do you think about looking at smartphone or other device data to learn more about people’s behavior and the potential for psychopathology? Would having more information make things better or potentially make things worse? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 7/10/25

July 10, 2025 Dr. Jayne 3 Comments

I finally had time to dig into the recent paper about the “accumulation of cognitive debt” that happens when using AI assistants.

As a proud member of Generation X, I first experienced those rites of passage called “the five-paragraph essay” and “writing a research paper” in middle school. My English teacher — this was before everyone called it Language Arts or something else more inclusive — made us create a 3×5 index card for every reference. We had to have cards for every quote or idea we planned to use. For those of us whose brains were wired for reading and writing, it was a painful process. We just wanted to jump in and start writing. However, for others it was an exercise in organizing thoughts and making sure to have enough materials to support their conclusions.

Fast forward to my university days, when I was a teaching assistant for an English 101 “Thinking, Writing, and Research” class. Those pesky index cards were still recommended, although not required. Personal computers had just made their way into dorm rooms, but as I graded research essays, I could easily tell who knew how to organize their thoughts and who was simply phoning it in.

The professor I worked with always selected obscure topics for the assignments, so it was nearly impossible to copy the work of others. That made grading all those essays quite an adventure. This was the era when those with computers had to figure out how to best use them on an as-you-go basis, because there certainly weren’t any classes offered that explained the best ways to use various pieces of software. Subsequent generations always had access to computers for schoolwork, so I’m not sure how much of the process aspect of writing is still taught versus enabling people to just sit down at the keyboard and get to it.

Within that context, I started reading the paper. It looked at how three cohorts completed an essay writing task: LLM-only, search engine-only, and brain-only groups completed three writing tasks using their assigned method. They then completed a fourth task in which some participants were crossed over to another group. The participants were monitored with electroencephalography (EEG) to assess cognitive load during the tasks. Additionally, essays were assessed using natural language processing and scored by both a human teacher and an AI judge.

The authors concluded that the brain-only group had the strongest brain connectivity, followed by the search engine group. The LLM group had the weakest connections. Additionally, participants in the LLM group had lower self-reported ownership of their essays and had difficulty quoting their own work. Ongoing analysis showed that “LLM users consistently underperformed at neural, linguistic, and behavioral levels.”

The authors commented, “These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning.” Given some of the personal statements that I’ve read for medical students over the last two years, there’s so much LLM use that it’s hard to get a feel for who the candidates really are as people. Maybe this research will convince folks to dial it back a bit.

I enjoy learning about new players on the healthcare IT scene. One that I’ve been watching in recent months is CognomIQ. The company’s semantic-based data management solution has been optimized for healthcare, in particular for research institutions.

The company originally caught my eye when I heard that industry veteran Bonny Roberts had joined the team as VP of customer success. She’s a longtime HIStalk fan and served as co-host of the final HIStalkapalooza back in the day. I trust her to recognize the real thing.

The company’s CTO, Eric Snyder, can discuss the importance of data without succumbing to industry buzzwords or getting bogged down in jargon. He recently delivered a guest lecture for a data and visualizations class at the University of Rochester. He followed it up with a social media post on data literacy and the problems that happen when different parts of the healthcare system describe parts of the care continuum in different terms.

My favorite quote: “I struggle with the answer to the data literacy in healthcare problem because it’s like creating a second floor of a house when the first floor is propped up on sticks. We never solidified the foundation as an industry, instead we moved on to AI.”

I wish more people in the industry understood this way of thinking. I would even go a step further to say that we’ve built a house of cards and now we’re putting AI on top of it, but I’m trying to be less cynical. Those of us on the patient care front lines have spent the last quarter century creating a tremendous volume of patient-related data that is just floating around and isn’t helping organizations reach their potential. I think of all the wasted hours of clinicians clicking and the back-end systems being unable to do anything useful with the data because of a lack of standardization or inconsistent standards.

Snyder has spent the better part of the last decade leading technology innovation work at the Wilmot Cancer Institute and understands the importance of data to solve complex problems. The platform can aggregate hundreds of data sources and transform them in an automated fashion, which sounds awfully attractive to those of us who have had to engage in weeks or even months of cleanup prior to embarking on reporting or research efforts.

I also have to give a shout out to the company’s CEO, Ted Lindsley, whose LinkedIn profile boasts, “Healthcare Data that doesn’t suck.” Honestly, seeing that made my little informatics heart go pitter-patter, because it’s incredibly refreshing to see someone who is excited about what they do and is ready to express it in no uncertain terms.

I reached out to Ted to learn more. He was willing to entertain my anonymous inquiries. Recent highlights include the company coming out of stealth mode, showcasing its work at the recent Cancer Center Informatics Society Summit, and announcing its seed round. He had some great analogies about technology leaping forward and had me laughing about moving from MS-DOS and Windows 3.1 to Windows 95, even though my ability to talk about that transformation likely betrayed my age. He’s certainly no stranger to the work that is needed to give the industry a kick in the pants and get it moving ahead. I’m looking forward to seeing where CognomIQ goes this year and beyond.

The last couple of weeks have been pretty exhausting and free time has been scarce, so I had to rely on an AI-generated cake in celebration of this being my 1500th post. I was hoping to whisk myself to a beach to celebrate, but instead I have to make it through another major upgrade first. When I was a young medical student sitting down at a green-screen terminal to access lab results, I never imagined writing about my experiences with healthcare IT, let alone there being people who would read it on a regular basis. Thanks for supporting my work, and a special thank you to those readers who share their comments and ideas so I can keep the words flowing.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 7/7/25

July 7, 2025 Dr. Jayne 1 Comment

I’ve spent a lot of my career working on the “softer” side of clinical informatics, such as change management, governance, adoption, and optimization. Although I’ve implemented a couple of technologies in my career that have been dramatic, most of the time I’m working on projects that are a little more subtle.

I’m appreciative of projects like that when I have to gain buy-in from difficult stakeholders. When they don’t feel like you’re yanking the carpet out from under them, they are more likely to align with the goals and objectives. On the other hand, sometimes when projects are too low-key they’re not perceived as valuable. It’s a fine line that has to be walked.

I can’t even count the number of practices where I’ve helped implement EHRs over the years. I’ve worked with people ranging from those who have never used computers prior to the EHR to those who have been using them since birth.

In the early days of EHR, people used to talk about the “older” physicians being resistant. Fortunately, I had a good story to counter that after meeting a curmudgeonly colleague who informed me that he had been “advocating for electronic charting since long before you were born, young lady.” He and I actually competed for the first EHR-related role in our health system. I think he was a little grumpy that he didn’t get the position. I grew to appreciate his point of view as he pushed back on some of the things we were trying to do, because he always wanted to make things just a little bit better.

I’ve also worked with younger physicians who were incredibly resistant to adopting technology, particularly anything other than the one that they personally felt was the best. There’s nothing quite as entertaining as watching an Apple devotee argue with the IT team about how he absolutely, positively cannot use the PCs that are present in every shared workspace in the hospital. Folks like that were especially fun during the early days of “bring your own device” programs. They demanded to be able to use hardware that didn’t comply with the published standards.

I’ve worked with ER physicians who complained about how long it took them to do their charts, yet were found to be spending a good chunk of their day on the Zappos website. 

These examples show how differing perspectives and experiences can have a tremendous impact on the success of a project and, in turn, how those outcomes can ultimately influence the patient experience. When you have one physician in a practice who refuses to do the recommended workflow, it can cause extra work for the staff. It can also result in confusion and delays for patients who are waiting for their results or for a response from the physician.

I’ve long wondered what makes one person think a new solution is awesome and another one thinks it’s awful when they are doing the same work and caring for the same patients. An informatics colleague and I were talking about this over a recent round of cocktails. She brought up a recent study from the Proceedings of the National Academy of Sciences that looked at how different people perceive works of art.

Although I lived with an art history major for a number of years, I hadn’t heard of the concept of the “Beholder’s Share,” where a portion of a work of art is created by the memories and associations of the person viewing or experiencing it. I suppose it’s a more academic rendering of the idea that beauty is in the eye of the beholder.

The researchers behind the article employed high-tech means to look at it, however, using functional MRI (fMRI) imaging to identify how people used their brains differently when viewing different types of art. Apparently, abstract art results in more person-specific activity patterns, while realistic art delivers less variable patterns. They also noted activity in different parts of the brain when looking at abstract art.

I’d love to see how different end user brains would react to differences in EHR screens and workflows. Maybe we could use that information to better predict how users will perform with different tools. Instead of looking at a subject’s brain activity while looking at a Mondrian painting, as the study did, we could see how their brains perform when confronted with different user interface paradigms.

I’ve seen EHR and clinical solution designs over the years that were jarring in color or layout. I’ve seen those that were so vanilla that nothing seemed to catch the user’s attention.

Another concept in the art world is that of shared taste. It explains why some groups of people prefer the same things, while others might find them objectionable. People typically know whether they prefer art from classical times, the Renaissance, the Impressionists, or abstract or modern artists. I would bet that we can create groupings around different types of clinical data visualization and how they can best be used in patient care.

Similarly, I would be interested to see if users who have certain sentiments about a given piece of technology can be grouped in a particular way, such as by specialty, user demographics, location, or tone of the program where they completed their training. Similar to the concept of precision medicine, I wonder if we could use that information to create precision training or a precision technology adoption curriculum that could help users adapt to new tools that end up in their workflows.

Even without the expense and risk of something like fMRI scans, I would bet that we could do a lot in clinical informatics to better understand our users and the learners with whom we are engaging. I’ve seen quite a few surveys that ask new employees about their experience with electronic documentation or technology in general, but they are fairly superficial. They usually have questions like, “Which of the following systems have you used?” with a list of vendor names. They don’t recognize if the user was on a heavily customized version or an out-of-the-box configuration. Most users wouldn’t know anyway unless they have experience behind the informatics curtain. 

Institutions have come a long way in recognizing different learning styles and whether people prefer classroom, asynchronous, or hybrid learning methods. I don’t doubt that the training and adoption efforts that we see today might be supplanted by other paradigms in the future.

Is the beauty of the EHR in the mind of the beholder, or is it something with which users simply have to cope? Is one platform more abstract than the other? Will we ever see an EHR with a classical sense of style? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 7/3/25

July 3, 2025 Dr. Jayne 3 Comments

I’m not a superfan of The Joint Commission, but I was interested to see their press release about partnering with the Coalition for Health AI (CHAI) to create AI best practices for the US healthcare system. The partnership plans to develop AI tools and playbooks, and it wouldn’t be The Joint Commission without a certification program as one of the offerings.

If anyone wants to lay odds on the cost of such a program, I’m happy to run the betting pool. Initial guidance will be issued in the fall, with AI certification to follow. I’ve done consulting work around patient-centered medical home recognition, EHR certification, and other compliance-type efforts, so I’ll be looking for the devil in the details as they are released.

As a primary care physician at heart, I’m sensitive to the multitude of recommendations that we give to our patients, often all at one time. For example, a patient who is newly diagnosed with diabetes may need to have labs drawn, see a diabetic educator, visit an ophthalmologist, consult with a podiatrist, and manage prescriptions from a retail pharmacy and a mail order pharmacy. Health systems are investing in solutions to reach patients via patient portal, text, interactive voice calls, paper mail, and email, which has resulted in patients being overwhelmed. I’m intrigued by Lirio’s concept of “Precision Nudging” (they have trademarked the term) to help manage this problem.

AI is involved via their large behavior model that aims to use elements of behavioral science along the way. It pulls together engagement and outcomes data with consumer understanding to identify the most appropriate channel to reach a given patient. Interventions are modified based on patient response and are tweaked along the way.

I have followed other companies like this over time, but Lirio seems to get it better than others, going beyond vague concepts like “wellness” and “engagement” to actually talk about specific screening programs and revenue-generating interventions that can boost patient quality and deliver a solid return on investment. They do have a bit of a revenue cycle background, so I’m sure that helps.

I was also geeked to learn that the company’s name actually has meaning rather than being something that either just sounded good or hadn’t been registered yet, as one commonly sees in younger companies. It’s actually named after Liriodendron tulipifera (the tulip tree), which apparently is the state tree of Tennessee. Props to the marketing team for its use of the phrase “lustrous branchlets” to describe the company’s strengths. This wordsmith salutes you.

Mr. H already mentioned this, but I wasn’t surprised to see that Best Buy has sold Current Health, returning the company to its former CEO and co-founder. A Best Buy executive said that growing its home care business has “been harder and taken longer to develop than we initially thought.”

I can understand that given the performance of their booth team at HIMSS25. On one of my booth crawls, my companions and I stood in their large booth for probably 5-7 minutes chatting before anyone approached us, despite there being multiple employees in the booth staring at their phones. I didn’t mind it too much because we were enjoying their extra-thick carpet, but if they were looking to capture leads, they were falling down on the job. Once a rep finally approached, the conversation was passable, but negative first impressions are hard to undo.

As much as I think I’m with it as far as keeping up with healthcare IT news and trends, I still rely on HIStalk for information on a regular basis. There’s always some tidbit that I haven’t gotten to yet, which is not surprising given the calamitous state of my inbox these days. HIStalk was the first place I learned about the new CMS prior authorization program for traditional Medicare. I’m all for catching bad actors, such as the durable medical equipment companies that cold-call patients offering knee braces and other questionable interventions, then rely on relatively clueless physicians who have rented out their medical licenses to enable a high-volume prescription mill situation.

However, I feel like the majority of physicians caring for our nation’s seniors aren’t committing fraud. They are negotiating the complex interplay between evidence-based medicine, the costs of various treatments, and patient beliefs and preferences. Sometimes the “best” treatment is unaffordable for a given patient, or you’re working with patients who can barely afford food, let alone their medications.

The program targets specific procedures, including knee arthroscopy for arthritis, along with skin and tissue substitutes and nerve stimulator implants. You know what else would help reduce these unneeded procedures? Greater health literacy and patient education campaigns, which are parts of public health that we continue to neglect in this country. Hopefully the program will remain limited to these high-dollar, low-benefit procedures and won’t creep into primary care as a whole.

Given the amount of data that CMS has on every prescriber’s habits, they should be able to hire some clinical informatics folks to find those who are practicing inappropriately and go after them rather than putting processes in place that annoy those who are trying to do the right thing.

I recently had a rough travel day with significant delays. As I was waiting for my inbound aircraft to arrive, I noticed two fire trucks pull up on the tarmac. They did a quick test that I recognized as preparing to deliver a water salute. I’ve seen it for Honor Flights that were returning to the airport and for a pilot retirement.

Since the airport was small, I could see my inbound plane taxiing at a slow speed, which was unusual given the airline’s propensity to get planes to the gate quickly, especially after delays. A few minutes later, a Marine Corps Honor Guard arrived and I realized this flight was carrying a deceased service member. The waiting passengers in the terminal gradually fell silent and stood to show their respect, with hardly anyone moving until the transfer was complete. It was a sobering reminder that no matter how bad I felt my day was, steps away from me was a family that was having one of the worst days of their lives.

As we approach the Independence Day holiday, I’m grateful for everyone who has put on a uniform and sworn an oath to protect and defend our country. Freedom comes at a high price. Thank you to all current and former service members and their families for being willing to make that sacrifice.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 6/30/25

June 30, 2025 Dr. Jayne 2 Comments

The Journal of Graduate Medical Education published a thought-provoking article this week titled “A Eulogy for the Primary Care Physician.” It reflects on the original purpose of a primary care physician as the trusted physician “who knows their health inside and out, who guides them through the complexities of the medical system, and who fosters relationships not with charts, but with people.”

This is exactly the kind of old-timey family physician that many of my peers and I thought we would become. That’s what we were trained to be during our residency programs. Little did we know about the forces that would align against our being able to do that.

First, there were the turf wars. I was trained to perform a variety of procedures during residency, including minor office-based surgeries, biopsies, wound management and repair, and sigmoidoscopy. I was also trained to deliver prenatal care and perform non-operative deliveries in partnership with local OB/GYNs who served as backup.

I quickly found that I wasn’t able to do most of those things in my hospital-sponsored practice. Family physicians weren’t allowed obstetric privileges, full stop, even if we had an OB/GYN who agreed to back us up. One of the hospitals where I was forced to be on staff didn’t even have obstetrics, which somewhat limited my ability to recruit newborns to the practice.

After six months of appeals, I was allowed to seek newborn nursery privileges at a competitor hospital in an attempt to maintain that part of my skillset, although caring for infants became increasingly rare.

Second was the pressure for primary care to support the volumes of all of the other specialties. If there was a procedure to be had, I was expected to send those patients to my proceduralist colleagues so that they would have adequate volumes.

Numerous procedures can be done by appropriately trained primary care physicians in a high-quality and cost-effective manner. However, I was told that it was unseemly to hoard those procedures, and I needed to refer them out and show that I was a team player. It didn’t matter that patients would prefer not having to make a second appointment, take off work again, or pay a second co-pay.

The only thing I was able to hang onto was the skin biopsies, because I could do them relatively quickly and they didn’t have a significant supply need or cost. That made them somewhat “invisible” to the medical group administrators who actually ran the show.

There were a hundred other things that steered my work as a family physician in a different direction from what I thought it would be. When I was offered the opportunity to work with the electronic health record project, I jumped at it. Maybe that would be the answer to regaining autonomy since I would be able to run reports and see data on my work without external support. Previously, I had to rely on the business office to do so via our green-screen practice management system.

Because of my protected time to work with the EHR, I was somewhat buffered from the pressure to constantly see more patients, although I was still juggling dozens of patient messages and requests on the days when I wasn’t in the office. In hindsight, I probably worked 1.25 FTEs during that time despite being paid as a 1.0 FTE, but I was the only person in my position and I didn’t know how to push back, given the pressures on the other primary care physicians in my group, which seemed worse at the time.

Although the Eulogy article cites burnout, declining reimbursement, and private equity as significant contributors to the demise of the primary care physician, I would add other elements. The consumerization of healthcare continues to be a major force, as physicians are incentivized around patient satisfaction, sometimes to the detriment of quality of care.

As an example, two areas on which physicians are incentivized are patient satisfaction and avoidance of unnecessary antibiotics. For every patient who calls wanting a Z-Pak for what is undoubtedly a viral illness but who “wants to get ahead of it” or says “I know my body and what I need,” there is only a lose-lose situation. I’ve been roasted via online review sites for refusing to call in antibiotics without seeing a patient. I’ve been threatened with complaints to the state board. I’ve been ripped in Press Ganey surveys.

My quality numbers remained high, but when you get bad reviews (justified or not), your paycheck suffers. Physicians should not be placed in these crosshairs, but we do it every day. I know I’m beating the proverbial dead horse, but educating patients about the risks of unwarranted antibiotic prescriptions is another public health intervention at which we’re not very good.

When I had the opportunity to expand my informatics work and change to a different environment for patient care, it was bittersweet. Although I missed the regular “continuity” patients with whom I had bonded over five years, I was glad to get out from under all the patient portal messages and communications that didn’t stop while I was out implementing the EHR, training peers who refused to work with non-physician trainers, and trying to figure out our group’s strategy for health information exchange.

I thought that would be the death of my career as a primary care physician, but little did I know that once I started working in the emergency department and urgent care settings, more than half of my work would be primary care anyway, since many in our community used those environments for their primary care services.

The Eulogy states, “The PCP is survived by the independent physician assistant, nurse practitioner, and generative artificial intelligence.” As someone who is starting to have more encounters with the patient side of the healthcare system than I would like, I worry quite seriously about how my generation will be cared for in the future.

Every time I see my own primary care physician, who is a few years older than I am, I don’t leave without asking when he sees himself retiring so that I’m not left in the lurch. Fortunately, most of my subspecialist physicians are younger than I am, so I’m less worried in those areas.

With regard to generative AI replacing primary care, I think we have many years of it augmenting rather than replacing. I’ve been unimpressed by many of the solutions that I’ve seen. I hope clinicians remain skeptical as developers work through issues with quality.

What do you think about the death of primary care in the US and how healthcare information technology might be able to resurrect it? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 6/26/25

June 26, 2025 Dr. Jayne 4 Comments

The two hot topics around the virtual physician lounge this week, not surprisingly, involved comments or policies from the new Secretary of Health and Human Services.

Physicians are dreading upcoming changes in vaccine reimbursement, which will likely come if vaccines are no longer recommended by the reconstituted Advisory Committee on Immunization Practices. Commercial payers follow the actions of government payers, which could lead to a fair amount of work for everyone with an EHR and billing system as they reconfigure their systems to follow new rules.

I can also already see the new policies fueling burnout in the organizations that I’m closest to, as physicians again have to defend their practice of evidence-based medicine. It’s not a great time to be a frontline physician, particularly one in primary care.

The other hot topic was around statements that everyone in the US should have a wearable medical device in the next four years. Continuous glucose monitoring devices appear to be the darling of the day, with much skepticism from physicians who have already had to deal with the data provided by current users. There’s not a lot of data that supports the use of the devices unless a person is diabetic, prediabetic, or has one of a handful of other medical conditions.

Just wearing a device doesn’t move the needle, either. Other services are needed to support patients as they make changes in their health, such as dietitians, nutrition counseling, and behavioral health interventions. Those also have a cost.

It’s great to say that people should take charge of their own health, but for those of us who have been in the public health trenches for decades, we know that patients can’t always control the fact that they live in a food desert. They can’t control their genetics. We live in a nation where health literacy is close to rock bottom.

Kennedy stated, “They can see, as you know, what food is doing to their glucose levels, their heart rates and a number of other metrics as they eat it, and they can begin to make good judgments about their diet, about their physical activity, about the way that they live their lives.” Having cared for thousands of average Americans during my medical career, I would hypothesize that less than a quarter of the patients I’ve seen would be able to take a device out of the package and start managing their diet in the way that he describes without some serious intervention.

I don’t disagree that the nation needs a crash course in self-care. I’ve seen it myself as patients come to the emergency department for the common cold without having taken so much as an acetaminophen tablet. We see wounds that haven’t been washed with soap and water and sprains that were never iced, among other things, as the patients come to us and leave with a $1,000 hospital bill.

I would much rather see the country start pouring money into public health interventions that have been proven effective than see us throwing money at technology without all the ancillary services needed to truly move the needle on patient outcomes. The nation’s many Federally Qualified Health Centers know a thing or two about this, as do the many county, city, and state health departments.


Speaking of high-tech public health, I continue to be fascinated by the data and analytics around using wastewater for disease monitoring. I first heard of it during the height of the COVID pandemic, when it was being used to model the level of disease in our community, especially when we were having shortages of testing supplies for use in the office.

WastewaterSCAN is a nationwide system that is based at Stanford University in partnership with Emory University. It monitors 11 infectious disease indicators via bottles of wastewater that are shipped from around the country. Viral RNA material is sturdy stuff, and the diseases tracked include COVID, influenza A and B, respiratory syncytial virus (RSV), enterovirus, norovirus, hepatitis A, and many more. CDC also conducts testing, as do many municipalities, and most of the data is publicly available.

The higher-level information that is gleaned from wastewater can help public health agencies and care delivery organizations understand what viruses are surging in their communities and might be useful for creating recommendations on when to start testing for various diseases. It can also be used to inform staffing levels within facilities or to pinpoint increases that might result in an outbreak, such as high levels of a virus at an airport or tourist spot.

The next frontier in wastewater research involves identifying bacteria, although the process poses challenges. You can have two bacterial species that are similar, but only one causes a serious disease, and it can be difficult to differentiate between the two. Here’s to seeing what the next decade of innovation brings in this field, and to knowing that many of us are contributing to scientific advancement with every flush.


For some time now, I’ve been trying to clean up my provider data, including my NPI registration that still lists an address where I haven’t worked in a very long time. There are also problems with my CMS data, and the process to try to correct that is even worse.

In a typical employed provider situation, an office manager or administrator usually manages that for the employed physicians. But when you’re a 1099 contractor working for multiple organizations, it’s up to the physician to make it work. The governmental data is particularly pesky because it feels like you have to get a login for site A that then gives you access to site B, but the passwords time out and are a pain to reset. If anyone has a cheat sheet for cutting through all of this, feel free to send it my way. My current clinical situation tells me I’m on my own. Maybe someone could create an AI-powered bot that could take care of it all.


I was back on the patient side of the equation this week and was surprised by the speed at which I received a message that I had “new test results” in the chart. Once I made it through some pesky password issues that involved using my laptop instead of my phone, I was disappointed to find that the “result” was simply a message from the lab that said my specimen had been received. It’s been a long time since I’ve done lab interface work, but I would hope that such messages might be filtered to avoid causing extra anxiety for patients.
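That kind of filtering is straightforward in principle. As a minimal sketch (assuming a pipe-delimited HL7 v2 feed; the function name and notification logic are illustrative, not any vendor’s actual implementation), a portal could gate patient notifications on the result status in OBR-25, where “I” means the specimen has been received but no results exist yet:

```python
# Illustrative sketch: suppress patient-portal "new result" notifications
# for lab messages that do not yet carry a reportable result.
# In HL7 v2, OBR-25 (Result Status) includes 'F' = final, 'C' = corrected,
# 'P' = preliminary, and 'I' = specimen received, no results available yet.

NOTIFY_STATUSES = {"F", "C"}  # only final and corrected results reach the patient

def should_notify_patient(hl7_message: str) -> bool:
    """Return True only when the OBR segment carries a reportable result."""
    for segment in hl7_message.strip().split("\r"):
        fields = segment.split("|")
        if fields[0] == "OBR":
            # fields[0] is the segment ID, so OBR-25 sits at index 25
            status = fields[25] if len(fields) > 25 else ""
            return status in NOTIFY_STATUSES
    return False

# Two toy OBR segments, padded so that field 25 holds the status code
pending_msg = "OBR|1|" + "|" * 23 + "I"   # specimen received, nothing to report
final_msg = "OBR|1|" + "|" * 23 + "F"     # final result, worth a notification
```

A portal built this way would only ping patients when a final or corrected result actually arrives, rather than for every interface handshake.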

I was also disappointed by the quality of the visit notes that I could see. I was weighed during my visit, but my height was not measured. However, the note had a documented height but no weight, although a BMI was there. The combination of vitals showing up in the note seemed odd. The note from the first physician who saw me had no fewer than six exam findings that were most certainly not examined. Although one could blame templated documentation, there really is no excuse. If you’re not doing an ear, nose, and throat exam on every patient, it takes about 12 seconds to remove that from your template forever.

When you’re a patient, what’s your biggest frustration with healthcare information technology? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 6/23/25

June 23, 2025 Dr. Jayne Comments Off on Curbside Consult with Dr. Jayne 6/23/25

You really don’t know how much you rely on certain technologies until they’re not available.

At one of our local hospitals, a PACS upgrade during daytime hours threw quite a few clinicians for a loop. I don’t think the IT teams really understood how important muscle memory is for clinicians who are trying to work efficiently in the EHR while seeing patients. Although a workaround was provided, it required physicians to go to a different part of the EHR to view images.

It sounds like some users had security issues and weren’t able to do their work from the new location, which caused frustration that was made worse by long wait times when they called the help desk. Even for those who were able to use the new link to access images, there were complaints that it took half the shift to get used to the new workflow. Later in the evening, the system reverted, which required yet another adjustment.

I’ve done plenty of upgrades in my career and I’m not sure what would be happening behind the scenes that would justify doing an upgrade during daytime hours. Most of the upgrades I’ve been involved in were conducted overnight so that they caused minimal impact to clinical workflows.

Based on the fact that nearly all of the IT decisions I’m seeing lately are made with significant attention to cost, I can hypothesize that cost likely played a role here. Still, I wonder whether the people looking at that cost-benefit equation looked beyond the IT resources to include the cost of clinician inefficiency and the risk of clinical quality issues.

A colleague shared the downtime notification with me because they knew I wouldn’t believe it otherwise. I was surprised to see that it included mention of another clinical system that was being taken down from midnight to 2 a.m. the following weekend, so I’m sure there was some reason that this one was being done during peak hours.

If I had been on the leadership team that approved the communication, I would have recommended a mention of why we were doing the upgrade during the day. Users would at least understand that we had thought about them and were forced by extreme circumstances to do it that way.

I also was a fan of running our communications past people in different settings before finalizing them — including academic physicians, hospitalists, and community physicians — to make sure that we were covering all perspectives.

Just out of curiosity, I looked back through some communications from one of my hospitals to see if I could identify patterns in the biweekly newsletters. I was surprised to see that the newsletter had the same top blurb over a six-week period without any changes, which creates a risk of people ignoring the newsletter because they feel like they have already seen the materials.

I also noticed that over the last six months, the newsletter had become a compilation of unrelated blurbs rather than a cohesive document. In the current version, each entry had different fonts and color schemes, including color choices that don’t meet accessibility guidelines for colorblindness. It also looks like it’s in a different order every time, with no standard formatting.
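The color problem, at least, has an objective test: the WCAG 2.x contrast ratio, which should be at least 4.5:1 for normal body text. A quick sketch of that calculation (the function names are mine; the formula comes from the WCAG definition of relative luminance):

```python
# Sketch of a WCAG 2.x contrast check for a text/background color pair.
# WCAG AA requires a contrast ratio of at least 4.5:1 for normal body text.

def relative_luminance(hex_color: str) -> float:
    """Relative luminance per WCAG 2.x, from an 'RRGGBB' hex string."""
    def channel(c: int) -> float:
        s = c / 255.0
        # Linearize the sRGB channel value
        return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """Contrast ratio (lighter + 0.05) / (darker + 0.05), from 1:1 to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

Black on white scores the maximum 21:1, while a medium gray like #999999 on white falls well short of the 4.5:1 floor, which is the kind of choice that makes a newsletter blurb unreadable for some of its audience.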

I would think that adding a framework to it might be useful so that people can quickly identify the items that are important to their work. Maybe start with a section for global updates that impact everyone, then move to updates by specialty, care setting, or a host of other categories that would keep people from having to wade through tons of irrelevant information.

I thought about offering some feedback (after all, I’m still a dues-paying member of the medical staff) but there wasn’t any information in the newsletter about who to contact if you have questions. I’ll just stay in the back row with my “Courtesy/Non-Admitting” privileges and hope I don’t have to look at any patient charts any time soon.


I have several major presentations coming up. For once, my week wasn’t completely full of back-to-back meetings. I decided to do some personal development while I was creating the slide decks and see what AI has to offer.

I try to make my slides as non-wordy as possible, often choosing images that tell a story, or images that prompt me to talk about certain content rather than having too many formal text elements on the slide. I always create an outline-style summary first, so it seemed ideal to be able to take that outline and hit it with some AI and maybe save a little time. I tend to be a little stuck in my ways about backgrounds and formatting, so I was looking forward to spicing things up a little bit.

Unfortunately, what my AI friend came up with was entirely unusable. Not only did it just drop the outline into slides in a somewhat disjointed fashion, but the backgrounds it selected bloated a 25-slide deck up to over 80 MB in size. I could see that being possible if I were incorporating high-resolution radiology images or something like that, but this was just from backgrounds and non-critical design elements.

I guess I’m back to creating my presentations in the old-school way, at least until I have time to research whether there is some other way to use the tools differently, or until one of the savvy college interns agrees to give me a quick tutorial on how to not wind up in that place again. When I finished that slide deck in my usual way, it ended up well below 2 MB, so I’m still not sure what happened the first time around.

One of the presentations I was creating was for first-year medical students, introducing them to clinical informatics and explaining the kind of work done by physicians in this space. The incoming students are entering an educational environment that’s so different from where I trained, and I have to say that I envy them a little bit. Here’s to hoping that I don’t wind up being talked about as someone who was out of touch or uninteresting. Fortunately, my session is a lunchtime one with free food, so I don’t think attendance will be a problem.

If you could go back in time to when you were first learning in your field, what do you wish you had done differently? Leave a comment or email me.

Email Dr. Jayne.
