
EPtalk by Dr. Jayne 7/31/25

July 31, 2025 Dr. Jayne


There was some good discussion around the virtual physician lounge this week as one of my colleagues shared a recent article in Nature Scientific Reports about using AI to diagnose autism spectrum disorder and attention-deficit / hyperactivity disorder in children and adolescents.

Diagnosing these conditions can be challenging for primary care physicians who have limited time with patients and for parents who might wait months for their child to receive an appropriate assessment. In my city, the wait for a non-urgent assessment by a child and adolescent psychiatrist can be up to a year. Delayed diagnosis leads to delays in care.

The study still needs refinement, but preliminary results show that a sensor-based tool can suggest a diagnosis in under 15 minutes with up to 70% accuracy. The researchers began with a hypothesis that diagnostic clues can be identified in patients’ movements that are not perceptible to human observers, but can be detected by high-definition sensors. The authors catalogued movement among neurotypical subjects and those with neurodevelopmental disorders to inform a deep learning model. The movements were tracked by having the subjects wear a sensor-embedded glove while interacting with a target on a touch screen. The sensors collected movement variables such as pitch, yaw, and roll as well as linear acceleration and angular velocity.
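
For the informatics-minded, the general shape of the pipeline is easy to picture, even though the paper’s actual methods are far more involved: each trial’s raw sensor stream gets reduced to numbers a model can learn from. Here is a minimal sketch in Python, assuming a simple array of glove readings rather than the authors’ real data format; the channel names and summary statistics are mine, chosen only for illustration.

```python
import numpy as np

# Hypothetical illustration only -- not the authors' actual pipeline.
# Assume `samples` is an (N, 5) array of glove readings for one trial:
# columns = pitch, yaw, roll, linear acceleration, angular velocity.
CHANNELS = ["pitch", "yaw", "roll", "lin_accel", "ang_vel"]

def extract_features(samples: np.ndarray) -> dict:
    """Collapse a raw sensor stream into summary statistics per channel."""
    feats = {}
    for i, name in enumerate(CHANNELS):
        channel = samples[:, i]
        feats[f"{name}_mean"] = float(channel.mean())
        feats[f"{name}_std"] = float(channel.std())
        # "Jerkiness": average change between consecutive readings.
        feats[f"{name}_mean_abs_diff"] = float(np.abs(np.diff(channel)).mean())
    return feats

# Each trial becomes one labeled feature vector; a classifier (or the deep
# learning model the authors describe) is then trained on many such vectors.
rng = np.random.default_rng(0)
fake_trial = rng.normal(size=(1500, 5))  # stand-in for a recorded session
print(extract_features(fake_trial))
```

Whether the authors feed raw streams or engineered features to their deep learning model, the underlying idea is the same: turn kinematic differences that humans can’t perceive into quantities a classifier can use.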

I admit I was having flashbacks to some of my physics coursework as I read the paper, but it still kept my attention. The authors plan to continue validating the model in other settings, such as schools and clinics, and to track its performance over time. The study has some limitations, chiefly its size: only 109 participants were enrolled, and some were excluded from the final analysis because they couldn’t complete the exercise, had motor disabilities, or experienced sensor problems.

The participants were also a bit older than the typical age at which diagnosis occurs, which could limit the tool’s broad applicability. Still, the ability to detect condition-related markers objectively, rather than relying on behavioral observations, would be a big step forward, especially if follow-up studies are powered to significantly improve the model’s sensitivity and specificity.


Quite a bit of conversation occurred around a recent meta-analysis that looked at the number of steps adults should take in a day. Most of the patient-facing clinicians I know don’t have trouble getting their steps in on regular workdays, although some specialties, such as anesthesiology and pathology, involve a fair amount of seated time. A couple of folks I know are obsessed with getting a minimum of 10,000 steps each day, however, a threshold that the recent article suggests is less important than commonly believed.

The authors looked at studies published since 2014 and concluded that individuals who got between 5,000 and 7,000 steps per day had a significant risk reduction for cardiovascular disease, dementia, and falls as well as all-cause mortality.

That’s not to say there’s a downside to getting 10,000 steps a day, but no clear evidence supports that specific number across the board. That’s good news for those of us on the IT side of the house who might spend less time ambulating than we’d like.


While we’re at it with our virtual Journal Club, another study that caught my eye this week looked at the benefits of the four-day work week. The authors looked at 141 companies that allowed employees to reduce workdays without a corresponding change in pay and found that the practice decreased employee fatigue, reduced burnout, increased job satisfaction, and improved efficiency compared to 12 control companies.

The process wasn’t as simple as just trimming days, however. Companies had to commit to some level of reorganization beforehand, focusing on efforts to build efficiency and collaboration prior to embarking on the six-month trial. There were 2,896 employees involved across companies in the US, UK, Australia, Canada, Ireland, and New Zealand.

I’ve worked with a couple of vendors who have instituted this practice, and their employees seem to be satisfied with it. I enjoyed living vicariously through the account reps who used their long weekends for camping and backpacking.

One of the companies sold a patient-facing technology with 24×7 support, so extra coordination was involved to ensure that those workers had adequate days off even though the rest of the company was closed on Fridays. I’ve also seen some healthcare organizations do this with their management teams, although it doesn’t seem that big of a stretch when the organizations already had hundreds of workers whose routine schedules involved three 12-hour shifts and leaders were already used to providing management coverage 24×7.

From Yes, Chef: “Re: this week’s Morbidity and Mortality Weekly Report. I would have loved to have been part of the public health informatics team crunching that data.” The report details an incident that involved a pizza restaurant not far from Madison, WI last October. Apparently 85 people experienced THC intoxication after eating from the restaurant, which shared kitchen space with a state-licensed vendor that produces THC edibles. When the pizza makers ran out of oil, they used some from the shared kitchen, unknowingly putting some “special sauce” into their dough. Public health informatics is one of my favorite subdisciplines of clinical informatics, so here’s a shout-out to all the disease detectives out there who solve mysteries like this one every day.


I’m trying to slow the volume of emails hitting my inbox, and HLTH seems to be one of the biggest offenders. The organization has been averaging three emails a day over the last month, and attempting to manage my preferences hasn’t seemed to make a difference. Before clicking delete, I looked at the registration options for this year’s conference. It looks like registration is currently $2,995 and goes up to $4,100 next week.

I get that it’s an all-inclusive registration that includes two meals on most days, but it’s still a large amount to ask companies to spend on top of travel and lodging. For a consulting CMIO like me, unless I can get some good meetings scheduled, the price isn’t worth it. Of course, media and influencers can apply to attend for free, but that’s hard to do when one is an anonymous blogger.

If you’re experiencing an overloaded inbox, who is the biggest offender? Have you found unsubscribing helpful or do you have other strategies to share? Leave a comment or email me.


Curbside Consult with Dr. Jayne 7/28/25

July 28, 2025 Dr. Jayne


Several people have asked for my opinion about Bee, which Amazon is acquiring. The company makes the Pioneer, a wearable that records and transcribes your day. It captures not only what you say, but also the conversations of those around you. It tries to entice users by providing summaries of the day, reminders, and other suggestions from within its companion app.

Unsurprisingly, the solution also requests permission to access all of the user’s info, including email, contacts, location services, reminders, photos, and calendars, in an attempt to create “insights” as well as a history of the user’s activities.

The device costs $50, a charge that can be avoided by using the Apple Watch app, plus a $19 per month subscription on top of that. The solution uses a mix of large language models to operate, including ChatGPT and Gemini.

A quick visit to my favorite search engine pulled up a number of pages that mention the device. Some reports say that it isn’t able to differentiate between the wearer’s conversations and what they were watching on TV or listening to on the radio.

I wasn’t surprised at all to hear that significant privacy concerns have been expressed. The company keeps transcripts of user data, although it doesn’t store the audio. I laughed out loud when I read a quote from an Amazon spokesperson who said that Amazon “cares deeply” about user privacy and plans to give users more control over how their data is used after acquiring the startup.

Along with anyone who has had to go through multiple levels of annoying menus (which seem to change regularly) while trying to rein in their Alexa device, I’m not buying it. Although Amazon claims not to sell customer data to third parties, it has plenty of uses for that data in-house. Anyone who visits Amazon can see how their targeted marketing winds up in different places.

Putting on my end user hat, I have to say this is one of the more ridiculous tools, offerings, or solutions that I’ve seen. However, there must be a huge number of people who disagree with me, because if it weren’t a potential moneymaker, I don’t think Amazon would be acquiring it.

What if the user is located in a two-party consent state and is now recording conversations without notifying the other parties? I found a funny video about the device, where Wall Street Journal reporter Joanna Stern said it “turns you into a walking wiretap.” She also asked the device to do an analysis of her use of swear words over the course of the month and shared her statistics in a funny recap.

The company’s website plays a pretty mean game of buzzword bingo. Examples: “turns your moments into meaning” and “learns and grows with you” as it “sits quietly in the background, learning your patterns, preferences and relationships over time, building a deeper understanding of your world without demanding your attention.”

The website shows an example of a user and their team “discussing ideas for the next product release.” That’s right, you can wear it to the workplace and have it collect all the company’s intellectual property over the course of the business day. I’m betting that most companies’ employee handbooks don’t have language that addresses this. If I were in the corporate compliance department of any organization with employees, I’d be sending out a memo ASAP.

The website also gives examples of how the device and its app can dispense parenting advice and manage issues such as “dealing with resistance to potty training and handling emotional outbursts.” I’m sure that pediatricians and family physicians will be thrilled to review the device’s recommendations at well-child visits (sarcasm intended) along with everything else they need to cover.

The website also had the device’s terms and conditions, which were 10 printed pages long. Here are some of my favorite highlights:

  • By accessing the device, you agree that you have read, understood, and agree to be bound by all the terms, which can be unilaterally altered at any time and for any reason. The company will alert users simply by updating the “last updated” date on the terms page, and users “waive any right to receive specific notice of each such change” and accept the “responsibility to periodically review these Legal Terms to stay informed of the updates.”
  • Bee specifically calls out in the second paragraph that it offers no HIPAA protection.
  • The user accepts the responsibility to be compliant with any applicable laws or regulations and agrees to terms regarding the collection of data with respect to minors.
  • Users are prevented from disparaging the company or its services.
  • Users agree not to use information obtained “in order to harass, abuse, or harm another person.”
  • Users agree not to “harass, annoy, intimidate, or threaten any of our employees or agents engaged in providing any portion of the Services to you.” The use of the word “annoy” caught my attention, since I can’t imagine an employee engaged in customer service or support who doesn’t find at least some percentage of the users with whom they interact to be annoying.

I found some user comments on Reddit, and the following were some of my favorites:

  • I made the mistake of using the app to retrain my voice, and since then it doesn’t think I’m EVER talking, everything I say is recorded as “unknown”. So instead of thinking other people were me, now I’m not even me.
  • While the little convo summaries are often amusing, now I am trying to figure out how this thing is supposed to be useful.
  • One user accused the system of “trying to gaslight me.” Some of us get enough of that in our daily lives, so we don’t need an AI tool to contribute as well.

The website says the device is sold out, although the company is taking backorders and plans to ship new units by September. That means either their marketing team is trying to create some FOMO (fear of missing out) or that lots of people are ready to take the plunge, privacy be damned.

What do you think about the Bee Pioneer? Would you consider wearing one? Are you taking steps to specifically ban it and similar devices and applications from your workplace? Leave a comment or email me.


EPtalk by Dr. Jayne 7/24/25

July 24, 2025 Dr. Jayne


JAMA Network Open recently published an Original Investigation titled “Patient Care Technology Disruptions Associated With the CrowdStrike Outage.” The UCSD authors found disruptions at 759 of 2,200 hospitals during the July 19, 2024 outage, 239 of which involved internet-based services that support direct patient care. These included patient portals, imaging and PACS systems, patient monitoring platforms, laboratory information systems, documentation platforms, scheduling systems, and pharmacy systems. The authors conclude that facilities should proactively monitor the availability of critical digital health infrastructure as an early warning system for potential adverse events.
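
The “proactively monitor” advice sounds obvious, but plenty of organizations still learn about platform-level failures from frantic clinician phone calls. As a back-of-the-napkin illustration (not something from the paper), a simple scheduled sweep over critical internet-facing services could serve as that early warning; the service names and endpoints below are hypothetical.

```python
import urllib.request
import urllib.error
from datetime import datetime, timezone

# Hypothetical endpoints -- substitute your own patient portal, PACS,
# lab, and pharmacy health-check URLs.
CRITICAL_SERVICES = {
    "patient_portal": "https://portal.example-hospital.org/health",
    "pacs_viewer": "https://pacs.example-hospital.org/health",
    "lab_results": "https://lis.example-hospital.org/health",
    "pharmacy": "https://rx.example-hospital.org/health",
}

def check_service(url: str, timeout: int = 5) -> bool:
    """Return True if the endpoint answers with a non-error HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

def run_sweep(alert_threshold: int = 2) -> None:
    down = [name for name, url in CRITICAL_SERVICES.items() if not check_service(url)]
    stamp = datetime.now(timezone.utc).isoformat()
    if len(down) >= alert_threshold:
        # Multiple simultaneous failures suggest a platform-level event,
        # not an isolated outage -- page the on-call team.
        print(f"{stamp} ALERT: {len(down)} critical services down: {down}")
    elif down:
        print(f"{stamp} WARN: service down: {down}")
    else:
        print(f"{stamp} OK: all critical services responding")

if __name__ == "__main__":
    run_sweep()
```

Run something like this on a schedule from a machine outside the affected fleet, and a simultaneous multi-service failure becomes a trigger to activate downtime procedures before patient care is affected.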

The journal has had some great informatics articles recently, including this one that looks at the use of AI tools in intensive care units. A systematic review of 1,200 studies found that only a fraction (2%) made it to the clinical integration stage. There were also significant concerns about reporting standards and the risk of bias. The authors conclude that the clinical AI literature needs to shift from retrospective validation toward prospective testing of AI systems and making them operational. The study focused on systems used in adult intensive care units, and I suspect that far fewer studies look at the pediatric population, so that may be an area of opportunity as well.

From Savannah Banana: “Re: stadium naming rights. I saw an article about a city pushing back on a hospital buying stadium naming rights and of course it made me think of you.” Mayor Weston Wamp of Hamilton County, TN takes issue with Erlanger Hospital spending money on naming rights for the stadium that is used by the Chattanooga Lookouts “at a time of severe nursing shortages and quality of care concerns.” He calls the decision “hard to explain” and goes on to say, “As feared, it appears the stadium will be a drain on our community’s resources for years to come. Before I was elected, the Lookouts convinced city leaders to give the team all revenue from naming rights on this publicly owned facility. Now, in a sad twist, our local safety net hospital will be footing the bill for the Lookouts $1 million annual lease payment.”

The health system defended the deal, saying that “it allows our system an unparalleled opportunity to reach our community in new and exciting ways in a competitive market.” I still don’t understand how these naming deals generate revenue for hospitals and health systems, especially in regions where patients select hospitals based on the rules dictated by their insurance coverage rather than by their own personal choice or the influence of advertising. If some of our readers have insight, feel free to educate me.

Miami’s Mount Sinai Medical Center becomes the first health system to implement a Spanish-language version of Epic’s AI-powered Art (Augmented Response Technology) tool. Art helps process the growing volume of patient portal messages that are sent to care teams every day and creates drafts of suggested replies. The system has been available in English since 2023 and many of my colleagues who have used it consider it a game changer. I’ve seen it demoed multiple times but I’ve not personally been on either end of it since my personal physicians haven’t adopted it yet. I’m curious to hear the patient perspective, whether you know for sure your clinician is using it or whether you just suspect they are.


People are talking about Doximity’s free GPT. I tried it once a while back, but I can’t remember if I was impressed by it. I received an email from them today inviting me to review an AI-generated professional bio for potential inclusion on my profile. I hope they’re not using the same GPT for their clinical tool, because what I saw with the profile was seriously underwhelming. It pulled the wrong name for the hospital where I completed residency, which it said was “preceding” my graduation from medical school. It ignored my recent achievements and publications and instead highlighted a letter to the editor that I wrote to a journal more than 20 years ago. I clicked the “don’t add” button on the entire thing. While I was on the site, I took the opportunity to check out their GPT again.

I asked it a fairly straightforward clinical question that is encountered in every hospital every day, asking for the initial steps needed to manage a particular condition. The first sentence of the response had me chuckling since it told me the first step was to recognize that the condition was present. Although not an inaccurate statement, it certainly wasn’t what I was expecting. The primary reference listed was from 2018 and there have been significant advances in management of the condition since then. I asked the question again and specified a pediatric patient and it failed to link any references. Based on those factors, I can say that I’m officially underwhelmed.


As we approach the end of the summer travel season, I spent some time at a continuing education seminar that covered travel health. As one would expect, a lot of the content covered vaccinations and other forms of prevention, as well as a review of the most common diseases. As someone who focuses primarily on clinical informatics these days, I admit I wasn’t current on the status of some of the longer-known diseases, but I held my own in the discussions of those that have appeared more recently. Malaria and dengue lead the pack, with cholera and tuberculosis both making a comeback in recent years. Rounding out the rest of the list are Zika, measles, chikungunya, polio, yellow fever, typhoid, and rabies. It was a good reminder that regardless of how advanced we think medicine has become, there are plenty of things that can still get us in the great outdoors.

Have you ever had a travel medicine consultation prior to a trip? Did you find it valuable? Leave a comment or email me.


Curbside Consult with Dr. Jayne 7/21/25

July 21, 2025 Dr. Jayne

Mr. H is running a poll that asks, “Is it ethical for doctors to prescribe the drugs of their pharma sponsors to people who seek specific treatments?” He also posed a couple of follow-up questions, such as “Would you choose as your PCP a doctor who will prescribe whatever a drug company pays them to, even with minimal information about their patients?” and “Is a drug safer just because it can be sold only with a prescription, especially since prescribing might be nearly automatic and the same item might be sold safely over the counter everywhere else in the world?”

I like the response choices that Mr. H included in the poll. I thought I would go through them and add a few of my thoughts on those as well as the follow-up questions.

No. The patient should see their regular doctor. As a primary care physician, I agree with this one in my heart. Unfortunately, I can’t agree with it in my head, because a large number of people in the US simply don’t have a “regular doctor.”

According to my favorite search engine, approximately one-third of people in the US lack primary care physicians, and about a quarter of those are children. Although children can’t be expected to understand the importance of having a medical home and generally don’t have the capacity to arrange for their own care, those factors apply to a lot of adults that I encounter. Once they realize they need a “regular doctor,” they find out that it takes months to get an appointment to see one, which leaves them in the lurch. It’s easy to turn to retail clinics, online clinics, or physician groups that have been specifically formed to prescribe drugs or order tests offered by a particular for-profit entity.

No, unless they review the patient’s medical records. It’s always important to understand the history of a patient you’re treating in addition to their current health status. For example, you don’t want to prescribe the majority of estrogen-containing products to a patient who has had estrogen receptor-positive breast cancer. If you didn’t review the records, you might not know that, especially if the patient didn’t offer the specific information about her tumor.

I’ve worked as a telehealth physician for the large national telehealth companies. Most of the time in those situations, you don’t have the patient’s records. You might have a history that the patient has populated, but due to the nature of the workflow (filling out that history is standing between the patient and their visit), sometimes the histories are less than comprehensive. Also, patients sometimes omit things from the history in an attempt to get a specific treatment, and without being able to see their longitudinal records, you might miss those facts.

No. It drives costs up for everyone. This response is currently scoring rather low, but it’s an important one. Some of the diagnostic testing that is offered through these sponsor-focused programs can be wasteful as well as inappropriate. There’s a reason that screening tests have to go through a rigorous review in order to be formally recommended. Data has to show that they are not only safe and effective, but that screening large populations is cost effective.

In looking at some of these drug-related telehealth programs, available generic drugs are often just as effective as those manufactured by the program sponsor. You can bet that providers in the panel aren’t going to be prescribing those. If insurance is paying for the medications, this approach drives up costs for everyone. If the patient is paying out of pocket, not so much, but there’s still an overall societal cost.

No. It’s a prescriber lawsuit waiting to happen. I’m a little on the fence about this one. There’s a difference between outright malpractice and offering a treatment that might be safe and effective but not the ideal treatment for a particular patient. One of the things that physicians are encouraged to do is to take the personal preferences and cultural beliefs of our patients into account before entering into shared decision-making with them.

If that sounds like a mouthful, that’s because it is. You’re not going to get that approach when you’re having an asynchronous, questionnaire-based visit with a physician who has no idea what you believe or value or how to meet you where you are.

Yes. It’s legal and what patients want. I’m going to channel millions of parents of teenagers here. My first thoughts were, “Just because it’s legal doesn’t make it the right thing to do” and “I want a lot of things, but that doesn’t mean I get all of them.”

I’ve treated many patients who think they want something. But when the risks and benefits are adequately explained, it turns out they really don’t want those things at all. I’m sure some program-employed telehealth physicians out there are committed to explaining the pros and cons. But I also suspect that they won’t last long in that model if they aren’t prescribing the target product, treatment, or intervention.

Of course, this happens during in-person visits as well. I once worked for an urgent care with an in-house pharmacy, and we were strongly encouraged to write lots of scripts to treat patient symptoms. Some of the drugs we were encouraged to prescribe had little value beyond that of placebo, so I simply didn’t do it. Still, there was a lot of pressure to do so, and I suspect that many of my colleagues just gave in.

Not sure, but it’s puzzling that doctors do this. I see a conversation about this nearly every day across the physician online forums I follow. A lot of reasons are cited for working in these models. Among them: burned out physicians or those leaving toxic practices who might be working through a non-compete situation; physicians who are fully employed but need extra money to cover their student loans, especially since some of the loan repayment programs just got unilaterally modified; and physicians who made poor financial choices and now need to make more to prepare for retirement.

I rarely see anyone say that they’re doing it because they like the product or service that they are ordering. Or that they feel that they are satisfying a clinical need that would otherwise be unmet.

As for Mr. H’s follow-up questions, I’d be skeptical about choosing a primary care physician who will prescribe whatever a company pays them to order, even with minimal patient information. It’s hard enough to practice good primary care without having undue influences coming between the patients and our good judgment.

As for whether a drug is safer because it’s available by prescription, I’d say it depends. Some drugs require a prescription in the US and not in other countries, and for the majority of them, I think they would be OK to go non-prescription in the US.

However, it’s important to understand the environment in which those drugs are sold without a prescription. Patients in those countries may have higher health literacy and a greater sense of personal responsibility. Also, I’ve encountered pharmacists abroad who are more accessible and better positioned to counsel patients about these selections.

Plenty of substances are regulated differently in other countries than they are in the US (don’t get me started on why the rest of the world has better sunscreen products than we do), and it’s just an overall different environment. Not to mention that universal healthcare in many of those countries provides a safety net for patients who don’t get the desired outcomes from self-treatment.

It will be interesting to see the final poll results when they come in. Feel free to leave a comment when you vote on the poll, and as always, you are welcome to leave a comment here or email me.


EPtalk by Dr. Jayne 7/17/25

July 17, 2025 Dr. Jayne


It’s been one of those weeks where I’m pulled in so many directions I’m not sure which way I’m supposed to be going. Just when I think I’ve finished something, another obstacle turns up in my path and I have to swerve.

I’ve attended enough workplace resilience seminars over the years that pivoting from crisis to crisis seems second nature, even if it’s not fully in my comfort zone. Still, there’s something to be said for the excitement of doing the corporate equivalent of a “cattle guard jump” from time to time, so I’m happy to keep on keeping on.

From Universal Soldier: “Re: LLMs replacing physicians. What’s your take on projects like this?” Headlines abound for this kind of work, especially when the media talk about models achieving “diagnostic accuracy” or outperforming the average generalist physician.

The Microsoft AI Diagnostic Orchestrator (MAI-DxO) is claimed to deliver greater accuracy and lower diagnostic costs than physicians evaluating the same cases. I don’t disagree that we need to figure out how to do workups efficiently and to reduce costs, but I wonder about the ability to translate this work to bedside realities. Let’s inject some of the realities of the current state of medical practice into the model and see if it can come up with solutions.

We can add a medical assistant who is stuck in traffic and doesn’t arrive in time to room the first patient, increasing everyone’s anxiety level as the office tries to kick off a busy clinic session when they’re already behind before they even start. As the model suggests tests to order, let’s throw in some cost pressures when those interventions aren’t covered by insurance or the patient doesn’t have any sick time to cover their absence from work. Add in a narrow network that makes it nearly impossible to refer to a subspecialist even when it’s needed. Let’s add an influencer or two worth of medical misinformation to the mix. Now we’re getting closer to what it’s really like to be in practice.

It’s great to do tabletop exercises to see if we can make clinical reasoning better. But unless we’re also addressing all the other parts and pieces that make healthcare so messy, we’re not going to be able to make a tremendous difference. I would love to see an investigation of whether physicians can improve their clinical reasoning simply by having more time with the patient, or fewer interruptions when delivering care, reviewing test results, and formulating care plans.

I would also like people to start talking more seriously about how care is delivered in other countries, where  better clinical outcomes are achieved while spending less money. Maybe it’s just easier to talk about AI.


An informatics colleague asked me what I thought of the Sonu Band, a therapeutic wearable that promises “clinically proven, drug-free vibrational sound therapy” to improve the symptoms of nasal allergies. The band is used in conjunction with the Sonu app, which uses the wearer’s smartphone to scan their face and combines that scan with voice analysis and a symptom report to personalize the therapy.

The company says that the facial scan produces skeletal data that is used to create a digital map of the sinuses. It then uses proprietary AI to calculate optimal resonant frequencies for treatment.

Having spent most of my life in the Midwest, I can attest that allergy and sinus symptoms seem to be nearly universal. I reached out to my favorite otolaryngologist for an opinion, and although he pronounced it “fascinating,” he hadn’t heard of it. If it works as well as the promotional materials say, I could imagine it flying off virtual shelves. If you’ve given it a whirl or seen it prescribed in your organization, we would love to learn more.

The American Academy of Family Physicians and its Family Practice Management journal recently reviewed some AI-enhanced mobile apps that target primary care physicians. This was the first time I had seen their SPPACES review criteria:

  • S – Source or developer of app.
  • P – Platforms available.
  • P – Pertinence to primary care practice.
  • A – Authoritativeness, accuracy, and currency of information.
  • C – Cost.
  • E – Ease of use.
  • S – Sponsor(s).

Even if you’re not in primary care (in which case you can feel free to omit the second “P”), this is a good way to encourage physicians to think about the sources of information they use in daily practice.

It’s not captured in the SPPACES acronym, but the author also encourages physicians to be aware of whether their tools are HIPAA-compliant and whether they’re entering protected health information into third-party apps. He also mentioned that none of the apps reviewed are a substitute for physician judgment.

I would also consider adding an element to the “cost” criterion that encourages users to think about how the app is making money. People seem quick to overlook third parties that are monetizing user information, if they’re even aware of it happening at all.

I will use this as a teaching tool with students and residents, especially since they’re quick to download new apps without doing a critical review first.

I’m not sure how I missed this one, but OpenEvidence filed a complaint against Doximity last month, alleging that Doximity’s executives impersonated various physicians and used their NPI numbers to gain access beyond what they should have had as laypeople. Such activities are prohibited by the OpenEvidence terms of use, assuming anyone actually reads them (they’re included in the complaint as Exhibit A if you’re interested).

The complaint alleges “brazen corporate espionage” and points out that Doximity “has built its brand on physician trust and privacy protection.” The defendants are alleged to have used prompt injection and prompt stealing to try to get at proprietary OpenEvidence code.

Pages 3 and 4 of the complaint describe a few examples of attacks in detail. The complaint notes that “this case presents the rare situation where defendants’ illicit motives and objectives are captured in their own words.” I always love reading a good court document and this one did not disappoint.

What do you think about corporate espionage? Can companies truly protect their intellectual property anymore? Leave a comment or email me.


Curbside Consult with Dr. Jayne 7/14/25

July 14, 2025 Dr. Jayne

There’s always a lot of buzz around wearables. The majority of US adults have a smartphone in their pocket or purse, so a treasure trove of data can be collected without adding a secondary device.

Most of the people I talk to have no idea how much information is being captured by the apps on their phones, let alone the types of entities to which vendors are selling their personal data. Nearly everyone I know leaves their location services on 24×7. About half the people I interact with, along with their families, use tracking apps to keep up with each other’s location.

An article in JAMA Network Open this week caught my eye with its title, “Passive Smartphone Sensors for Detecting Psychopathology.” The authors analyzed two weeks of smartphone data from 550 adult users to see if “passively-sensed behavior” could identify particular psychopathology domains. They noted that this is important work because smartphones can continuously detect behavioral data in a relatively unobtrusive way.

They had two main objectives. First, to determine which domains of psychopathology can be identified using smartphone sensors. Second,  to look for markers for general impairment and specific transdiagnostic dimensions such as internalizing, detachment, disinhibition, antagonism, and thought disorder.

Data were pulled from global positioning systems, accelerometers, motion detection, battery status, call logs, and whether the screen was on or off.

The authors were able to link nearly all the domains with specific sensor-captured behaviors, creating “behavioral signatures” by measuring things like call volume, mobility, bedtime, and time at home. Specifically, they were able to link disinhibition with battery charge level and antagonism with call volume.
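
The study’s models are more sophisticated than anything that fits here, but the concept of a “behavioral signature” is straightforward to sketch: collapse raw phone logs into a few daily features. The toy example below uses invented data structures and feature definitions that are mine, not the investigators’.

```python
from datetime import datetime

# Toy illustration of "behavioral signature" features.
call_log = [  # (timestamp, direction)
    (datetime(2025, 7, 10, 9, 15), "outgoing"),
    (datetime(2025, 7, 10, 20, 42), "incoming"),
]
screen_events = [  # (timestamp, "on" or "off")
    (datetime(2025, 7, 10, 7, 5), "on"),
    (datetime(2025, 7, 10, 7, 25), "off"),
    (datetime(2025, 7, 11, 0, 40), "on"),
    (datetime(2025, 7, 11, 1, 10), "off"),
]

def daily_call_volume(calls) -> float:
    """Average number of calls per observed day."""
    days = {ts.date() for ts, _ in calls} or {None}
    return len(calls) / len(days)

def screen_on_minutes(events) -> float:
    """Sum minutes between each 'on' event and the following 'off' event."""
    total = 0.0
    last_on = None
    for ts, state in sorted(events):
        if state == "on":
            last_on = ts
        elif state == "off" and last_on is not None:
            total += (ts - last_on).total_seconds() / 60
            last_on = None
    return total

signature = {
    "calls_per_day": daily_call_volume(call_log),
    "screen_minutes": screen_on_minutes(screen_events),
}
print(signature)
```

Features along these lines would then be correlated with the psychopathology dimensions the study measured, which is where the real statistical heavy lifting happens.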

Based on the phone-related behaviors I observe, it would be interesting to see if my gut feeling about a user’s psychopathologic situation is accurate. I would also be curious to know if there is a difference in the data looking at other age groups that weren’t studied, such as teens or the elderly. Although the study was done on adults, the mean age was 38 with a standard deviation of 8.8, so there is certainly some opportunity to look in detail at other groups.

I was recently with a large group of individuals in their 70s. Their visible phone behavior would rank them right up there with the teenagers I know.

Reading about this made me think about all the data that companies are collecting now that they’re focusing on potentially eliminating remote work and ensuring high levels of productivity. There are plenty of stories out there about people using so-called “mouse jigglers” to make it look like they’re working so that their computers don’t go to sleep. Of course, companies that restrict what kinds of USB devices can be plugged in might be attuned to that, and more sophisticated monitoring tools also look at keyboard usage patterns and can detect if something shady is going on.

Remote work isn’t the only place people might be slacking off. I see plenty of people who have in-person jobs who constantly use their phones for potentially non-work activities. Many apps might be adjunctive to job roles and responsibilities, but I see a lot of online shopping and social media use as well.

I’d love to see some robust research that looks at communication and collaboration strategies within an organization to see which workers might thrive with one style more than another. I’ve worked in organizations that have documented communication plans that make it clear what kinds of work should be conducted using meetings, phone calls, email, instant messaging, and texting, but those kinds of policies are few and far between these days.

Even without a written policy, workplace culture defines how things get done, but when you’re a new person, a consultant, or a contractor, it can be difficult to try to figure that out unless someone clearly explains the rules of engagement.

I worked in one organization that basically used Slack as the connective tissue of the organization. I have to admit that I struggled there. Every time I asked where to find a resource, the answer was, “It’s in Slack,” but it didn’t seem like there was any rhyme or reason to how things were organized. More often than not, important documents were accessed through links within a message thread rather than being in a “files” area or in specific channels that made sense to those of us who were new.

A tremendous amount of work seemed to get done via direct messages rather than channels, making it even more difficult to find things. At one point, during a critical issue with a release, I had a separate cheat sheet of which conversations to look through when I needed certain kinds of information, since I had an endless list of direct message conversations with various combinations of the same group of people.

When I asked if there was any team- or company-level documentation on how it was all supposed to work, I felt like I was revealing myself as someone who simply couldn’t keep up. As a consultant, I had multiple conversations with leaders at the company about how this was working and how I had seen it contribute to process defect rates and rework. I also knew of plenty of examples at that company where people downloaded documents to their own hard drives so they could find things later, but who then ended up working off of outdated specifications since they were using local copies rather than shared ones. Not to mention that if people can’t find clear information, they are more likely to improvise or otherwise wing it, which is generally a bad idea when you’re building healthcare software.

If you could use data to find scenarios where someone was working on a deliverable, say a slide deck or a document, and then spent 10 minutes rapidly flicking through various file structures or messaging platforms, opening and closing multiple documents, and doing web searches before finally returning to the document, it could be an indicator of disordered work patterns that might benefit from some kind of intervention.

If you see multiple people on a team with these work habits, that may be indicative of the need for a different kind of organizational structure for work product and other materials. I think those patterns are much more important to explore than knowing whether someone’s mouse is moving.
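
I’m not aware of a product that does exactly this, but the detection logic itself wouldn’t be hard. Here is a rough sketch, with invented event names, that flags stretches where work on a primary document is interrupted by a burst of detours before the person returns to it.

```python
from datetime import datetime, timedelta

# Invented activity stream: (timestamp, event_type, target)
events = [
    (datetime(2025, 7, 14, 10, 0), "edit", "slide_deck.pptx"),
    (datetime(2025, 7, 14, 10, 2), "open", "specs_v3.docx"),
    (datetime(2025, 7, 14, 10, 3), "search", "release checklist"),
    (datetime(2025, 7, 14, 10, 4), "open", "old_notes.txt"),
    (datetime(2025, 7, 14, 10, 6), "open", "slack_dm_thread"),
    (datetime(2025, 7, 14, 10, 9), "edit", "slide_deck.pptx"),
]

def flag_scattered_intervals(events, max_gap=timedelta(minutes=10), min_detours=3):
    """Find stretches where work on a primary artifact is interrupted by
    several detours (searches, other files, chat) before returning to it."""
    flagged = []
    for i, (start_ts, action, target) in enumerate(events):
        if action != "edit":
            continue
        detours = []
        for later_ts, later_action, later_target in events[i + 1:]:
            if later_ts - start_ts > max_gap:
                break
            if later_action == "edit" and later_target == target:
                # Returned to the primary document; count detours in between.
                if len(detours) >= min_detours:
                    flagged.append((start_ts, later_ts, target, detours))
                break
            detours.append(later_target)
    return flagged

for start, end, target, detours in flag_scattered_intervals(events):
    print(f"{target}: {len(detours)} detours between {start:%H:%M} and {end:%H:%M}")
```

The point wouldn’t be surveillance for its own sake, but identifying teams whose information architecture is forcing this kind of thrashing.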

What do you think about looking at smartphone or other device data to learn more about people’s behavior and the potential for psychopathology? Would having more information make things better or potentially make things worse? Leave a comment or email me.


EPtalk by Dr. Jayne 7/10/25

July 10, 2025 Dr. Jayne


I finally had time to dig into the recent paper about the “accumulation of cognitive debt” that happens when using AI assistants.

As a proud member of Generation X, I first experienced those rites of passage called “the five-paragraph essay” and “writing a research paper” in middle school. My English teacher — this was before everyone called it Language Arts or something else more inclusive — made us create a 3×5 index card for every reference. We had to have cards for every quote or idea we planned to use. For those of us whose brains were wired for reading and writing, it was a painful process. We just wanted to jump in and start writing. For others, however, it was an exercise in organizing thoughts and making sure they had enough material to support their conclusions.

Fast forward to my university days, when I was a teaching assistant for an English 101 “Thinking, Writing, and Research” class. Those pesky index cards were still recommended, although not required. Personal computers had just made their way into dorm rooms, but as I graded research essays, I could easily tell who knew how to organize their thoughts and who was simply phoning it in.

The professor I worked with always selected obscure topics for the assignments, so it was nearly impossible to copy the work of others. That made grading all those essays quite an adventure. This was the era when those with computers had to figure out how to best use them on an as-you-go basis, because there certainly weren’t any classes offered that explained the best ways to use various pieces of software. Subsequent generations always had access to computers for schoolwork, so I’m not sure how much of the process aspect of writing is still taught versus enabling people to just sit down at the keyboard and get to it.

Within that context, I started reading the paper. It looked at how three cohorts completed an essay writing task: LLM-only, search engine-only, and brain-only groups each completed three writing tasks using their assigned method. They then had a fourth task in which some participants were crossed over to another group. The participants were monitored with electroencephalography (EEG) to assess cognitive load during the tasks. Additionally, essays were assessed using natural language processing and were scored by both a human teacher and an AI judge.

The authors concluded that the brain-only group had the strongest brain connectivity, followed by the search engine group. The LLM group had the weakest connections. Additionally, participants in the LLM group had lower self-reported ownership of their essays and had difficulty quoting their own work. Ongoing analysis showed that “LLM users consistently underperformed at neural, linguistic, and behavioral levels.”

The authors commented, “These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning.” Given some of the personal statements that I’ve read for medical students over the last two years, there’s so much LLM use that it’s hard to get a feel for who the candidates really are as people. Maybe this research will convince folks to dial it back a bit.


I enjoy learning about new players on the healthcare IT scene. One that I’ve been watching in recent months is CognomIQ. The company’s semantic-based data management solution has been optimized for healthcare, in particular for research institutions.

The company originally caught my eye when I heard that industry veteran Bonny Roberts had joined the team as VP of customer success. She’s a longtime HIStalk fan and served as co-host of the final HIStalkapalooza back in the day. I trust her to recognize the real thing.

The company’s CTO, Eric Snyder, can discuss the importance of data without succumbing to industry buzzwords or getting bogged down in jargon. He recently delivered a guest lecture for a data and visualizations class at the University of Rochester. He followed it up with a social media post on data literacy and the problems that happen when different parts of the healthcare system describe parts of the care continuum in different terms.

My favorite quote: “I struggle with the answer to the data literacy in healthcare problem because it’s like creating a second floor of a house when the first floor is propped up on sticks. We never solidified the foundation as an industry, instead we moved on to AI.”

I wish more people in the industry understood this way of thinking. I would even go a step farther and say that we’ve built a house of cards and now we’re putting AI on top of it, but I’m trying to be less cynical. Those of us on the patient care front lines have spent the last quarter century creating a tremendous volume of patient-related data that is just floating around and isn’t helping organizations reach their potential. I think of all the wasted hours of clinicians clicking and the back-end systems being unable to do anything useful with the data because of a lack of standardization or inconsistent standards.

Snyder has spent the better part of the last decade leading technology innovation work at the Wilmot Cancer Institute and understands the importance of data in solving complex problems. The platform can aggregate hundreds of data sources and transform them in an automated fashion, which sounds awfully attractive to those of us who have had to engage in weeks or even months of cleanup before embarking on reporting or research efforts.

I also have to give a shout out to the company’s CEO, Ted Lindsley, whose LinkedIn profile boasts, “Healthcare Data that doesn’t suck.” Honestly, seeing that made my little informatics heart go pitter-patter, because it’s incredibly refreshing to see someone who is excited about what they do and is ready to express it in no uncertain terms.

I reached out to Ted to learn more, and he was willing to entertain my anonymous inquiries. Recent highlights include the company coming out of stealth mode, showcasing its work at the recent Cancer Center Informatics Society Summit, and announcing its seed round. He had some great analogies about technology leaping forward and had me laughing about moving from MS-DOS and Windows 3.1 to Windows 95, even though my ability to talk about that transformation likely betrayed my age. He’s certainly no stranger to the work that’s needed to give the industry a kick in the pants and get it moving ahead. I’m looking forward to seeing where CognomIQ goes this year and beyond.


The last couple of weeks have been pretty exhausting and free time has been scarce, so I had to rely on an AI-generated cake in celebration of this being my 1500th post. I was hoping to whisk myself to a beach to celebrate, but instead I have to make it through another major upgrade first. When I was a young medical student sitting down at a green-screen terminal to access lab results, I never imagined writing about my experiences with healthcare IT, let alone there being people who would read it on a regular basis. Thanks for supporting my work, and a special thank you to those readers who share their comments and ideas so I can keep the words flowing.


Curbside Consult with Dr. Jayne 7/7/25

July 7, 2025 Dr. Jayne


I’ve spent a lot of my career working on the “softer” side of clinical informatics, such as change management, governance, adoption, and optimization. Although I’ve implemented a couple of technologies in my career that have been dramatic, most of the time I’m working on projects that are a little more subtle.

I’m appreciative of projects like that when I have to gain buy-in from difficult stakeholders. When they don’t feel like you’re yanking the carpet out from under them, they are more likely to align with the goals and objectives. On the other hand, sometimes when projects are too low-key they’re not perceived as valuable. It’s a fine line that has to be walked.

I can’t even count the number of practices where I’ve helped implement EHRs over the years. I’ve worked with people ranging from those who have never used computers prior to the EHR to those who have been using them since birth.

In the early days of EHR, people used to talk about the “older” physicians being resistant. Fortunately, I had a good story to counter that after meeting a curmudgeonly colleague who informed me that he had been “advocating for electronic charting since long before you were born, young lady.” He and I actually competed for the first EHR-related role in our health system. I think he was a little grumpy that he didn’t get the position. I grew to appreciate his point of view as he pushed back on some of the things we were trying to do, because he always wanted to make things just a little bit better.

I’ve also worked with younger physicians who were incredibly resistant to adopting technology, particularly anything other than the one that they personally felt was the best. There’s nothing quite as entertaining as watching an Apple devotee argue with the IT team about how he absolutely, positively cannot use the PCs that are present in every shared workspace in the hospital. Folks like that were especially fun during the early days of “bring your own device” programs. They demanded to be able to use hardware that didn’t comply with the published standards.

I’ve worked with ER physicians who complained about how long it took them to do their charts, yet were found to be spending a good chunk of their day on the Zappos website. 

These examples show how differing perspectives and experiences can have a tremendous impact on the success of a project, and in turn, how those outcomes can ultimately influence the patient experience. When you have one physician in a practice who refuses to do the recommended workflow, it can cause extra work for the staff. It can also result in confusion and delays for patients who are waiting for their results or for a response from the physician.

I’ve long wondered what makes one person think a new solution is awesome and another one thinks it’s awful when they are doing the same work and caring for the same patients. An informatics colleague and I were talking about this over a recent round of cocktails. She brought up a recent study from the Proceedings of the National Academy of Sciences that looked at how different people perceive works of art.

Although I lived with an art history major for a number of years, I hadn’t heard of the concept of the “Beholder’s Share,” where a portion of a work of art is created by the memories and associations of the person viewing or experiencing it. I suppose it’s a more academic rendering of the idea that beauty is in the eye of the beholder.

The researchers behind the article employed high tech means to look at it, however, using functional MRI (fMRI) imaging to identify how people used their brains differently when viewing different types of art. Apparently abstract art results in more person-specific activity patterns, whereas realistic art produces less variable patterns. They also noted activity in different parts of the brain when looking at abstract art.

I’d love to see how different end user brains would react to differences in EHR screens and workflows. Maybe we could use that information to better predict how users will perform with different tools. Instead of looking at a subject’s brain activity while looking at a Mondrian painting, as the study did, we could see how their brains perform when confronted with different user interface paradigms.

I’ve seen EHR and clinical solution designs over the years that were jarring in color or layout. I’ve seen those that were so vanilla that nothing seemed to catch the user’s attention.

Another concept in the art world is that of shared taste. It explains why some groups of people prefer the same things, while others might find them objectionable. People typically know whether they prefer art from classical times, the Renaissance, the Impressionists, or abstract or modern artists. I would bet that we can create similar groupings around different types of clinical data visualization and how they can best be used in patient care.

Similarly, I would be interested to see if users who have certain sentiments about a given piece of technology can be grouped in a particular way, such as by specialty, user demographics, location, or tone of the program where they completed their training. Similar to the concept of precision medicine, I wonder if we could use that information to create precision training or a precision technology adoption curriculum that could help users adapt to new tools that end up in their workflows.

Even without the expense and risk of something like fMRI scans, I would bet that we could do a lot in clinical informatics to better understand our users and the learners with whom we are engaging. I’ve seen quite a few surveys that ask new employees about their experience with electronic documentation or technology in general, but they are fairly superficial. They usually have questions like, “Which of the following systems have you used?” with a list of vendor names. They don’t recognize if the user was on a heavily customized version or an out-of-the-box configuration. Most users wouldn’t know anyway unless they have experience behind the informatics curtain. 

Institutions have come a long way recognizing different learning styles and whether people prefer classroom, asynchronous, or hybrid learning methods. I don’t doubt that the training and adoption efforts that we see today might be supplanted by other paradigms in the future.

Is the beauty of the EHR in the mind of the beholder, or is it something with which users simply have to cope? Is one platform more abstract than the other? Will we ever see an EHR with a classical sense of style? Leave a comment or email me.


EPtalk by Dr. Jayne 7/3/25

July 3, 2025 Dr. Jayne


I’m not a superfan of The Joint Commission, but I was interested to see their press release about partnering with the Coalition for Health AI (CHAI) to create AI best practices for the US healthcare system. The partnership plans to develop AI tools and playbooks, and it wouldn’t be The Joint Commission without a certification program as one of the offerings.

If anyone wants to lay odds on the cost of such a program, I’m happy to run the betting pool. Initial guidance will be issued in the fall, with AI certification to follow. I’ve done consulting work around patient-centered medical home recognition, EHR certification, and other compliance-type efforts, so I’ll be looking for the devil in the details as they are released.


As a primary care physician at heart, I’m sensitive to the multitude of recommendations that we give to our patients, often all at one time. For example, a patient who is newly diagnosed with diabetes may need to have labs drawn, see a diabetic educator, visit an ophthalmologist, consult with a podiatrist, and manage prescriptions from a retail pharmacy and a mail order pharmacy. Health systems are investing in solutions to reach patients via patient portal, text, interactive voice calls, paper mail, and email, which has resulted in patients being overwhelmed. I’m intrigued by Lirio’s concept of “Precision Nudging” (they have trademarked the term) to help manage this problem.

AI is involved via their large behavior model, which aims to apply elements of behavioral science. It pulls together engagement and outcomes data with consumer understanding to identify the most appropriate channel for reaching a given patient, and interventions are tweaked along the way based on patient response.
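
I have no visibility into Lirio’s actual models, but the underlying “pick the channel that works for this patient and keep adjusting” idea maps loosely onto a classic bandit problem. A heavily simplified, hypothetical sketch with invented channel names and response counts:

```python
import random

# Hypothetical response history for one patient segment: channel -> [successes, attempts].
# This illustrates the general "learn which channel works" idea, not Lirio's actual model.
history = {
    "portal_message": [12, 100],
    "text_message": [30, 100],
    "voice_call": [8, 100],
    "paper_mail": [5, 100],
}

def choose_channel(history, epsilon=0.1):
    """Epsilon-greedy: usually pick the best-performing channel, but
    occasionally explore another so the estimates keep improving."""
    if random.random() < epsilon:
        return random.choice(list(history))
    return max(history, key=lambda ch: history[ch][0] / max(history[ch][1], 1))

def record_outcome(history, channel, responded):
    successes, attempts = history[channel]
    history[channel] = [successes + int(responded), attempts + 1]

channel = choose_channel(history)
print("Reach this patient via:", channel)
record_outcome(history, channel, responded=True)
```

A real system would obviously layer in behavioral science, patient preferences, and compliance constraints, but the feedback loop of choosing, observing a response, and updating is the core of the concept.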

I have followed other companies like this over time, but Lirio seems to get it better than others, going beyond vague concepts like “wellness” and “engagement” to actually talk about specific screening programs and revenue-generating interventions that can boost patient quality and deliver a solid return on investment. They do have a bit of a revenue cycle background, so I’m sure that helps.

I was also geeked to learn that the company’s name has meaning rather than being something that either just sounded good or hadn’t been registered yet, as one commonly sees in younger companies. It’s named after Liriodendron tulipifera (the tulip tree), which apparently is the state tree of Tennessee. Props to the marketing team for its use of the phrase “lustrous branchlets” to describe the company’s strengths. This wordsmith salutes you.

Mr. H already mentioned this, but I wasn’t surprised to see that Best Buy has sold Current Health, returning the company to its former CEO and co-founder. A Best Buy executive said that growing its home care business has “been harder and taken longer to develop than we initially thought.”

I can understand that given the performance of their booth team at HIMSS25. On one of my booth crawls, my companions and I stood in their large booth for probably 5-7 minutes chatting before anyone approached us, despite there being multiple employees in the booth staring at their phones. I didn’t mind it too much because we were enjoying their extra-thick carpet, but if they were looking to capture leads, they were falling down on the job. Once a rep finally approached, the conversation was passable, but negative first impressions are hard to undo.

As much as I think I’m with it as far as keeping up with healthcare IT news and trends, I still rely on HIStalk for information on a regular basis. There’s always some tidbit that I haven’t gotten to yet, which is not surprising given the calamitous state of my inbox these days. HIStalk was the first place I learned about the new CMS prior authorization program for traditional Medicare. I’m all for catching bad actors, such as the durable medical equipment companies that cold-call patients offering knee braces and other questionable interventions, then rely on relatively clueless physicians who have rented out their medical licenses to enable a high-volume prescription mill situation.

However, I feel like the majority of physicians caring for our nation’s seniors aren’t committing fraud. They are negotiating the complex interplay between evidence-based medicine, the costs of various treatments, and patient beliefs and preferences. Sometimes the “best” treatment is unaffordable for a given patient, or you’re working with patients who can barely afford food, let alone their medications.

They’re going after specific procedures, including knee arthroscopy for arthritis, along with skin and tissue substitutes and nerve stimulator implants. You know what else would help reduce these unneeded procedures? Greater health literacy and patient education campaigns, which are parts of public health that we continue to neglect in this country. Hopefully the program will stay focused on these high-dollar, low-benefit procedures and won’t creep into primary care as a whole.

Given the amount of data that CMS has on every prescriber’s habits, they should be able to hire some clinical informatics folks to find those who are practicing inappropriately and go after them rather than putting processes in place that annoy those who are trying to do the right thing.

image

I recently had a rough travel day with significant delays. As I was waiting for my inbound aircraft to arrive, I noticed two fire trucks pull up on the tarmac. They did a quick test that I recognized as preparing to deliver a water salute. I’ve seen it for Honor Flights that were returning to the airport and for a pilot retirement.

Since the airport was small, I could see my inbound plane taxiing at a slow speed, which was unusual given the airline’s propensity to get planes to the gate quickly, especially after delays. A few minutes later, a Marine Corps Honor Guard arrived and I realized this flight was carrying a deceased service member. The waiting passengers in the terminal gradually fell silent and stood to show their respect, with hardly anyone moving until the transfer was complete. It was a sobering reminder that no matter how bad I felt my day was, steps away from me was a family that was having one of the worst days of their lives.

As we approach the Independence Day holiday, I’m grateful for everyone who has put on a uniform and sworn an oath to protect and defend our country. Freedom comes at a high price. Thank you to all current and former service members and their families for being willing to make that sacrifice.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 6/30/25

June 30, 2025 Dr. Jayne 2 Comments

The Journal of Graduate Medical Education published a thought-provoking article this week titled “A Eulogy for the Primary Care Physician.” It reflects on the original purpose of a primary care physician as the trusted physician “who knows their health inside and out, who guides them through the complexities of the medical system, and who fosters relationships not with charts, but with people.”

This is exactly the kind of old-timey family physician that many of my peers and I thought we would become. That’s what we were trained to be during our residency programs. Little did we know the forces that would align against our being able to do that.

First, there were the turf wars. I was trained to perform a variety of procedures during residency, including minor office-based surgeries, biopsies, wound management and repair, and sigmoidoscopy. I was also trained to deliver prenatal care and perform non-operative deliveries in partnership with local OB/GYNs who served as backup.

I quickly found that I wasn’t able to do most of those things in my hospital-sponsored practice. Family physicians weren’t allowed obstetric privileges, full stop, even if we had an OB/GYN who agreed to back us up. One of the hospitals where I was forced to be on staff didn’t even have obstetrics, which somewhat limited my ability to recruit newborns to the practice.

After six months of appeals, I was allowed to seek newborn nursery privileges at a competitor hospital in an attempt to maintain that part of my skillset, although caring for infants became increasingly rare.

Second was the pressure for primary care to support the volumes of all of the other specialties. If there was a procedure to be had, I was expected to send those patients to my proceduralist colleagues so that they would have adequate volumes.

Numerous procedures can be done by appropriately trained primary care physicians in a high-quality and cost-effective manner. However, I was told that it was unseemly to hoard those procedures, and I needed to refer them out and show that I was a team player. It didn’t matter that patients would prefer not having to make a second appointment, take off work again, or pay a second co-pay.

The only things I was able to hang onto were the skin biopsies, because I could do them relatively quickly and they didn’t have significant supply needs or costs, so they were somewhat “invisible” to the medical group administrators who actually ran the show.

There were a hundred other things that steered my work as a family physician in a different direction from what I thought it would be. When I was offered the opportunity to work with the electronic health record project, I jumped at it. Maybe that would be the answer to regaining autonomy since I would be able to run reports and see data on my work without external support. Previously, I had to rely on the business office to do so via our green-screen practice management system.

Because of my protected time to work with the EHR, I was somewhat buffered from the pressure to constantly see more patients, although I was still juggling dozens of patient messages and requests on the days when I wasn’t in the office. In hindsight, I probably worked 1.25 FTEs during that time, despite being paid as a 1.0 FTE, but I was the only person in my position and I didn’t know how to push back given the pressures on the other primary care physicians in my group, which seemed worse at the time.

Although the Eulogy article cites burnout, declining reimbursement, and private equity as significant contributors to the demise of the primary care physician, I would add other elements. The consumerization of healthcare continues to be a major force, as physicians are incentivized around patient satisfaction, sometimes to the detriment of quality of care.

As an example, two areas on which physicians are incentivized are patient satisfaction and avoidance of unnecessary antibiotics. Every patient who calls wanting a Z-Pak for what is undoubtedly a viral illness, but who “wants to get ahead of it” or says “I know my body and what I need,” creates a lose-lose situation. I’ve been roasted via online review sites for refusing to call in antibiotics without seeing a patient. I’ve been threatened with complaints to the state board. I’ve been ripped in Press Ganey surveys.

My quality numbers remained high, but when you get bad reviews (justified or not), your paycheck suffers. Physicians should not be placed in these crosshairs, but we do it every day. I know it’s the proverbial dead horse, but educating patients about the risks of unwarranted antibiotic prescriptions is another public health intervention at which we’re not very good.

When I had the opportunity to expand my informatics work and change to a different environment for patient care, it was bittersweet. Although I missed the regular “continuity” patients with whom I had bonded over five years, I was glad to get out from under all the patient portal messages and communications that didn’t stop while I was out implementing the EHR, training peers who refused to work with non-physician trainers, and trying to figure out our group’s strategy for health information exchange.

I thought that would be the death of my career as a primary care physician, but little did I know that once I started working in the emergency department and urgent care settings, more than half of my work would be primary care anyway, since many in our community used those settings for their primary care services.

The Eulogy states, “The PCP is survived by the independent physician assistant, nurse practitioner, and generative artificial intelligence.” As someone who is starting to have more encounters with the patient side of the healthcare system than I would like, I worry quite seriously about how my generation will be cared for in the future.

Every time I see my own primary care physician, who is a few years older than I am, I don’t leave without asking when he sees himself retiring so that I’m not left in the lurch. Fortunately, most of my subspecialist physicians are younger than I am, so I’m less worried in those areas.

With regard to generative AI replacing primary care, I think we have many years of it augmenting rather than replacing. I’ve been unimpressed by many of the solutions that I’ve seen. I hope clinicians remain skeptical as developers work through issues with quality.

What do you think about the death of primary care in the US and how healthcare information technology might be able to resurrect it? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 6/26/25

June 26, 2025 Dr. Jayne 4 Comments

The two hot topics around the virtual physician lounge this week, not surprisingly, involved comments or policies from the new Secretary of Health and Human Services.

Physicians are dreading upcoming changes in vaccine reimbursement, which will likely come if vaccines are no longer recommended by the reconstituted Advisory Committee on Immunization Practices. Commercial payers follow the actions of government payers, which could lead to a fair amount of work for everyone with an EHR and billing system as they reconfigure their systems to follow new rules.

I can also already see the new policies fueling burnout in the organizations that I’m closest to, as physicians again have to defend their practice of evidence-based medicine. It’s not a great time to be a frontline physician, particularly one in primary care.

The other hot topic was around statements that everyone in the US should have a wearable medical device in the next four years. Continuous glucose monitoring devices appear to be the darling of the day, with much skepticism from physicians who have already had to deal with the data provided by current users. There’s not a lot of data that supports the use of the devices unless a person is diabetic, prediabetic, or has one of a handful of other medical conditions.

Just wearing a device doesn’t drive the needle, either. Other services are needed to support patients as they make changes in their health, such as dietitians, nutrition counseling, and behavioral health interventions. Those also have a cost.

It’s great to say that people should take charge of their own health, but for those of us who have been in the public health trenches for decades, we know that patients can’t always control the fact that they live in a food desert. They can’t control their genetics. We live in a nation where health literacy is close to rock bottom.

Kennedy stated, “They can see, as you know, what food is doing to their glucose levels, their heart rates and a number of other metrics as they eat it, and they can begin to make good judgments about their diet, about their physical activity, about the way that they live their lives.” Having cared for thousands of average Americans during my medical career, I would hypothesize that less than a quarter of the patients I’ve seen would be able to take a device out of the package and start managing their diet in the way that he describes without some serious intervention.

I don’t disagree that the nation needs a crash course in self-care. I’ve seen it myself as patients come to the emergency department for the common cold without having taken so much as an acetaminophen tablet. We see wounds that haven’t been washed with soap and water and sprains that were never iced, among other things, as the patients come to us and leave with a $1,000 hospital bill.

I would much rather see the country start pouring money into public health interventions that have been proven effective than see us throwing money at technology without all the ancillary services needed to truly drive the needle for patient outcomes. The nation’s many Federally Qualified Health Centers know a thing or two about this, as do the many county, city, and state health departments.

image

Speaking of high tech public health, I continue to be fascinated by the data and analytics around using wastewater for disease monitoring. I first heard of it during the height of the COVID pandemic, when it was being used to model the level of disease in our community, especially when we were having shortages of testing supplies for use in the office.

WastewaterSCAN is a nationwide system that is based at Stanford University in partnership with Emory University. It monitors 11 infectious disease indicators via bottles of wastewater that are shipped from around the country. Viral RNA material is sturdy stuff, and the diseases tracked include COVID, influenza A and B, respiratory syncytial virus (RSV), enterovirus, norovirus, hepatitis A, and many more. CDC also conducts testing, as do many municipalities, and most of the data is publicly available.

The higher-level information that is gleaned from wastewater can help public health agencies and care delivery organizations understand what viruses are surging in their communities and might be useful for creating recommendations on when to start testing for various diseases. It can also be used to inform staffing levels within facilities or to pinpoint increases that might result in an outbreak, such as high levels of a virus at an airport or tourist spot.
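Even a small informatics shop could do something useful with the published numbers. Here’s a rough sketch of the kind of trend check I have in mind, with a hypothetical CSV and column names standing in for whatever format a given feed (WastewaterSCAN, CDC, a local utility) actually publishes:

```python
import pandas as pd

# Hypothetical file and columns: sample_date, target (e.g., "Influenza A"),
# and copies_per_ml. Real feeds have their own layouts and units.
df = pd.read_csv("wastewater_levels.csv", parse_dates=["sample_date"])

flu = (
    df[df["target"] == "Influenza A"]
    .sort_values("sample_date")
    .set_index("sample_date")["copies_per_ml"]
)

# Smooth noisy measurements with a 7-day rolling average, then compare the
# latest value against the series median to flag a possible surge.
smoothed = flu.rolling("7D").mean()
baseline = smoothed.median()
latest = smoothed.iloc[-1]

if pd.notna(latest) and pd.notna(baseline) and latest > 2 * baseline:
    print(f"Influenza A is running {latest / baseline:.1f}x its median level; "
          "time to think about earlier testing and staffing.")
```

The threshold here is arbitrary. The point is that the sampling programs have already done the heavy lifting, and turning their output into an operational trigger is well within reach for most analytics teams.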

The next frontier in wastewater research involves identifying bacteria, although the process poses challenges. You can have two bacterial species that are similar, but only one causes a serious disease, and it can be difficult to differentiate between the two. Here’s to seeing what the next decade of innovation brings in this field, and to knowing that many of us are contributing to scientific advancement with every flush.

image

For some time now, I’ve been trying to clean up my provider data, including my NPI registration that still lists an address where I haven’t worked in a very long time. There are also problems with my CMS data, and the process to try to correct that is even worse.

In a typical employed provider situation, an office manager or administrator usually manages that for the physicians. But when you’re a 1099 contractor working for multiple organizations, it’s up to the physician to make it work. The governmental data is particularly pesky, because it feels like you have to get a login for site A that then gives you access to site B, but the passwords time out and are a pain to reset. If anyone has a cheat sheet for cutting through all of this, feel free to send it my way. My current clinical situation tells me I’m on my own. Maybe someone could create an AI-powered bot that could take care of it all.

image

I was back on the patient side of the equation this week and was surprised by the speed at which I received a message that I had “new test results” in the chart. Once I made it through some pesky password issues that involved using my laptop instead of my phone, I was disappointed to find that the “result” was simply a message from the lab that said my specimen had been received. It’s been a long time since I’ve done lab interface work, but I would hope that such messages might be filtered to avoid causing extra anxiety for patients.
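That kind of filtering doesn’t have to be fancy. Here’s a minimal sketch, assuming pipe-delimited HL7 v2 ORU result messages and a portal notification step that can check the result status in OBR-25 before alerting the patient. The function name and status list are illustrative, not any vendor’s actual interface logic:

```python
# A minimal sketch, not production interface code. Real sites would do this
# in an interface engine with their own site-specific rules.

REPORTABLE_STATUSES = {"P", "A", "C", "F"}  # preliminary, partial, corrected, final

def should_notify_patient(hl7_message: str) -> bool:
    """Notify only when the message carries results a patient can actually review."""
    for segment in hl7_message.replace("\n", "\r").split("\r"):
        fields = segment.split("|")
        if fields[0] == "OBR" and len(fields) > 25:
            # OBR-25 is the result status. Values like 'O' (order received) or
            # 'I' (specimen received, no results yet) are just acknowledgments
            # and shouldn't generate a "new test results" alert.
            if fields[25].strip() in REPORTABLE_STATUSES:
                return True
    return False
```

The real-world logic would be more nuanced, with handling for corrected results, sensitive result holds, and the like, but even a coarse filter would have spared me the password gymnastics for a non-result.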

I was also disappointed by the quality of the visit notes that I could see. I was weighed during my visit, but my height was not measured. However, the note had a documented height but no weight, although a BMI was there. The combination of vitals showing up in the note seemed odd. The note from the first physician that saw me had no fewer than six exam findings that were most certainly not examined. Although one could blame templated documentation, there really is no excuse. If you’re not doing an ear, nose, and throat exam on every patient, it takes about 12 seconds to remove that from your template forever.

When you’re a patient, what’s your biggest frustration with healthcare information technology? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 6/23/25

June 23, 2025 Dr. Jayne Comments Off on Curbside Consult with Dr. Jayne 6/23/25

You really don’t know how much you rely on certain technologies until they’re not available.

At one of our local hospitals, a PACS upgrade during daytime hours threw quite a few clinicians for a loop. I don’t think the IT teams really understood how important muscle memory is for clinicians who are trying to work efficiently in the EHR while seeing patients. Although a workaround was provided, it required physicians to go to a different part of the EHR to view images.

It sounds like some users had security issues and weren’t able to do their work from the new location, which caused frustration that was made worse by long wait times when they called the help desk. Even for those who were able to use the new link to access images, there were complaints that it took half the shift to get used to the new workflow. Later in the evening, the system reverted, which forced yet another adjustment.

I’ve done plenty of upgrades in my career and I’m not sure what would be happening behind the scenes that would justify doing an upgrade during daytime hours. Most of the upgrades I’ve been involved in were conducted overnight so that they caused minimal impact to clinical workflows.

Based on the fact that nearly all of the IT decisions I’m seeing lately are made with significant attention to cost, I can hypothesize that cost likely played a role. Still, I wonder if the people looking at that cost-benefit equation looked beyond the IT resources to include the cost of clinician inefficiency and the risk of clinical quality issues.

A colleague shared the downtime notification with me because they knew I wouldn’t believe it otherwise. I was surprised to see that it included mention of another clinical system that was being taken down from midnight to 2 a.m. the following weekend, so I’m sure there was some reason that this one was being done during peak hours.

If I had been on the leadership team that approved the communication, I would have recommended a mention of why we were doing the upgrade during the day. Users would at least understand that we had thought about them and were forced by extreme circumstances to do it that way.

I also was a fan of running our communications past people in different settings before finalizing them — including academic physicians, hospitalists, and community physicians — to make sure that we were covering all perspectives.

Just out of curiosity, I looked back through some communications from one of my hospitals to see if I could identify patterns from the biweekly newsletters. I was surprised to see that the newsletter had the same top blurb over a six-week period without any changes, which to me would create a risk for people ignoring the newsletter because they may have felt like they had already seen the materials.

I also noticed that over the last six months, the newsletter had become a compilation of unrelated blurbs rather than a more cohesive document. In the current version, each entry has a different font and color scheme, including color choices that don’t meet accessibility guidelines for colorblindness. The items also appear in a different order every time, with no standard formatting.

I would think that adding a framework to it might be useful so that people can quickly identify the items that are important to their work. Maybe start with a section for global updates that impact everyone, then move to updates by specialty, care setting, or a host of other categories that would keep people from having to wade through tons of irrelevant information.

I thought about offering some feedback (after all, I’m still a dues-paying member of the medical staff) but there wasn’t any information in the newsletter about who to contact if you have questions. I’ll just stay in the back row with my “Courtesy/Non-Admitting” privileges and hope I don’t have to look at any patient charts any time soon.

image 

I have several major presentations coming up. For once, my week wasn’t completely full of back-to-back meetings. I decided to do some personal development while I was creating the slide decks and see what AI has to offer.

I try to make my slides as non-wordy as possible, often choosing images that tell a story, or images that prompt me to talk about certain content rather than having too many formal text elements on the slide. I always create an outline-style summary first, so it seemed ideal to be able to take that outline and hit it with some AI and maybe save a little time. I tend to be a little stuck in my ways about backgrounds and formatting, so I was looking forward to spicing things up a little bit.

Unfortunately, what my AI friend came up with was entirely unusable. Not only did it just drop the outline into slides in a somewhat disjointed fashion, but the backgrounds it selected bloated a 25-slide deck up to over 80 MB in size. I could see that being possible if I were incorporating high-resolution radiology images or something like that, but this was just from backgrounds and non-critical design elements.

I guess I’m back to creating my presentations in the old-school way, at least until I have time to research whether there is some other way to use the tools differently, or until one of the savvy college interns agrees to give me a quick tutorial on how to not wind up in that place again. When I finished that slide deck in my usual way, it ended up well below 2 MB, so I’m still not sure what happened the first time around.
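If I ever get around to that research project, my first step will probably be figuring out where the bloat lives. Since a .pptx file is just a ZIP archive with its embedded images under ppt/media/, a few lines of standard-library Python can rank the offenders. The filename below is a placeholder, and this is a quick diagnostic sketch rather than a polished tool:

```python
import zipfile

def biggest_media(path: str, top: int = 10) -> None:
    """Print the largest embedded media files in a PowerPoint deck."""
    with zipfile.ZipFile(path) as deck:
        media = [
            (info.filename, info.file_size)  # file_size is the uncompressed size
            for info in deck.infolist()
            if info.filename.startswith("ppt/media/")
        ]
    for name, size in sorted(media, key=lambda item: item[1], reverse=True)[:top]:
        print(f"{size / 1_048_576:6.1f} MB  {name}")

biggest_media("ai_generated_deck.pptx")  # placeholder filename
```

At the very least, it would tell me whether the AI backgrounds were enormous images or whether something stranger was going on.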

One of the presentations I was creating was for first-year medical students, introducing them to clinical informatics and explaining the kind of work done by physicians in this space. The incoming students are entering an educational environment that’s so different from where I trained, and I have to say that I envy them a little bit. Here’s to hoping that I don’t wind up being talked about as someone who was out of touch or uninteresting. Fortunately, my session is a lunchtime one with free food, so I don’t think attendance will be a problem.

If you could go back in time to when you were first learning in your field, what do you wish you had done differently? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 6/19/25

June 19, 2025 Dr. Jayne 5 Comments

image 

As a consultant working with care delivery organizations, I see many of them using “access” as some kind of a performance mantra. Whether it’s access to book a visit with a physician in an office or access to the emergency department, there is constant pressure to make sure patients are formally scheduled for some kind of revenue-generating service with the organization.

I was recently part of a discussion with other physicians who were talking about how access is being conflated with value. One example was the push for patients to book a visit with a provider, without giving full consideration to whether the provider had the correct experience and knowledge to actually treat the patient. It doesn’t help to get the patient in quickly if it’s to the wrong office, since you’re ultimately going to have to book a second appointment elsewhere to meet their needs.

Another example was the boom in patient portal messages. Patients can reach their physicians quickly, but that’s not helpful when it burns providers out. It also creates a risk that patients won’t receive the correct treatment because someone is trying to read between the lines of a series of message exchanges to build a diagnosis and treatment plan rather than having a direct conversation with the patient, either in person or via virtual care.

Another physician mentioned secure texting, which creates a staff access problem “where it’s easy to just fling messages out there rather than thinking through what you’re really asking. It seems like people formulated their questions better when they knew they had to make a phone call.” There may have been cocktails involved in this discussion, leading one of my colleagues to ponder the fact that “patients have access to their notes, but they’re useless when the notes suck.”

We often look at ways to use technology to create more access, but these comments remind us that there might be “good” kinds of access along with those that are less desirable. I’m hoping that someone might read this and think it through the next time they’re in a meeting pushing for increased access. It’s not just about getting bodies through the door, messages to the provider, or notes to the patient. We need to get to a point where greater access is providing greater value and driving patient outcomes. Otherwise, it’s just a buzzword.

From Navy Fan: “Re: remote work. I’ve enjoyed being a remote worker for 15 years now and I hate seeing people mess it up for the rest of us. Did you see the story about Sentara Health, where remote workers accessed patient information using false identities?” I hadn’t seen it before a reader highlighted it, which reminds me how much we appreciate our readers when they bring us a good story. Apparently, the system hired remote workers to manage lab requisitions, but eventually discovered that they were not based in the US and may have been misrepresenting their identities. The situation impacted patients who had lab tests performed between January and April of this year. The bad actors had access to plenty of protected health information, including names, dates of birth, and Social Security numbers. A manager became concerned in early April when they noticed that the workers attending virtual department meetings did not match the photos that were submitted during hiring. Sentara Health is offering free credit monitoring and identity protection services.

I wanted to add my two cents to some of Mr. H’s comments earlier this week about virtual care prescribing of ADHD medications. He mentioned a study done at Massachusetts General Hospital that showed that at least with their virtual care model, there was not an increased risk of addiction in patients receiving stimulant medications. Mr. H noted that the findings don’t necessarily apply to freestanding telehealth companies that have been accused of cranking out prescriptions, especially those that are investor-backed startups where clinicians are paid on a per-visit basis.

Although I haven’t treated ADHD via telehealth, I’ve worked for several different freestanding telehealth companies and the pressure to prescribe is real. Large percentages of providers working for some of the big firms are 1099 contractors, and some of them are trying to complete visits every three or four minutes, which means they’re not doing a detailed visit with the patient. Some of the companies are focused on patient satisfaction metrics, which means that if you don’t give the patients exactly what they request, you’re going to receive scrutiny due to your perceived poor performance. Some in-person organizations are hyper-focused on the same metrics and place similar pressure on their physicians, but the risk is much lower with in-person care because you can do an actual examination and can leverage your care team to ensure you have a more comprehensive history from the patient.

Bad news for those of us who like a good nap: a recent research article showed that certain kinds of daytime napping are tied to an increased risk of death in middle- to older-aged adults. The study looked at 86,000 non-shift workers. Those who took longer naps, had high variability in nap duration, or took more naps around noon or early afternoon were the most affected. One of the takeaways from the study is that physicians should be asking not only about sleep habits, but specifically about daytime napping. Given all the other data-driven recommendations, I don’t see this one being added to the formal recommendation set anytime soon.

My best time for napping is around 3 or 4 p.m. when my energy is fading and I just need a break. Conference calls during those times are the worst, but sometimes they’re unavoidable for me since I work in all of the US time zones. Based on the data, I should be able to mitigate my risk somewhat by taking consistent short naps in the late afternoon. That seems like a much more enjoyable option than some of the other things I can do to reduce my risk of all-cause mortality, especially since I’m already doing most of them.

What’s your favorite time and place for a nap? Do you like a hammock on the beach, or are you one of the folks I spotted catching a few winks on a park bench after leaving the local winery? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 6/16/25

June 16, 2025 Dr. Jayne Comments Off on Curbside Consult with Dr. Jayne 6/16/25

Healthcare isn’t the only industry grappling with how AI should, or should not, fit into our daily work.

Some friends who are teachers sent me the transcript of a recent discussion about how AI is impacting the ability of humans to think and whether it will alter our abilities for critical thinking. The discussion linked to an article “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking” that was a great read. The author set out to examine how AI tool use relates to critical thinking skills and focused on the concept of cognitive offloading as a potential mediating factor. Cognitive offloading happens when thought processes are outsourced to technology instead of being developed independently.

The study found that higher AI tool use had a negative impact on critical thinking abilities. Younger study participants (ages 17-25) were more dependent on AI tools and had lower critical thinking scores compared to participants who were older than 46 years. It also noted that regardless of AI usage, better critical thinking skills were associated with higher educational attainment, which should be important to anyone who has a stake in ensuring a well-educated population. The study found that more highly educated people maintained those critical thinking skills even when using AI, which supports the idea that how we use AI is more important than whether we use it at all. The study also found that AI use encourages passive learning, where students consume information rather than create it.

The study had multiple hypotheses about the role of cognitive offloading, including one that suggested that moving thinking tasks to external tools would reduce the cognitive burden on individuals. Instead, they found that the reduced cognitive load can lead to reduced critical engagement and cognitive analysis. According to the author, this phenomenon has been described as the “Google effect,” where being able to easily find information online leads to reduced memory retention and problem solving skills.

That would seem to go along with what many of us already think, which is that the internet is making us dumber. Although to truly explore that statement, you would also have to look at the proliferation of TikTok videos and the nonsense seen all over social media on a daily basis.

I had the chance to speak with a couple of teachers who were blissfully enjoying their summer vacation, so I figured I would ask for their thoughts on AI and how it is impacting education, beyond the obvious concerns about AI-generated work.

One said that plagiarism has always been an issue, and taking from AI sources isn’t a lot different than taking from other authors, although AI might be easier to catch because of stilted language that would have been caught by editors of more traditional sources. She also noted that she’s applying some of her existing “how to spot fake news” lesson plan content to AI, encouraging students to be skeptical about what AI is telling them, to ask about bias, and to consult multiple sources to ensure accuracy. She recommends that students do their best to answer questions in more traditional ways first, then use AI to validate their findings.

The other teacher felt that better education is needed on how AI works and the risks of using it. He likened it to when GPS units first came out, and there were reports of people driving off the edges of roads that were closed because they were blindly following the GPS and not paying appropriate attention to their surroundings. He also noted that although there are certainly concerns about AI use interfering with academic rigor, he is more worried about his teenage students being emotionally harmed by AI-generated content, such as deepfake photos or videos.

He noted, “When I was in school, people spread rumors, but now you can have altered videos going around that are a lot more difficult to combat.” As a proud member of Generation X, I don’t envy the students growing up in this environment. Still, I’m grateful for teachers who recognize these challenges and work to prepare students not only to be ready for the future, but also to protect their own mental health.

The use of AI by medical students and residents has been a hot topic for my colleagues who are working in academic settings. There are concerns that students have become used to looking up facts and aren’t memorizing information the way they used to, which places them at risk when resources aren’t readily available. Whether it’s a downtime event or a rapidly evolving clinical situation, I know I’m glad that I have certain pathways memorized to the point where they just happen naturally in my thought process.

Of course, I’ve allowed some things to go by the wayside and I would have to look them up if I ever needed them. (Cockcroft-Gault equation, I salute you.) One faculty member said his school is using AI within its case-based learning modules for medical students in hopes that the approach will build diagnostic reasoning skills rather than sabotage their development.

The faculty physicians I spoke with had different thoughts about the use of AI by resident physicians, since they’ve graduated from medical school and have the MD or DO behind their name and are therefore able to treat patients with some degree of independence even if they may not be fully licensed. Universally, they had concerns about using non-medical AI solutions due to the risk of hallucinations and the safety risks to patients. They were also concerned about students using those resources to learn procedures and algorithms, since students wouldn’t be aware if what they were reading was incorrect compared to what they might learn reading a more authoritative resource such as a medical textbook or journal articles.

All but one said they conduct their teaching rounds in an AI-free environment where participants are expected to contribute to the discussion without the benefit of external resources.

That conversation was limited to faculty in my immediate area. I suspect that attitudes might be different in parts of the country that are quicker to adopt new technologies. I would be interested to hear from informaticists who work with medical schools or graduate medical education programs about how your institutions are approaching AI and what best practices are being developed.

Is AI really going to make healthcare better, or is it another shiny object that will eventually lose our admiration? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 6/12/25

June 12, 2025 Dr. Jayne Comments Off on EPtalk by Dr. Jayne 6/12/25

From Boomer Sooner: “Re: Stanford’s EHR summary tool. The Department of Defense also recently launched an AI summary tool to help with the review of applicant records.” I know a thing or two about the process that military applicants go through, especially those who are applying to the military service academies or are going through the selection processes for highly selective fields. The onus of trying to get all the records to the right place is on the applicant, and it can be tricky when a practice doesn’t release records quickly. One of my favorite candidates said that in that process, the applicants who were military dependents had a bit of an advantage because their records were more easily accessible by reviewers.

The new tool, which was developed by the Innovation Facilitation Team at the US Military Entrance Processing Command (USMEPCOM), creates AI-enabled summaries of medical documents, reducing the time required for provider review. The summary can be seen in the MHS Genesis system as an encounter summary.

image

I was excited to learn about a recently enacted Arizona law that is aimed at protecting physicians and patients from unintended consequences that are related to AI. House Bill 2175 is designed to keep health insurance companies from using AI as the ultimate decision maker as they review claims and deal with medical necessity appeals and denials. It also applies to prior authorization requests and recognizes that cases that require medical judgment should be reviewed by licensed medical professionals with the appropriate training, experience, and ethical responsibility that is needed for clinical decision making. The law was introduced with the support of the Arizona Medical Association and various care delivery organizations and advocacy groups and goes into effect in 2026.

Nebraska is also addressing hot button healthcare issues with the Ensuring Transparency in Prior Authorization Act, which requires insurers to make their prior authorization requirements visible on their websites. Similar to the Arizona law, it prevents AI from being the sole basis for a denial of coverage. It also requires a 60-day notice period before payers can add new requirements. We often think about healthcare IT in terms of provider side organizations, but plenty of tech folks are working on the payer side. It will be interesting to see how much work is done on websites and how quickly it happens. I’m betting that payers drag it out until the last minute, knowing that it doesn’t go into effect until January 2026.

One more state wading into the healthcare fray is Indiana, which recently enacted a bill that requires non-profit hospitals to either lower their prices or lose their tax-advantaged status by 2029. Hospitals will be required to submit audited financial statements that show a decrease in their prices to match or fall below the statewide average. Failure to submit the audited statements can result in a $10,000 per day penalty. The bill has other interesting features, namely creating a state-directed payment program for hospitals as well as a managed care assessment fee. One provision requires insurers and health maintenance organizations to submit specified data to the all-payer claims database, and another aims to reduce drug costs for the state employee health plan.

image

I wasn’t aware of Guidehealth until the company announced this week that it had received a $10 million investment from Emory Healthcare. As one would expect, the solution has an AI-enabled component. It advertises “AI-driven intelligence with human-centered care” using medical assistants that are “trained in data science and empathy.” They are branded with the trademarked Healthguides moniker. The company plans to use the additional investment to add AI-powered virtual care navigation that supports analysis of patient-reported data and interventions that target fall risk or depression screenings.

Guidehealth was already working with Emory’s Population Health Collaborative to boost quality scores under a Medicare Advantage contract. I would be interested to understand the medical assistant training and whether unique hiring algorithms are being used to find individuals with a particular level of empathy. In my experience, that’s not only hard to find at times, but difficult to enhance with training.

Speaking of AI, over the last year a couple of articles looked at AI-generated messages to patients and found that those with an AI origin were more empathetic. A new study that looked at medical queries across the US and Australia found the opposite. The AI-enabled responses were more accurate and professional than human responses, but lacked emotional depth and also raised concerns about data bias. I’m sure we’re not done with this one, and many more research efforts will be looking at the phenomenon.

While many organizations are looking at technology solutions to close gaps in care, particularly in preventive services, a recent study showed that for cervical cancer screening, lower tech interventions can still drive the needle. Researchers looked at patients in a safety net care setting and compared rates of cervical cancer screening. Patients who received a mailed self-collection kit along with a telephone reminder had greater participation (41%) than those who received a telephone reminder alone (17%). It just goes to show that nudges aren’t enough. We need to make it easy for patients to get the recommended services rather than just telling them they need to do it.

From Weird Al: “Re: earwax as the newest precision medicine tool. I wonder how much these tests will cost?” A BBC article notes that wax could contain biomarkers for cancer, metabolic disorders, and even Alzheimer’s disease. Since ear wax is relatively stable, it might be able to show longer-term trends with various chemicals. There’s a team at Hospital Amaral Carvalho in Sao Paulo that is looking at cerumen for cancer diagnosis and monitoring, and several other institutions are conducting research.

Having spent many long hours in the emergency department and urgent care centers, I feel like I’ve worked with more than my share of ear wax. Running tests on it isn’t as cool as diagnosing conditions using a Star Trek-style tricorder, but here’s to the next generation of research and seeing if we can develop tests that are not only less invasive, but also cost-effective.

What healthcare technology advancements do you feel have really changed how we approach patients or conditions? Are they glamorously high tech or startlingly low key? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 6/9/25

June 9, 2025 Dr. Jayne 2 Comments

People often ask me about the kinds of things that excite me within healthcare IT. I have to admit that despite the amount of money that has poured into the industry over the last few years, I don’t run across things that I think are cool as often as I would like.

Although I’m enthusiastic about new developments, a lot of companies appear to be trying to jump on a bandwagon. Plenty are hawking solutions in search of a problem, while ignoring the real problems that clinicians face each and every day.

I was glad to see that Stanford Medicine is going after a solution that could be a game changer for clinicians. Their new ChatEHR platform is getting a lot of buzz, and rightfully so. The ability to effectively query the medical record and find information quickly would create a tremendous advantage for clinicians.

Back in the days of paper charts, we thought a hospital stay was complicated if the patient’s visit documentation expanded into a second chart. Sometimes patients who had been there for a while even had a third or fourth chart. I cared for quite a few patients who were long-term residents of the inpatient units. I once dictated a discharge summary for a pediatric patient who had been hospitalized for 18 months. I was extremely grateful to the different residents who had created transition summaries whenever one of them rotated off that particular medical service. It allowed me to draw the overall summary from those interim summaries rather than having to dig through 550+ days of documentation.

It should also be mentioned that good or bad, hospital notes were shorter in those days. Although an admission History and Physical or a Discharge Summary might have been a couple of pages, the average daily note was a couple of inches long on the page and included much less regurgitated information than notes do today. Sometimes they were borderline illegible, which I agree is a patient safety risk, but they cut to the chase.

I always enjoyed the notes of a particular infectious disease consultant who wrote his notes in bullet format and put the truly important items in all caps. Now, even a simple daily progress note can be several pages long. It feels increasingly difficult to find the information that’s important.

EHR vendors have tried to combat this by creating various summary screens, tables, dashboards, and other elements. Although some of them are truly awesome (hip, hip, hooray for graphing and trending of lab values and vital signs data) they don’t do well at capturing narrative information that is still frequently found in providers’ notes. Often it’s the narrative comments that really tell the story of what is going on with the patient. This is where using AI to better harness that information can deliver real value.

When I read the initial description of the Stanford tool, it reminded me of working with a human scribe in the emergency department. Our scribes were phenomenal and did a great job of anticipating the attending physician’s questions and having the answer ready by digging through the different screens while we were talking with the patient. Their ability to multitask was much appreciated, although not every scribe is that proficient. Many physicians don’t have scribes, so their thought processes are fragmented while they try to simultaneously hunt for information and talk to the patient, their family, and the care team. Stanford leadership called out the importance of having this functionality in the clinician’s workflow.

It should be noted that several EHR vendors have been working on this, but there are some limitations to a vendor-driven approach, at least in my experience.

I’ve worked with more than a dozen EHRs over the years, and many different instances of the same two or three EHRs. Despite the idea of vendor-driven standardization, when you’ve seen one installation of a big EHR, you’ve seen one installation of a big EHR. Unless the vendor is strict about preventing customization, care delivery organizations have been known to customize themselves into a corner in the name of trying to enable their own unique workflows.

With the health system driving the AI search and summary efforts, not only can those local customizations be addressed, but it would also seem easier to incorporate source material from other systems. That could be a different EHR, legacy records, HIE information, or state registry information.

The Stanford team has been working on their solution since 2023, so it’s not something that an organization can just throw together overnight at this point. The model is in limited use, with just over 30 clinicians at Stanford Hospital working with it and providing feedback on its performance and usability. Their goal is to roll it out to other clinicians at the facility as well as those at other facilities within the larger organization. It will be interesting to see how that timing looks and how quickly they can achieve more distributed utilization.

The team is also developing automated tasks within the tool, including one that looks at the records of potential transfer patients to determine whether they can be received and others that could help evaluate patients for hospice placement.

As I was reading about the solution, I assumed that it would have metadata or citations to identify the origin of the data in the summaries. It sounds like that is a feature on the “coming soon” list, but I personally think that’s an essential piece that is needed to gain clinicians’ trust. I know plenty of physicians who don’t trust their support staff to take a patient’s blood pressure properly, which results in the clinician rechecking it on every patient, so doing the change management work that is needed to create buy-in from end users will be important.

Seeing expensive solutions in place that clinicians don’t use is one of the most frustrating things I saw regularly as a healthcare IT consultant, but I know that the “AI” label will create a lot of clinician interest right off the bat regardless of how robust the solution might be.

I’d be interested in hearing from other organizations that might be working on similar projects, or from EHR vendors that are also trying to make this happen. What information is the easiest to access, and what ended up being more challenging than you expected? How are clinicians receiving the solution, and what kinds of enhancements are they asking for right away? If you’re a clinician, I’d be interested in your thoughts on this kind of tool and what you would need to feel that it was reliable. As always, leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 6/5/25

June 5, 2025 Dr. Jayne 1 Comment

As one might expect, the hot topic around the virtual physician lounge this week was the accuracy of the “Make America Healthy Again” report, which appeared to be at least partially co-authored by some unsupervised AI. The report exaggerated findings and cited studies that were nonexistent. As someone who has spent countless hours serving on the conference committees that review scientific findings, and who knows how hard actual scientists and researchers work, I find it particularly offensive. The report doesn’t even have listed authors, which is telling. I wouldn’t accept bogus citations from first-year undergraduates when I was a teaching assistant for “English 101: Thinking, Writing, and Research,” so it’s incredibly difficult for me to see this kind of thing happening among our nation’s leaders.

The physicians I work with are fed up with what they are seeing come out of CMS and the Department of Health and Human Services, knowing that not only do public health interventions such as vaccines or fluoride in the water supply reduce morbidity and mortality, but that they are also cost-effective. For those of us who have spent our lives in the pursuit of evidence-based treatments, improved outcomes, and careful spending on patient care, the cognitive dissonance we feel is profound when patients bring this pseudo-science into our exam rooms. We’re tired of being told that we’re “in the pockets of big pharma” and that we’re making money off of vaccines – neither of which could be further from the truth for the average primary care physician. It’s almost as bad as the cognitive dissonance we felt during COVID. If I were seeing patients full time right now, I doubt I’d be able to make it through a full day of clinic. I have the highest respect and gratitude for my clinical colleagues who do it every day, and it makes me want to work harder to support them in whatever way the informatics team is able.

I had a chance to speak with some of my favorite EHR folks today, and we were talking about the challenges that hospitals and health systems will be facing in the next few years. Everyone seems to be struggling financially and looking for ways to reduce expenses because of the difficulty of forecasting income these days. It’s difficult to fine tune your assumptions if you don’t know whether there will be Medicare cuts, Medicaid cuts, or changes in how commercial insurance companies are paying due to changes with the first two. There are still concerns about telehealth payments, coverage for remote patient monitoring, and the ability to cover costs for other services that can drive the needle for health outcomes long term, such as weight management programs and preventive services.

Managing human health is a long game, with things that happen to us in childhood potentially driving health outcomes decades later. Since many of the players in our system are more concerned about generating profits for investors and shareholders on a quarter-to-quarter basis, it seems less common for organizations to consider taking less profit on a quarterly or annual basis in order to play a much longer multi-year game. There are also so many complex forces at play when you consider health outcomes, from access to high-quality food to the availability of preventive services. I remember back in the day when we struggled to care for patients with diabetes because Medicare wouldn’t pay for diabetic testing supplies until the disease reached a certain severity. Sure, test strips are expensive, but so are amputations and dialysis services, and it took years to get those coverage decisions amended.

I look at the IT budgets of some of the health systems I’ve worked with over the years, and on the surface they seem insanely high. However, when you look at the number of things we’re trying to do now using technology, it’s easier to understand. There are so many more technology workflows in the average patient’s care now – from online scheduling to contactless check-in to online bill pay and beyond. We have automated medication cabinets rather than candy stripers running medications down the hall on carts (which is actually how I started my clinical career, back in the day). We are trying to reach patients on so many more levels – from pre-visit education to in-visit technology support to post-encounter care, and we never did those things before. They all cost money, but technology isn’t necessarily to blame. They’d cost significantly more if we were trying to conduct those same workflows with humans.

I certainly don’t envy my CMIO and CIO friends who have been told to cut budgets across the board without regard to how those cuts are going to change outcomes and potentially impact patients. My generation of physicians is seeing quite a few early retirements and I fear that more will be on the way as organizations try to trim salaries by making it more difficult for physicians to make the same amount of money. I see plenty of hospitals that are being penny wise but pound foolish in how they are managing staffing models for nursing and ancillary services. I can’t help but think it is going to get a lot worse before it starts to get better. I’d be interested to hear how healthcare leaders are approaching those arbitrary budget cuts. I think I’d be tempted to bring out the Magic 8 Ball as I was assessing line items, just to bring a bit of levity to a difficult situation.

image

Mr. H shared this photo earlier in the week, and I cringed at the disingenuous caption. Anyone who thinks that caring for patients after a cyberattack is “caring through the unexpected” is delusional. Call me cynical, but we should all be expecting this, every single day. Hackers are getting more sophisticated every day and it’s only a matter of time before an organization gets hit. If you haven’t refreshed your downtime plans or had a drill recently, it’s time to do both.

image

I would be remiss if I didn’t say, “Happy Birthday, HIStalk!” since everyone’s favorite healthcare IT rumor mill began around this time in June 2003. Those were interesting times, when organizations were rolling out technology because they wanted to improve patient care, reduce physician documentation burdens, and save a lot of trees. Fast forward, and although we’ve met some of those goals, we’ve made others worse. Here’s to seeing what the next two decades of healthcare IT throws our way.

Do you have a favorite HIStalk memory? Leave a comment or email me.

Email Dr. Jayne.
