
EPtalk by Dr. Jayne 9/4/25

September 4, 2025 Dr. Jayne 4 Comments

In the spirit of “no good deed goes unpunished,” insurance giant Cigna Healthcare has created a new reimbursement policy that adds scrutiny for certain high-level evaluation and management codes, which could lead to those visits being downcoded.

We saw this type of review during the early days of EHR adoption, prior to Meaningful Use. Physicians began using the power of the EHR to more accurately document the work they had been doing, but perhaps not documenting as well as they could have. When practice management systems picked up on that additional documentation to suggest higher billing codes, there was a bit of backlash in some parts of the country. Fortunately, my health system had a detail-oriented coding and compliance department that was willing to go to the mat for our physicians, so we didn’t see much negative impact.

I wonder if this is partly being driven by increasingly detailed documentation that is being generated through ambient documentation systems. I am curious if organizations are changing internal revenue cycle management policies to get ready. Feel free to reach out if you’re doing something different to prepare for this or if you feel targeted.

With recent changes to federal vaccine recommendations, some professional and clinical organizations are coming out with their own guidelines, including the American Academy of Pediatrics, the American Academy of Family Physicians, and the American College of Obstetricians and Gynecologists.

It used to be easy to pick the guidelines that would be used to inform your EHR’s health maintenance and vaccine reminder features, but things just got a little trickier. I’m interested to learn if organizations will be incorporating these varied guidelines or instead will stick with the revised federal guidelines and leave physicians to shoulder the cognitive burden of remembering the other guidelines.


Sometimes I see headlines that don’t make sense. This one from CMS promotes its “Crushing Fraud Chili Cook-Off Competition.” I went to the linked website to see if it helped me make sense of it. I get the cook-off analogy (or bake-off, as some describe it), but I don’t know why they doubled down on the “chili” aspect, which is also included in the challenge’s logo.

The competition is designed to identify ways to reduce labor-intensive processes. As someone who has cooked a lot of chili in her life, I wouldn’t define it as a particularly challenging dish. I guess “steel cage match” didn’t resonate with the CMS folks, but it would draw more attention than a chili cook-off with no chili.

I’ve been in healthcare a long time, but somehow I missed out on this annual Most Beautiful Hospitals competition. The 2025 winners that were announced this week range from pediatric subspecialty to critical access hospitals. I’m sure people prefer to get their care in places that are aesthetically pleasing or provide a more healing and recuperative environment, but based on my last few care encounters, I would settle for one that has decent wayfinding and communication that go beyond the bare minimum.

From AI Troll: “Re: Taco Bell. It is using AI in its drive-throughs.” The piece details the issues the company has had in trying to implement AI-powered voice ordering. It has been used at 500 locations, and although some implementations have been successful, others have been challenged by people placing wildly inappropriate orders such as 18,000 cups of water.

I used to work at a healthcare facility that was next door to a Taco Bell. I saw many orders being placed by our paramedics and other support staff. The franchise couldn’t even get orders right with humans in the loop on both sides of the order, so I don’t have a lot of confidence that AI would be helpful there. I would personally rather order through an app than argue with interactive AI, but then again, I’m not the demographic that Taco Bell is likely looking for.

From Mascot Wannabe: “Re: health systems and stadium naming rights. Here’s a weird one.” People have spotted stickers around Chattanooga, TN that promote the naming of the new minor league baseball stadium after Erlanger Health. However, the health system denies being behind the stickers, which say, “We bought the best baseball stadium naming rights in Chattanooga” and feature an outdated Erlanger logo.

The health system’s CEO is quoted as saying that it’s “an investment that’s going to have a great return for Erlanger and the community,” but I haven’t seen anyone quantify the ROI of such deals. If you’re in the know, feel free to reach out anonymously.

Turning to a non-tech topic for a change, this BMJ Open article on physician attire caught my attention. The authors did a systematic review of patient perceptions of physician dress to see if it impacts the physician-patient relationship. They identified studies that were published from 2015 to 2025. They found that patient preferences varied based on specialty, clinical context, and physician gender.

Some studies have found that combining casual dress with white coats may signal approachability in primary care and ambulatory settings. Scrubs were favored for emergency and operative environments, where they signaled preparedness and professionalism. Male physicians were perceived as more professional when wearing formal attire with white coats, while female physicians in similar attire were often misidentified as nurses or assistants.

I recall a dustup in a large California-based integrated health system a while back. A new OB/GYN department policy specified that female physicians must wear “hosiery,” but had no similar recommendation for males. Administrators couldn’t justify the change since unspecified hosiery isn’t considered personal protective equipment. If they had a Victorian aversion to bare ankles, it would have made more sense to require coverage with clearer language. Physicians responded by wearing silly socks to prove a point, and the policy quickly vanished.

What do you think defines professional attire? Should physicians consider ditching the white coat or keeping it for historical value? Leave a comment or email me.

Email Dr. Jayne.


EPtalk by Dr. Jayne 8/28/25

August 28, 2025 Dr. Jayne 1 Comment

Researchers from Indiana University have created an algorithm that helps clinicians search through patient data from health information exchanges and other sources. The tool identifies the most relevant data for a given visit such as in the ED, where surfacing key information quickly can impact treatment decisions.

It also suggests next search terms based on those used by other clinicians, similar to what we’re used to on retail and commercial platforms. The team has earned two patents for its work.
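The announcement doesn’t describe the algorithm’s internals, but next-term suggestion of the kind described is often built on simple co-occurrence counts over prior users’ search sessions. A minimal sketch of that idea (all terms, sessions, and function names here are hypothetical, not the IU team’s actual implementation):

```python
from collections import Counter, defaultdict

def build_follows(sessions):
    """Count how often one search term immediately follows another
    across prior clinician search sessions."""
    follows = defaultdict(Counter)
    for terms in sessions:
        for current, nxt in zip(terms, terms[1:]):
            follows[current][nxt] += 1
    return follows

def suggest_next(follows, term, k=3):
    """Return the k terms most often searched right after `term`."""
    return [t for t, _ in follows[term].most_common(k)]

# Hypothetical search sessions from other clinicians
sessions = [
    ["chest pain", "troponin", "ecg"],
    ["chest pain", "troponin", "d-dimer"],
    ["chest pain", "ecg", "troponin"],
]
follows = build_follows(sessions)
print(suggest_next(follows, "chest pain"))  # → ['troponin', 'ecg']
```

Real systems layer in relevance ranking, specialty context, and privacy safeguards, but the “people who searched X next searched Y” pattern familiar from retail sites reduces to counting like this at its core.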

Public health informatics is a key domain that must be mastered to obtain board certification in clinical informatics. I hadn’t done much work in that area when I prepared for my board exam, but I found it to be fascinating. It’s also challenging due to limited US public health funding and the need to work across disparate systems — state registries, public health center clinics, disease surveillance platforms, and environmental data sources.

I’d like to give a shout-out to the public health informatics teams in Mississippi that provided the data that led state health officials to declare a public health emergency over rising infant mortality rates. That declaration lets the state mobilize resources it otherwise couldn’t.

Mississippi has previously been on watch lists for its high numbers of preterm births. It is also a “maternity-care desert,” with wide regions lacking hospitals that offer obstetric care.

Informatics will underpin many of the proposed solutions, such as improving standardization of care, expediting transfers to different levels of care, monitoring prenatal care opportunities, expanding home visit programs, addressing gaps in maternal care, and improving patient education and engagement around safe sleep practices. If you’re working on any of these healthcare IT projects in Mississippi, we’d love to hear from you.


Speaking of love, props to one of my favorite PR people, Grace Vinton, for channeling her inner Swiftie into healthcare advocacy with a series of reflections on what has become the social media story of the week. I was excited to see a healthcare tie-in so that HIStalk wouldn’t be the only media outlet that didn’t do at least some kind of coverage.

Other captions included: “When prior auth says immediately yes;” “When there’s a telehealth option;” “When there’s a patient access quality measure;” and “When the war for patients to get full access to their own data is finally won.” I never thought I would see the day when I would add “Swiftie” to my Microsoft Word dictionary, but here we are.


Mr. H called this recent sponsorship announcement to my attention last week. I’m always leery of hospitals that spend their money on stadium-naming rights or on partnerships that seem nebulous. This one seems to be more than just name recognition, with a Mount Sinai Health System web page detailing the ways they’ll be supporting the event.

There will be a booth for player meet-and-greets, a Children’s Sports Zone for family activities, and a broad swath of Mount Sinai physicians on standby, representing specialties including orthopedic surgery, emergency medicine, sports medicine, anesthesiology, psychiatry, radiology, and urology. There are also some health and wellness videos including one on “how to prepare for a day at the US Open” and another one on “heart health and tennis.” Kudos to the health system for turning this into more than a name-on-the-wall moment. 

From Lost in the Archives: “Re: medical records requests. My hospital is being absolutely crushed by requests dating back decades, since the Radiation Exposure Compensation Act (RECA) was extended to cover hazardous exposures in St. Louis. The Department of Justice is requiring that hospitals certify all the medical records for patients to receive cancer-related compensation. Most of the records being requested have already been purged. This is a nightmare for patients and our skeleton crew in medical records.” I did a little digging to find that the legislation adds eligibility for residents in 21 ZIP codes in and around the St. Louis metropolitan area that were contaminated with uranium waste after processing that was related to Cold War efforts. The compensation program, which is administered by the Department of Justice, previously covered certain cancers for patients who lived in New Mexico and other areas that were affected by release of radiation during atmospheric nuclear tests.

I cold-called one of the academic medical centers in the area. They are putting together their own guidance for patients since the phone number for the program doesn’t work. The rep I spoke to declined to be identified, but said that the stories are “heartbreaking” and patients “just start sobbing” when told that their records have been purged. She mentioned that they are directing patients to the Missouri Cancer Registry, which started gathering data in the 1980s. I’d be interested to hear from anyone who is working there to understand how they’re managing the request volume.


OSF Healthcare is using virtual care solutions at some of its facilities in an effort to reduce emergency department wait times. Patients are screened to ensure that they are appropriate candidates for virtual services. Those who opt in receive their care in a dedicated virtual exam room. Patients can be examined by the virtual physician using electronic stethoscopes, otoscopes, and ophthalmoscopes as well as standard audio and video tools.

As someone who has worked in various emergency settings with a wide range of acuity levels, I think it makes sense to have lower-acuity patients seen virtually if doing so helps the overall staffing model while providing the same quality of care.

People often don’t realize that a fair amount of the care that goes on in the emergency department these days is really primary care. Hospitals have been caring for these patients in fast-track units for years. Unfortunately, even those units get saturated.

During the years I worked fast-track, I was usually the only physician on the unit. Patient care could have been so much more efficient if we’d had another 0.3 or 0.5 FTE physician working, but staffing half a human is hard to do. These virtual approaches allow that additional human to provide staffing to two or more facilities, which makes it more cost effective.

Have you ever had a virtual visit in the ED? Would you object if it were offered? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 8/25/25

August 25, 2025 Dr. Jayne 1 Comment

Like many practicing physicians, I use a variety of tools to research clinical questions. This might be for patients I’m seeing or for board certification questions (which thankfully allow the use of references now).

I received an email from OpenEvidence the other day that announced “a new feature purpose-built for the patient visit” to deliver real-time evidence, help draft your clinical notes, and connect with patient context. It went on to say that the tool can act like a digital assistant and add medical intelligence into notes and other documentation by “automatically surfacing the latest clinical evidence and guidelines directly within your documentation workflow.”

As one would predict, my clinical informaticist sense was tingling. I had to go check it out.

What I found was a potential compliance nightmare. I hope practice leaders are aware of the potential risks and are educating their physicians accordingly. I’ve spent enough time as a physician executive to know that many frontline physicians aren’t aware of compliance issues beyond what they see in annual HIPAA and Fraud, Waste, and Abuse training. Those only touch the surface of all things compliance.

Upon clicking the new visit button in OpenEvidence, I got a pop-up that said that the visits feature “can record patient encounters” and that it requires a “free BAA between your practice and OpenEvidence.” It asked me to input the name of my practice and then told me to “Contact your CMIO” to have my organization establish a BAA, even going as far as providing me a draft message to cut and paste to my CMIO.

If I sent that email to my CMIO, or anyone empowered to manage Business Associate Agreements on behalf of my clinical employer, I’d be laughed right out the door, especially since the preformed letter had the name of the practice wrong.

It also offered the option to say that I’m in solo practice rather than part of a corporate entity. That’s technically true for me, since I’ve maintained a legal entity over the years that would enable it should I want to use it. It gave me a one-click option to sign a five-page BAA, but you can bet that I won’t be doing that anytime soon.

I’m always skeptical when a service is free because I know money is being made one way or another behind the scenes. Unfortunately, that doesn’t keep people from just clicking and thinking that they’re good to go without fully understanding what is happening with their data.

Once I left that pop-up, I was greeted by a stealthy little pop-up below the search bar that again gave me a one-click option to accept the BAA. Based on how it looked, I can imagine that physicians might just accept it without fully understanding what they’re agreeing to in that innocuous little pop-up.

The experience made me think of other free services that may run the risk of needing a Business Associate Agreement, including Doximity. Plenty of physicians have signed up to use its free services, which include Fax and Dialer. The latter lets physicians call patients without revealing the physician’s contact information. It also allows physicians to send secure texts. 

Video testimonials on its website talk about physicians using it to share lab results or other important communications. I hadn’t thought about using that service, but it made me wonder how much physicians are really thinking about it and how they’re documenting these communications in the medical record without there being integration. It made me wonder about the potential liability risks of these services and if physicians are sacrificing accurate documentation for convenience.

Doximity also offers a GPT feature. I tried it a couple of months ago and didn’t think it was that great, so I decided to give it another go.

I asked it one of my favorite dermatology-themed board questions and found it to be utterly unhelpful, giving an answer that essentially said, “it depends.” That certainly wouldn’t be good enough to get me credit for my board certification question block, which had a very specific answer in mind. Fortunately, I had previously used a stronger reference to help manage that question, and I’m grateful that I went with that strategy rather than relying on this one.

I asked a question about electrolytes in a specific medical condition and got a much more satisfying answer, with the response nicely calling out some important details specific to the clinical scenario. Other AI tools I’ve used haven’t done that well with that particular scenario. I still wonder what the company might be doing with my data and my search history.

I don’t remember what was in the Doximity terms and conditions when I signed up. I did it many years ago for a free fax number so I could submit expense reports during a particularly annoying consulting engagement where they wouldn’t accept them in PDF format.

They were easy to find via a link located at the bottom of the screen. They were 23 pages long, so I just skimmed through them looking for interesting tidbits. One was a clause that the user agrees not to use the tools “in any way that violates or conflicts with any agreement to which you are a party, including any agreement with your employer.”

I’ve been involved in enough physician online forums to know that a good number of physicians have no idea of some of the key details in their employment agreements, such as the number of days of notice they have to provide if they’re quitting, or how their bonuses are calculated. I would be surprised if the majority of physicians know the details of clauses that might be lurking in those agreements with respect to tools such as these.

One of my favorite sentences in the agreement: “We do not guarantee the accuracy or reliability of this content and information.” That’s certainly something right there.

The agreement also clearly says that the AI tools are “for informational purposes only” and shouldn’t be “used as clinical decision support tools or for diagnosing, preventing, or treating any medical condition.”

The agreement also linked out to the company’s privacy policy, which clearly states that the company may use de-identified data and share it with third parties for purposes that include to “support commercial opportunities, generate insights and identify trends, and promote our business.” I’m no lawyer, but I’m guessing the part about commercial opportunities allows them to sell that de-identified data for whatever purpose they see fit.

Additionally, they’re clear about how they work with “commercial clients” to target physicians. Although I’m not crazy about the platform enabling marketing, it’s not like they’re hiding what they do.

I got tired of reading about two-thirds of the way through, especially since I have a pile of better things to read sitting on my nightstand and at least one novel was actively calling my name.

I’m sure that various company terms and conditions contain other interesting examples. I would be interested to hear from users on some of their favorite or least-favorite clauses.

What do you think about free services that are monetizing your information? Is everyone so used to it by now that no one cares anymore? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 8/21/25

August 21, 2025 Dr. Jayne 1 Comment


The big story of the week was the Epic User Group meeting, which sported a sci-fi theme this year. The four-day event started with the traditional welcome picnic on Sunday, with Advisory Councils and Forums on Monday.

Tuesday’s Executive Address, as one might have predicted, was full of Star Trek-style costuming. Judy Faulkner looked like she would be right at home in the Ten Forward lounge in the “Next Generation” series, sporting a lavender wig, neon glasses that coordinated with her shoes, and a sparkly vest paired with silver lamé pants.

Her Executive Address paid homage to problem list pioneer Larry Weed and included a summary of all the AI components that are already within Epic as well as the 160+ AI-powered components that are under development. She mentioned a focus on trying to keep software costs reasonable for health systems. My own health system spent a quarter of a billion dollars implementing Epic, so everyone’s mileage may vary on that definition of “reasonable.”

My favorite quote of the presentation was “Poor training leads to unhappy physicians.” I wholeheartedly agree. I’ve worked in organizations with vastly different training strategies and have seen the difference that good training makes.

The Epic team also emphasized that personalization is important in the EHR. Despite that advice, I still see organizations that try to restrict the ability of users to configure the EHR to make it easier to use. The most common reason I hear is that personalization makes it more challenging to provide support, but I’ve seen enough installations of enough EHRs over the years that I’m not buying that.

Sessions continued into Wednesday and Thursday, but word on the street was that people’s energy was flagging after Tuesday’s Starlight Dinner. The event is a major production for the Epic employees who step out of their usual roles to support attendees and make them feel welcome. I always enjoy talking to some of the folks working the logistics and food service roles and learning what they do in their usual work since UGM is an “all hands on deck” experience and people often contribute in ways that are vastly different from their day-to-day. If you have pictures or comments about this year’s UGM, feel free to send them my way.


Apple announced earlier this week that it will introduce a redesigned blood oxygen feature on certain Apple Watches, effective immediately. The change affects watches that were sold in the US after the International Trade Commission enacted an import ban stemming from patent infringement allegations by medical device maker Masimo. There may be additional legal wrangling to come based on suits and countersuits, but for now, users can enjoy an additional element in their quests for the quantified self.

Industry watchers are still trying to figure out how telehealth will ultimately fit into the healthcare delivery systems of the future, at least until another pandemic appears. Hims & Hers Health shares dropped last week following publication of details related to a Federal Trade Commission investigation. Consumers have long complained that the company makes it hard to cancel subscriptions and that some of its marketing practices push the limits of what is legal.

Regardless of legality, many of my primary care colleagues find the company’s marketing to be a bit grating, with phrases such as “telehealth for a healthy, handsome you” and a focus on so-called lifestyle medicine that leads to high numbers of subscription-based prescriptions, with nary a mention of coordinated or chronic care on the company website’s About page. The care model is largely asynchronous, which means physicians don’t perform the physical exam that is certainly indicated for some of the conditions they treat.

I ran across another article this week that looked at a potential growth area for telehealth: caring for patients who are afraid of immigration enforcement actions at healthcare facilities. A physician who was interviewed for the piece notes an increase in patients who require emergency department-level care because their families are avoiding office visits.

The piece also quotes a policy analyst who notes that this phenomenon is happening across the country in the community health center space. The National Association of Community Health Centers is hosting its Community Health Conference & Expo this week in Chicago, and I anticipate this might be a hot topic in that forum. If you work in a community health center and want to share your thoughts, feel free to reach out.

From Left My Heart in San Francisco: “Re: Providence. Did you see this article about their accusing Kaiser of shorting them on payments? I would love to see these two square off in a steel cage match.” Kaiser Foundation Health Plan Inc. is accused of underpayment, but the payer responded that the hospitals are “seeking payments above fair and reasonable levels.” This occurs when the facilities treat patients in situations where price agreements are not in place. Kaiser argued in court documents that Providence is trying to group claims from disparate facilities across broad geographies, with variable economic elements at play. Kaiser is advocating for resolution through a federal program created by the No Surprises Act in 2021, but it’s no surprise that Providence wants to have its day in court.

OpenEvidence reported that its AI model has scored 100% on the US Medical Licensing Examination (USMLE) and has achieved “super high-grade medical reasoning.” The company is offering a free explanation document that is targeted to medical students. I didn’t find the document terribly interesting. It looked a lot like the test prep books that I used to study for my own trip down USMLE lane back in the day. That’s not entirely surprising since the company’s founder previously worked for the Kaplan test preparation company.

The company offers a free AI-powered search platform to US clinicians that is made possible by its advertising relationships. I’m not super keen on having my eyeballs monetized, but will be watching to see what moves the company makes next.

I’ve been an anonymous blogger for more than a decade. As Mr. H has said, what we do is a fairly solitary pursuit. Most people in my “real life” have zero interest in healthcare IT, although I do have one ride-or-die friend outside the industry who reads regularly and gives me feedback, which is always a gift.

I’ve been asked in the past whether I’d ever want to drop the cloak of anonymity and join the ranks of medical influencers. I’m glad that I have no delusions of being TikTok or Instagram famous. I can barely remember to take my daily multivitamin, let alone be mindful of the need to constantly generate content to solicit likes. Without my trusty Outlook calendar appointments, I would probably not stay on track to send my posts to Mr. H each week. I will leave the medfluencing to the next generation.

Who is your favorite physician influencer and why? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 8/18/25

August 18, 2025 Dr. Jayne 2 Comments

As a clinician, I often have difficult conversations with parents about how to reduce the amount of time that their children spend using screen-based devices every day. Many of the parents I encounter are unwilling to limit their children’s screen time because of a perception that children who don’t have devices will be “left behind” or potentially ostracized by their peers.

I see a fair number of folks who use devices to entertain their children rather than interacting with them, which I find sad. When I walk into an exam room and see a kid poking away at a tablet while their parent sits in a heads-down position with their own phone, it makes me wonder what happens when they are not at the physician office. Ultimately, kids become dependent on devices for interaction and this can be a problem when they reach school age, when teachers spend a good chunk of time policing phone-related behavior.

As of the start of this school year, more than half of US states have passed legislation or created policies regarding the use of cell phones in K-12 classrooms. These range from requirements that school districts create guidelines of their own to outright bans. Among the reasons for such bans, lawmakers cite the need to create a distraction-free learning environment, a desire to curtail social media use, and a hope that such strategies will have a positive influence on youth mental health.

My own local district had a well-researched plan that had been created after stakeholder listening sessions with students, parents, and teachers. It was pre-empted by a maneuver at the state level that is significantly stricter. When my district was creating its policy, it used its health advisory committee to comment on the potential risks and benefits of restricting cell phone use.

Physicians raised the issue of the use of cell phones for medical reasons, including students and faculty who use apps to manage health conditions like diabetes. It’s clear from looking at some of the state laws that these kinds of needs might not have been considered by legislators. Needless to say, people aren’t happy about it, and I’m sure there will be some settling in once school starts.

With that in mind, I ran across this article that covers the topic from the youth point of view. Although it mentions the fact that devices have addictive properties, it also digs into the ways in which childhood in the US is changing. It reviews a Harris Poll survey of 500 children ages 8-12, with the majority saying they had smartphones and half of the older members of the cohort saying that social media use was common in their peer groups.

One of my favorite quotes from the piece states that, “This digital technology has given kids access to virtual worlds, where they’re allowed to roam far more freely than in the real one.” As a proud member of Gen X who had the stereotypical “come home when the streetlights come on” childhood, this resonated with me. The article notes that many children haven’t so much as gone down a grocery store aisle alone and that a good number aren’t able to play unsupervised in their own yards.

The authors note that children expressed a desire to socialize in person with minimal supervision, but due to restrictions by their parents, they instead use their phones to socialize unsupervised. Of course, there are reasons that parents have become more restrictive with their children, including fear of injury or abduction, but one of the statistics mentioned in the article is that “a child would have to be outside unsupervised for, on average, 750,000 years before being snatched by a stranger.”

It goes on to say: “Without real-world freedom, children don’t get the chance to develop competence, confidence, and the ability to solve everyday problems. Indeed, independence and unsupervised play are associated with positive mental-health outcomes.”

The authors mention the creation of parenting networks where kids are encouraged to get together for unsupervised play and community organizations that are promoting screen-free time. The deeper I got into the article, the more I wondered what tech companies think about these efforts and whether they feel that such advocacy for unstructured device-free play might ever be a threat to their respective bottom lines.

I’ve been a volunteer in youth-serving organizations for over 20 years, and I would say that any threat wouldn’t be a serious one. To get kids to put down their phones, we would likely need to see parents doing it first. On second thought, though, maybe if a TikTok influencer started telling parents it was cool to let their kids run around the neighborhood and dig holes in the yard as some of us once did, we might see a change.

I recently read the book “Klara and the Sun” by Kazuo Ishiguro. It’s a complex novel told from the point of view of Klara, who is an Artificial Friend purchased to serve as a companion to a child with a chronic illness. I won’t throw out any spoilers as to the nature of that illness, but it was an interesting read.

There are already enough ways that technology is impacting childhood, so I hope we don’t get to the point where life starts imitating the novel. On the other hand, there are some scenes in the book where the main human character is allowed to go outside to play with only the supervision of the Artificial Friend. It made me think a bit that if parents won’t let their kids explore the world alone, maybe there just might be a role for technology.

It will be interesting to see whether research published in the next couple of years shows that these cell phone limitations and bans have a positive impact on youth mental health. Mental illness is estimated to cost the US economy $282 billion annually, so we can’t afford not to study how these interventions play out.

What do you think about the role of government in limiting the use of technology for individuals, whether they’re children or adults? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 8/14/25

August 14, 2025 Dr. Jayne 1 Comment


Perplexity made an unsolicited offer to buy the Chrome browser from Google for $34.5 billion. Several people I spoke with agree with the Axios statement that it’s a great marketing play, but unlikely to actually be accepted by Google.

I’ve seen friends and colleagues move away from Google in the months since it added its AI overview feature. I’ve been back and forth with it. I had three significant hallucinations in the same day recently, and all were related to simple fact-based searches that shouldn’t have been problematic. Perplexity claims to have financing in place for the deal, but we’ll likely never know who agreed to back it.


JAMA Network Open has become one of my go-to journals for relevant research that addresses hot topics in healthcare information technology at a level that is more accessible to frontline clinicians than a journal targeted at clinical informaticists. An article this week addressed a great question: “Can a patient portal message with either a physician-created video or an infographic with a physician photograph increase end-of-season influenza vaccination rates?” The study was done at UCLA Health with 22,000 patients from 21 practices. Neither approach raised overall vaccination rates, but both increased immunization rates for children, with the video message scoring slightly higher.

There’s a lot of vaccine hesitancy in the US, and the Health and Human Services secretary’s recent approval of influenza recommendations received little press coverage. Here’s to hoping that messages from trusted physicians can help move the needle.

Another feature in the same issue looked at whether physicians made more edits to hospital course summary documents that were generated by large language models (LLMs) compared to those written by physicians. The study was small, looking at only 100 inpatient admissions to the general medical service. The authors found that the percentage of LLM-generated summaries that required edits was smaller than the percentage of physician-generated summaries. The summaries were evaluated against a quality standard, with the authors concluding that since the LLM-generated documents needed fewer edits, they were of higher quality than those created by physicians.

I found the study design particularly interesting on this one. The hospital course summaries were randomly assigned to one of 10 internal medicine residents. They had three minutes to review each pair of summaries and edit them for quality purposes. The output of those editing steps was then reviewed and scored for quality by an attending hospitalist physician.

The authors controlled for document length by using a “percentage edited” score and also looked at how much the meaning of the original summary was altered. The authors noted that while the LLM-generated summaries required less editing and may have been “comparably or more complete, concise, and cohesive,” they also “contained more confabulations.” They noted that the artificial time constraints may have influenced the result. The study overall supports the idea that using LLMs to help complete this task could be of value.
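For readers curious what a length-controlled “percentage edited” score might look like in practice, here is a minimal sketch using Python’s standard `difflib`. This is an illustrative proxy based on character-level similarity, not the study’s actual metric.

```python
import difflib

def percent_edited(original: str, edited: str) -> float:
    """Estimate how much of a summary changed during editing.

    Returns 0.0 for an untouched document and approaches 100.0 for a
    complete rewrite. Because it is a ratio, it is insensitive to raw
    document length, which is the point of a "percentage edited" score.
    """
    similarity = difflib.SequenceMatcher(None, original, edited).ratio()
    return (1.0 - similarity) * 100.0

draft = "Patient admitted with pneumonia, treated with antibiotics."
final = "Patient admitted with community-acquired pneumonia, treated with IV antibiotics."
print(f"{percent_edited(draft, final):.1f}% edited")
```

A comparison between LLM-generated and physician-generated drafts would then reduce to comparing the distributions of these scores across the two groups.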

OpenAI has been trumpeting the release of its GPT-5 model, saying it does a better job with medical questions than its predecessor, but users have been clamoring for an option to return to the previous model. The majority of complaints are around system speed and increased errors. Others took issue with the fact that the new model was rolled out without notice, leading CEO Sam Altman to admit that “suddenly deprecating old models that users depended on in their workflows was a mistake.”

Those of us who have been in the healthcare IT trenches for years understand the value of adequate change management and communication strategies, so I was surprised to learn that the company thought it would be no big deal to just hot-swap the models. If they’re looking for a change management sensei, I might know a girl. Another great quote from Altman: “the autoswitcher broke and was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber.” Something to ponder for all the folks relying on these technologies. Sounds like they may need a testing advisor as well.

One of my favorite colleagues from residency was in town the other day, doing college visits with one of her children. Her family is going through additional challenges in the college hunt as they evaluate the medical and support resources available to help students manage chronic health conditions in their first few months away from their families. My friend is a brilliant physician who has worked in environments from academic to military to rural health, so she has seen it all.

One of her concerns was the sheer number of communications she receives from her child’s care team: “Seriously, I think I got 15 reminders and a survey, I don’t want to have this kind of a relationship. I already replied, so why are we still having this conversation?” She’s worried that when her child is on her own and receiving all those reminders and messages that they will cause anxiety, which is certainly valid.

Props to health organizations who allow patients to customize reminders and communications. I personally just need one reminder three days out and that’s all. My dentist sends a reminder at 10 days, seven days, three days, one day, and then hourly until you arrive. They claim they can’t adjust it. I’m not sure I’m buying that, but I’m not well versed in dental platforms.


Dr. Nick van Terheyden reached out to let me know that the Lown Institute is accepting nominations for its annual Shkreli Awards, named after notorious “pharma bro” Martin Shkreli. The awards are given “to perpetrators of the ten most egregious examples of profiteering and dysfunction in healthcare.” Previous winners have done such things as: selling the body parts of the deceased without notifying the next of kin, defrauding Medicare by submitting claims on behalf of patients who never received services, and bankrupting community hospitals while living a lavish lifestyle.

What’s the most egregious thing you’ve seen lately in healthcare, regardless of whether it’s worthy of a Shkreli award? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 8/11/25

August 11, 2025 Dr. Jayne 1 Comment


I was intrigued by Mr. H’s mention last week of the Mass General Brigham FaceAge AI tool that can estimate age from facial photos. Researchers found that patients with cancer appeared older than their stated age. The older they looked, the lower their odds of survival.

Although physicians have historically used visual assessments to predict potential outcomes, the tool uses face feature extraction to estimate a user’s biological age based on their photo. An article describing the tool was recently published in The Lancet Digital Health if you’re interested in all the details.

This item, like many things that Mr. H mentions, got me thinking. I found a couple of sites that host biological age calculators and completed the relevant surveys to get a couple of results. Some of them were more specific, asking for various lab values. Fortunately, I had results for all of the requested lab values and even some of the exercise performance measures that were included on one of the questionnaires. I also found a tool that is very similar to FaceAge, although not the exact one used in the study, and snapped my selfie.

The survey-based calculators estimated my biological age as anywhere from 4.6 to 9 years below my actual age. The facial photo tool thought that I was more than 10 years younger. I suppose my liberal use of sunscreen and hats is paying off, since my facial wrinkles were scored as 2 out of a possible 100 points. I also did well on the “undereye” measure, although I admit that my photo was taken when I was well rested. I’m sure it would not have scored as well had it been taken after a shift in the emergency department.

I don’t look at a lot of high-resolution pictures of my face, and when I received my score report with a full-screen image of my face right in front of me, I was somewhat surprised that you can still see some artifacts from years of wearing an N95 mask while seeing patients. I’m guessing that when I look in the mirror my brain somewhat processes that out, so it was a little startling.

I’d be interested to see how I would score on a medical-grade tool such as the one mentioned in the article. Although it was a fun exercise to complete the different surveys and see where I stand, none of the recommendations provided alongside the results of any of the tools were different from what I usually hear during my primary care preventive visits: keep moving, eat as healthy as possible, and watch out for the rogue genes you’re carrying around.

I would be interested to hear others’ experiences with similar tools and whether they have motivated you to do anything different from a lifestyle perspective.


Mr. H also recently mentioned efforts by NASA and Google to develop a proof-of-concept AI-powered “Crew Medical Officer Digital Assistant” (CMO-DA) to support astronauts on long space missions. As a Star Trek devotee, I couldn’t help but think of the Emergency Medical Hologram from “Star Trek: Voyager.”

The project is using Google Cloud’s Vertex AI environment and has been used to run three scenarios: an ankle injury, flank pain, and ear pain. The TechCrunch article noted that “a trio of physicians, one being an astronaut, graded the assistant’s performance across the initial evaluation, history-taking, clinical reasoning, and treatment.” A particular astronaut/physician came to mind when I read that, and if there’s a hologram to be created, I’m sure other space fangirls out there would find him an acceptable model.

The reviewers found the model to have a 74% likelihood of correctness for the flank pain scenario, 80% for ear pain, and 88% for the ankle injury. I’m not sure what the numbers are like for human physicians in aggregate, but I’m fairly certain I’ve had a higher accuracy rate for those conditions since they’re common in the urgent and emergency care space. However, NASA notes that they hope to tune the model to be “situationally aware” for space-specific elements, including microgravity. I would hazard a guess that most physicians, except for those with aerospace certifications, don’t have a lot of knowledge on that or other extraterrestrial factors.

The article links out to a NASA slide deck. Since I do love a good NASA presentation, I had to check it out. I was excited to see that there is a set of “NASA Trustworthy AI Principles” that address some key factors that are sometimes lacking in the systems I encounter. The principles address accountable management of AI systems, privacy, safety, and the importance of having humans in the loop to “monitor and guide machine learning processes.” They note that “AI system risk tradeoffs must be considered when determining benefit of use.” I see a lot of organizations choosing AI solutions just for the sake of “doing AI” and not really considering the impacts of those systems, so that one in particular resonated with me.

Another principle that resonated with this former bioethics student was that of beneficence, specifically that trustworthy AI should be inclusive, advance equity, and protect privacy while minimizing biases and supporting “the wellbeing of the environment and persons present and future.” Prevention of bias and discrimination, prevention of covert manipulation, and scientific rigor are also addressed in the principles as is the idea that there must be transparency in “design, development, deployment, and functioning, especially regarding personal data use.” I wish there were more organizations out there willing to adopt a set of AI principles like this, but given the commercial nature of most AI efforts, I can understand why these ideals might be pushed to the side.

In addition to the CMO-DA project, three other projects are in the works: a Clinical Finding Form (CliFF), Mission Control Central (MCC) Flight Surgeon Emergency Procedures, and a collaboration with UpToDate. I love a catchy acronym and “CliFF” certainly fits the bill.

I recently finished the novel “Atmosphere” by Taylor Jenkins Reid. If you are curious about the emergency procedures that a mission control flight surgeon might need to have at their fingertips, the book does not disappoint.

The deck goes on to discuss the evolution of large language models, retrieval-augmented generation, and prompt engineering within the context of the greater NASA project. The deck specifically notes that any solution must be on-premise, which is particularly important given the communications blackouts that are inherent in space travel.

There are more details in the deck about the specific AI approach and the scenarios. I particularly enjoyed learning about “abdominal palpation in microgravity” and the need to make sure that the patient is secured to the examination table to prevent floating away. I also learned that “due to the microgravity environment, the patient’s abdominal contents may shift,” which got me wondering exactly how many organs were subject to shifting since many of them are fairly well-anchored by blood vessels and other not-so-stretchy structures.

The deck listed the three physician personas who scored the scenarios. Based on physician specialty, it’s likely that my favorite astronaut wasn’t one of them, but I was happy to see that an obstetrician / gynecologist was included.

Apparently there was a live demonstration of the CMO-DA at the meeting for which the presentation deck was created, so if anyone has connections at NASA, I know of at least one clinical informaticist that would love to see it. I’ll definitely be setting up some online alerts for some of these topics and following closely as the tools evolve.

Did you ever dream of being an astronaut, and what ultimately sidelined you from that career? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 8/7/25

August 7, 2025 Dr. Jayne 1 Comment

One of the hot topics around the virtual physician lounge this week was the opening of the Alice L. Walton School of Medicine in Bentonville, Arkansas. The school is named after its founder, who is an heir to the Walmart fortune.

The initial class of 48 students will be trained in a curriculum that is based on preventive care and a whole-health philosophy. The school is located on Walton family property and borders the Crystal Bridges Museum of American Art, which should provide an excellent diversion when students need time away from studying. Apparently the curriculum also includes a course that incorporates art appreciation as a way of encouraging observational skills and empathy.

Students are expected to perform community service as a way of better understanding those in their care. Other ways the curriculum differs from the standard include a focus on nutrition education, including cooking classes with teach-back sessions to patients, and time spent gardening and working on a teaching farm.

Tuition for the first five graduating classes will be covered by Mrs. Walton, who hopes that graduates will consider practicing in underserved areas. There are certainly some opportunities for service in Arkansas, which has some of the poorest health outcomes in the US.

The lure of free tuition is strong, but students are taking a bit of a gamble attending a school that does not yet have a track record for residency placements or a broad alumni network. Still, the school received over 2,000 applications for the class. Best wishes to these new students, and I look forward to seeing how the curriculum is implemented as the inaugural class progresses.

Another hot topic was a recent JAMA op-ed piece that is titled “When Patients Arrive With Answers.” It covers the evolution from patients arriving with newspaper clippings to bringing in printed results of internet searches and now arriving with AI-generated materials to discuss with their physicians.

One of my colleagues focused on a line in the piece about tools like ChatGPT: “Their confidence implies confidence.” This led to a discussion of hallucinations that we have encountered using AI solutions, even in situations where simple fact-based questions are being posed. The author notes that they are now “explaining concepts like overdiagnosis, false positives, or other risks of unnecessary testing.”

That comment resonated with my colleagues. One noted that she feels that AI is worsening the burnout problem in her primary care practice. She must regularly defend her recommendations against AI-generated suggestions, as well as misinformation that is being provided by TikTok influencers. The author recognizes this, and notes that explaining evidence-based recommendations in contrast with patient requests isn’t a new phenomenon and encourages physicians to “meet them with patience and curiosity.” Given the tight schedules that most physicians face, I’m not sure that’s realistic.

Keeping with the theme of AI, I enjoyed this JAMA Editor’s Note on “Can AI Improve the Cost-Effectiveness of 3D Total-Body Photography?” As someone who has had entirely too many skin biopsies, this immediately caught my attention.

The authors specifically address the idea of photography for patients who are at high risk for melanoma, citing a recent randomized clinical trial published in JAMA Dermatology. The study found that although the intervention resulted in more biopsies, it didn’t increase the number of melanomas that were identified.

Another study that was also published in JAMA Dermatology looked specifically at whether 3D total-body photography is cost-effective. It found that it wasn’t, but posed the idea that with AI enhancements, it could become more financially feasible. For patients who need regular monitoring, however, I guess we’ll just have to stick with “usual care.”

I used a non-medical AI tool this week to help address a question that a family friend posed. When you’re a primary care physician, everyone assumes you know about all facets of medicine. I’m constantly getting questions about radiology reports or lab results because people “don’t want to bother the doctor.” I still find it strange that they’d rather expose their protected health information to someone they don’t know well, who is merely the daughter of a friend, but that’s often how it goes.

I was curious what the patient would have seen had they decided to just use Google or any of the AI tools out there. In this case, both Google and Copilot did a great job explaining what “pleural-based opacity” means, giving answers that were similar to my own.

The primary difference between the human answer and the AI-generated one was in the follow-up. Where I said that the patient should follow up with the ordering physician to understand what the term means in the context of their clinical picture, both sources recommended further investigation, which most patients would interpret as needing additional testing.

I wasn’t as patient with another person who reached out for medical advice. Someone I hadn’t seen since high school decided it was a great time to message me via Facebook and ask about various medications versus injections versus surgery for back pain. I have to admit that I took the easy way out by saying “so many factors play into the choice of treatments and it really depends on the patient,” which was as empathetic as I could get at the time.

A few days later, I plugged it into Google to see what it would provide. It did an exhaustive review of the different options and closed with this: “Important note: The choice of treatment depends on the specific nature and severity of the herniated disc, as well as individual patient factors and preferences. It’s crucial to consult with a doctor or pain specialist to determine the most appropriate course of action for your situation.” At least in this situation, I agree 100% with the Google. 

Are you a clinician who has to field medical questions from people who are not your patients? Have you considered outsourcing your advice to AI, especially if it’s outside of your typical scope of practice? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 8/4/25

August 4, 2025 Dr. Jayne Comments Off on Curbside Consult with Dr. Jayne 8/4/25

I recently had the opportunity to spend some time with a computer engineering student who was looking to learn about healthcare information technology. Specifically, he was curious about the role that clinicians play in the field.

We had some great conversations and the experience was very enjoyable, in large part because few of the discussions centered on AI. He has a particular interest in cybersecurity, so our initial conversations had some fairly deep coverage of the topic. He was interested in learning more about how hospitals and health systems handle the backup and recovery process, particularly when a security incident might have occurred. Based on a couple of his comments, I think I surprised him by being able to provide a deeper discussion of the topic than he expected to hear from a physician. 

It was a good opportunity to explain the field of clinical informatics and how many types of roles we fill. I’m unusual in how much experience I’ve had with infrastructure, architecture, and the nuts and bolts of interoperability. I’ve been fortunate to work with some great engineering and development teams throughout my career, picking up some interesting and unique knowledge along the way. I never thought I’d be able to have conversations about Citrix load balancing or be able to explain the role of transaction log shipping as part of a disaster recovery solution, but you never know where your career is going to take you.

In large part, I learned about those things not because I necessarily wanted to, but because I had to. The first EHR project I was involved in did not go well. A lot of IT folks were techsplaining, which didn’t help me solve the problems that were interfering with my ability to deliver high-quality care.

Although I think that many of them were just talking in their everyday language — similar to how physicians talk among themselves, without trying to leave me out of the conversation — I experienced more than one situation where an IT staff member was treating me in a way that was equivalent to patting me on the head and saying, “Don’t worry about this, little lady.”

After one of those encounters, I decided that I would need to hold my own, so I started doing a lot of reading. I figured if I could learn biochemistry and the complexities of the human nervous system I could certainly learn some of this new language and how all the technology was supposed to be working compared to how it was actually performing in the field.

Thinking about how information access has changed, learning about those domains would be a lot easier now than back in the days when only 5% of physicians were using electronic health records. You couldn’t just pop into your web browser and find articles about implementing systems in hospitals, because we were just getting started. Meaningful Use wasn’t yet a thing, and those of us that were trying to bring up systems were doing it because we thought we could revolutionize patient care, not because someone was making us do it.

Hospitals had electronic laboratory and monitoring systems and of course billing, but computerized order entry wasn’t even on the radar of physicians. Heck, we couldn’t even print patient labels from the computer system at one of my hospitals. They were still using Addressograph cards to add patient information to the paper used for writing daily progress notes.

We went down the internet rabbit hole as I was trying to explain that piece of equipment to my student. I wish I had a picture of the look on his face when I explained how a similar technology was once used to process credit cards at businesses. Apparently you can buy a vintage credit card imprinter machine via various online resale sites, for those of you who miss the very specific noise made when the charge card was pressed under the carbon paper.

That led to a good conversation around the idea that 40 years ago, we had no frame of reference for the technologies that we would be using today. No one would have guessed that we could simply tap our credit cards on a machine to pay, let alone load that credit card information into a palm-sized phone and use it to pay as well. I can’t even imagine how things will work in 40 years, and I hope that when he’s later in his career, he will have the experience of being able to share stories of how things used to be with someone who is just starting out.

We also had some interesting conversations about healthcare in general, and particularly around healthcare finance and how the revenue cycle works. In my opinion, it’s one of the messier aspects of the US healthcare system, and opportunities exist to make it better.

We had a good conversation around how claim adjudication works and why it’s rare in our area to see an organization that is doing real-time claims adjudication. Some of the practices that I go to don’t even collect your co-pay during the office visit, so I can’t imagine what a big shock it would be to use a system like that.

I also ended up teaching him how to read an Explanation of Benefits statement, which I think was an eye-opener, especially for someone who doesn’t have a lot of patient-side experience in his relatively brief adulthood.

I enjoyed learning about some of the non-healthcare work that the engineering student has done as he works towards his degree, as well as the supplemental activities available to students now that didn’t exist when I was in school. His school has competitive rocketry, drone, and Mars rover teams where students can apply what they’re learning as early as the first semester. We had to wait until our junior year to really have experiential learning opportunities, and they certainly weren’t as cool as any of those.

Although I tried to bring healthcare and healthcare technology to life, I’m not sure it’s going to be as cool as some of the other career options that will undoubtedly be available to him, especially if he’s leaning towards cybersecurity and cryptography. He’ll be back next week, and I plan to cover topics including robotics, prosthetics, and human-computer interaction. I might still be able to convince him that healthcare can be cool.

What do you think are the coolest technologies we’re using in healthcare, beyond AI? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 7/31/25

July 31, 2025 Dr. Jayne Comments Off on EPtalk by Dr. Jayne 7/31/25


There was some good discussion around the virtual physician lounge this week as one of my colleagues shared a recent article in Nature Scientific Reports about using AI to diagnose autism spectrum disorder and attention-deficit / hyperactivity disorder in children and adolescents.

Diagnosing these conditions can be challenging for primary care physicians who have limited time with patients and for parents who might wait months for their child to receive an appropriate assessment. In my city, the wait for a non-urgent assessment by a child and adolescent psychiatrist can be up to a year. Delayed diagnosis leads to delays in care.

The study still needs refinement, but preliminary results show that a sensor-based tool can suggest a diagnosis in under 15 minutes with up to 70% accuracy. The researchers began with a hypothesis that diagnostic clues can be identified in patients’ movements that are not perceptible to human observers, but can be detected by high-definition sensors. The authors catalogued movement among neurotypical subjects and those with neurodevelopmental disorders to inform a deep learning model. The movements were tracked by having the subjects wear a sensor-embedded glove while interacting with a target on a touch screen. The sensors collected movement variables such as pitch, yaw, and roll as well as linear acceleration and angular velocity.
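To make the pipeline concrete, here is a minimal sketch of how raw glove sensor readings might be summarized into features for a classifier. The channel names, summary statistics, and simulated data are my own assumptions for illustration and are not the study’s actual feature set or model.

```python
from statistics import mean, stdev

def movement_features(samples):
    """Collapse a stream of glove sensor readings into per-channel
    summary statistics that a downstream model could consume.

    Each sample is a dict with pitch, yaw, and roll (degrees), plus
    linear_accel (m/s^2) and angular_velocity (deg/s), mirroring the
    variables described in the paper. The mean/sd summaries here are
    an illustrative stand-in for the deep learning model's inputs.
    """
    features = {}
    for channel in ("pitch", "yaw", "roll", "linear_accel", "angular_velocity"):
        values = [s[channel] for s in samples]
        features[f"{channel}_mean"] = mean(values)
        features[f"{channel}_sd"] = stdev(values)
    return features

# Simulated readings from one reach-toward-target movement.
readings = [
    {"pitch": 10 + i, "yaw": 5, "roll": -2,
     "linear_accel": 0.5 * i, "angular_velocity": 30 - i}
    for i in range(20)
]
print(movement_features(readings))
```

In the study itself, the researchers fed the movement data to a deep learning model rather than hand-built summaries, but the idea is the same: subtle, sub-perceptual movement patterns become numeric inputs that can separate groups.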

I admit I was having flashbacks to some of my physics coursework as I read the paper, but it still kept my attention. The authors plan to continue validating the model in other settings, such as schools and clinics, and to validate it over time. The study has some limitations, namely its size. It had only 109 participants and some of those had to be excluded from the final analysis for reasons including inability to complete the exercise, motor disabilities, or problems with the sensors.

The participants were also a bit older than the typical age when diagnosis occurs, which could limit its broad applicability. Still, the ability to detect condition-related markers in an objective way, as opposed to having to use behavioral observations, would be a big step forward, especially if the study can be powered to significantly increase the sensitivity and specificity of the model.


Quite a bit of conversation occurred around a recent meta-analysis that looked at the number of steps adults should take in a day. Most of the patient-facing clinicians I know don’t have trouble getting their steps in on regular workdays, although some specialties have a fair amount of seated time, such as anesthesiology and pathology. A couple of folks I know are obsessed with getting a minimum of 10,000 steps each day, however, a target that the recent article suggests is less important than commonly believed.

The authors looked at studies published since 2014 and concluded that individuals who got between 5,000 and 7,000 steps per day had a significant risk reduction for cardiovascular disease, dementia, and falls as well as all-cause mortality.

That’s not to say there’s a downside to getting 10,000 steps a day, but no clear evidence supports that specific number across the board. That’s good news for those of us on the IT side of the house who might spend less time ambulating than we’d like.


While we’re at it with our virtual Journal Club, another study that caught my eye this week looked at the benefits of the four-day work week. The authors looked at 141 companies that allowed employees to reduce workdays without a corresponding change in pay and found that the practice decreased employee fatigue, reduced burnout, increased job satisfaction, and improved efficiency compared to 12 control companies.

The process wasn’t as simple as just trimming days, however. Companies had to commit to some level of reorganization beforehand, focusing on efforts to build efficiency and collaboration prior to embarking on the six-month trial. There were 2,896 employees involved across companies in the US, UK, Australia, Canada, Ireland, and New Zealand.

I’ve worked with a couple of vendors that have instituted this practice, and their employees seemed satisfied with it. I enjoyed living vicariously through the account reps who used their long weekends for camping and backpacking.

One of the companies sold a patient-facing technology with 24×7 support, so extra coordination was involved to ensure that those workers had adequate days off even though the rest of the company was closed on Fridays. I’ve also seen some healthcare organizations do this with their management teams, although it doesn’t seem that big of a stretch when the organizations already had hundreds of workers whose routine schedules involved three 12-hour shifts and leaders were already used to providing management coverage 24×7.

From Yes, Chef: “Re: this week’s Morbidity and Mortality Weekly Report. I would have loved to have been part of the public health informatics team crunching that data.” The report details an incident that involved a pizza restaurant not far from Madison, WI last October. Apparently 85 people experienced THC intoxication after eating from the restaurant, which shared kitchen space with a state-licensed vendor that produces THC edibles. When the pizza makers ran out of oil, they used some from the shared kitchen, unknowingly putting some “special sauce” into their dough. Public health informatics is one of my favorite subdisciplines of clinical informatics, so here’s a shout-out to all the disease detectives out there who solve mysteries like this one every day.


I’m trying to slow the volume of emails hitting my inbox, and HLTH seems to be one of the biggest offenders. The organization has been averaging three emails a day over the last month, and attempting to manage my preferences hasn’t made a difference. Before clicking delete, I looked at the registration options for this year’s conference: $2,995 now, rising to $4,100 next week.

I get that it’s an all-inclusive registration and includes two meals on most days, but it’s still a large amount to ask companies to spend on top of travel and lodging. For the average consulting CMIO, unless I can get some good meetings scheduled, the price isn’t worth it. Of course, media and influencers can apply to attend for free, but that’s hard to do when one is an anonymous blogger.

If you’re experiencing an overloaded inbox, who is the biggest offender? Have you found unsubscribing helpful or do you have other strategies to share? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 7/28/25

July 28, 2025 Dr. Jayne 5 Comments


Several people have asked for my opinion about Bee, which Amazon is acquiring. The company makes the Pioneer, a wearable that records and transcribes your day. It captures not only what you say, but also the conversations of those around you. It tries to entice users by providing summaries of the day, reminders, and other suggestions from within its companion app.

Unsurprisingly, the solution also requests permission to all of the user’s info, including email, contacts, location services, reminders, photos, and calendars in an attempt to create “insights” as well as a history of the user’s activities.

The device costs $50, a fee that can be avoided by using the Apple Watch app, plus a $19 per month subscription. The solution uses a mix of large language models to operate, including ChatGPT and Gemini.

A quick visit to my favorite search engine pulled up a number of pages that mention the device. Some reports say that it isn’t able to differentiate between the wearer’s conversations and what they were watching on TV or listening to on the radio.

I wasn’t surprised at all to hear that significant privacy concerns have been expressed. The company keeps transcripts of user data, although it doesn’t store the audio. I laughed out loud when I read a quote from an Amazon spokesperson who said that Amazon “cares deeply” about user privacy and plans to give users more control over how their data is used after acquiring the startup.

Along with anyone who has had to go through multiple levels of annoying menus (that seem to change regularly) while trying to rein in their Alexa device, I’m not buying it. Although Amazon claims to not sell customer data to third parties, it has plenty of uses for it in-house. Anyone who visits Amazon can see how targeted marketing winds up in different places.

Putting on my end user hat, I have to say this is one of the more ridiculous tools, offerings, or solutions that I’ve seen. However, there must be a huge number of people who disagree with me, because if it weren’t a potential moneymaker, I don’t think Amazon would be acquiring it.

What if the user is located in a two-party consent state and is now recording conversations without notifying the other parties? I found a funny video about the device in which Wall Street Journal reporter Joanna Stern said it “turns you into a walking wiretap.” She also asked the device to analyze her use of swear words over the course of the month and shared her statistics in an entertaining recap.

The company’s website plays a pretty mean game of buzzword bingo. Examples: “turns your moments into meaning” and “learns and grows with you” as it “sits quietly in the background, learning your patterns, preferences and relationships over time, building a deeper understanding of your world without demanding your attention.”

The website shows an example of a user and their team “discussing ideas for the next product release.” That’s right, you can wear it to the workplace and have it collect all the company’s intellectual property over the course of the business day. I’m betting that most companies’ employee handbooks don’t have language that addresses this. If I were in the corporate compliance department of any company with employees, I’d be sending out a memo ASAP.

The website also gives examples of how the device and its app can dispense parenting advice and manage issues such as “dealing with resistance to potty training and handling emotional outbursts.” I’m sure that pediatricians and family physicians will be thrilled to review the device’s recommendations at well-child visits (sarcasm intended) along with everything else they need to cover.

The website also had the device’s terms and conditions, which were 10 printed pages long. Here are some of my favorite highlights:

  • By accessing the device, you agree that you have read, understood, and agree to be bound by all the terms, which can be unilaterally altered at any time and for any reason. The company will alert users simply by updating the “last updated” date on the terms page, and users “waive any right to receive specific notice of each such change” and accept the “responsibility to periodically review these Legal Terms to stay informed of the updates.”
  • Bee specifically calls out in the second paragraph that it offers no HIPAA protection.
  • The user accepts the responsibility to be compliant with any applicable laws or regulations and agrees to terms regarding the collection of data with respect to minors.
  • Users are prevented from disparaging the company or its services.
  • Users agree not to use information obtained “in order to harass, abuse, or harm another person.”
  • Users agree not to “harass, annoy, intimidate, or threaten any of our employees or agents engaged in providing any portion of the Services to you.” The use of the word “annoy” caught my attention, since I can’t imagine an employee engaged in customer service or support who doesn’t find at least some percentage of the users with whom they interact to be annoying.

I found some user comments on Reddit and the following phrases were some of my favorites:

  • I made the mistake of using the app to retrain my voice, and since then it doesn’t think I’m EVER talking, everything I say is recorded as “unknown”. So instead of thinking other people were me, now I’m not even me.
  • While the little convo summaries are often amusing, now I am trying to figure out how this thing is supposed to be useful.
  • Users accused the system of “trying to gaslight me.” Some of us get enough of that in our daily lives, so we don’t need an AI tool to contribute as well.

The website says the device is sold out, although the company is taking back orders and plans to ship new units by September. That means either their marketing team is trying to create some FOMO (fear of missing out) or that lots of people are ready to take the plunge, privacy be damned.

What do you think about the Bee Pioneer? Would you consider wearing one? Are you taking steps to specifically ban it and similar devices and applications from your workplace? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 7/24/25

July 24, 2025 Dr. Jayne Comments Off on EPtalk by Dr. Jayne 7/24/25


JAMA Network Open recently published an Original Investigation titled “Patient Care Technology Disruptions Associated With the CrowdStrike Outage.” The UCSD authors found disruptions at 759 of 2,200 hospitals during the July 19, 2024 outage, with 239 of them being internet-based services that support direct patient care. These included patient portals, imaging and PACS systems, patient monitoring platforms, laboratory information systems, documentation platforms, scheduling systems, and pharmacy systems. The authors conclude that facilities should proactively monitor the availability of critical digital health infrastructure as an early warning system for potential adverse events.
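The authors’ recommendation to proactively monitor the availability of critical digital health infrastructure could be as simple as periodically polling service health endpoints and flagging any that stop responding. This is only a minimal sketch of the idea; the service names, URLs, and timeout are hypothetical assumptions, not anything from the paper.

```python
import urllib.request

# Hypothetical inventory of critical internet-based services that support
# direct patient care. URLs are illustrative placeholders.
CRITICAL_SERVICES = {
    "patient_portal": "https://portal.example-hospital.org/health",
    "lab_results": "https://lis.example-hospital.org/health",
}


def check_service(url, timeout=5):
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        # DNS failures, connection refusals, and timeouts all count as down.
        return False


def outage_report(services):
    """Map each service name to its current availability (True/False)."""
    return {name: check_service(url) for name, url in services.items()}
```

In practice, a scheduler would run `outage_report` every few minutes and alert operations staff when a service flips to unavailable, which is the "early warning system" role the authors describe.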

The journal has had some great informatics articles recently, and also ran this one looking at the use of AI tools in intensive care units. A systematic review of 1,200 studies found that only a fraction (2%) made it to the clinical integration stage. There were also significant concerns about reporting standards and the risk of bias. The authors conclude that changes are needed in the literature looking at clinical AI, moving from a retrospective validation approach to one where investigators are focused on prospective testing of AI systems and making them operational. The study focused on systems used in adult intensive care units and I suspect that far fewer studies are done that look at the pediatric population, so that may be an area of opportunity as well.

From Savannah Banana: “Re: stadium naming rights. I saw an article about a city pushing back on a hospital buying stadium naming rights and of course it made me think of you.” Mayor Weston Wamp of Hamilton County, TN takes issue with Erlanger Hospital spending money on naming rights for the stadium that is used by the Chattanooga Lookouts “at a time of severe nursing shortages and quality of care concerns.” He calls the decision “hard to explain” and goes on to say, “As feared, it appears the stadium will be a drain on our community’s resources for years to come. Before I was elected, the Lookouts convinced city leaders to give the team all revenue from naming rights on this publicly owned facility. Now, in a sad twist, our local safety net hospital will be footing the bill for the Lookouts’ $1 million annual lease payment.”

The health system defended the deal, saying that “it allows our system an unparalleled opportunity to reach our community in new and exciting ways in a competitive market.” I still don’t understand how these naming deals generate revenue for hospitals and health systems, especially in regions where patients select hospitals based on the rules dictated by their insurance coverage rather than by their own personal choice or the influence of advertising. If some of our readers have insight, feel free to educate me.

Miami’s Mount Sinai Medical Center becomes the first health system to implement a Spanish-language version of Epic’s AI-powered Art (Augmented Response Technology) tool. Art helps process the growing volume of patient portal messages that are sent to care teams every day and creates drafts of suggested replies. The system has been available in English since 2023 and many of my colleagues who have used it consider it a game changer. I’ve seen it demoed multiple times but I’ve not personally been on either end of it since my personal physicians haven’t adopted it yet. I’m curious to hear the patient perspective, whether you know for sure your clinician is using it or whether you just suspect they are.


People are talking about Doximity’s free GPT. I tried it once a while back, but I can’t remember if I was impressed by it. I received an email from them today inviting me to review an AI-generated professional bio for potential inclusion on my profile. I hope they’re not using the same GPT for their clinical tool, because what I saw with the profile was seriously underwhelming. It pulled the wrong name for the hospital where I completed residency, which it said was “preceding” my graduation from medical school. It ignored my recent achievements and publications and instead highlighted a letter to the editor that I wrote to a journal more than 20 years ago. I clicked the “don’t add” button on the entire thing. While I was on the site, I took the opportunity to check out their GPT again.

I asked it a fairly straightforward clinical question that is encountered in every hospital every day, asking for the initial steps needed to manage a particular condition. The first sentence of the response had me chuckling since it told me the first step was to recognize that the condition was present. Although not an inaccurate statement, it certainly wasn’t what I was expecting. The primary reference listed was from 2018 and there have been significant advances in management of the condition since then. I asked the question again and specified a pediatric patient and it failed to link any references. Based on those factors, I can say that I’m officially underwhelmed.


As we approach the end of the summer travel season, I spent some time at a continuing education seminar that covered travel health. As one would expect, a lot of the content covered vaccinations and other forms of prevention, as well as a review of the most common diseases. As someone who focuses primarily on clinical informatics these days, I admit I wasn’t current on the status of some of the longer-known diseases, but I held my own in the discussions of those that have appeared more recently. Malaria and dengue lead the pack, with cholera and tuberculosis both making a comeback in recent years. Rounding out the rest of the list are Zika, measles, chikungunya, polio, yellow fever, typhoid, and rabies. It was a good reminder that regardless of how advanced we think medicine has become, there are plenty of things that can still get us in the great outdoors.

Have you ever had a travel medicine consultation prior to a trip? Did you find it valuable? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 7/21/25

July 21, 2025 Dr. Jayne 1 Comment

Mr. H is running a poll that asks, “Is it ethical for doctors to prescribe the drugs of their pharma sponsors to people who seek specific treatments?” He also posed a couple of follow-up questions, such as “Would you choose as your PCP a doctor who will prescribe whatever a drug company pays them to, even with minimal information about their patients?” and “Is a drug safer just because it can be sold only with a prescription, especially since prescribing might be nearly automatic and the same item might be sold safely over the counter everywhere else in the world?”

I like the response choices that Mr. H included in the poll. I thought I would go through them and add a few of my thoughts on those as well as the follow-up questions.

No. The patient should see their regular doctor. As a primary care physician, I agree with this one in my heart. Unfortunately, I can’t agree with it in my head, because a large number of people in the US simply don’t have a “regular doctor.”

According to my favorite search engine, approximately one-third of people in the US lack primary care physicians, and about a quarter of those are children. Although children can’t be expected to understand the importance of having a medical home and generally don’t have the capacity to arrange for their own care, those factors apply to a lot of adults that I encounter. Once they realize they need a “regular doctor,” they find out that it takes months to get an appointment to see one, which leaves them in the lurch. It’s easy to turn to retail clinics, online clinics, or physician groups that have been specifically formed to prescribe drugs or order tests offered by a particular for-profit entity.

No, unless they review the patient’s medical records. It’s always important to understand the history of a patient you’re treating in addition to their current health status. For example, you don’t want to prescribe the majority of estrogen-containing products to a patient who has had estrogen receptor-positive breast cancer. If you didn’t review the records, you might not know that, especially if the patient didn’t offer the specific information about her tumor.

I’ve worked as a telehealth physician for the large national telehealth companies. Most of the time in those situations, you don’t have the patient’s records. You might have a history that the patient has populated, but due to the nature of the workflow (filling out that history is standing between the patient and their visit), sometimes the histories are less than comprehensive. Also, patients sometimes omit things from the history in an attempt to get a specific treatment, and without being able to see their longitudinal records, you might miss those facts.

No. It drives costs up for everyone. This response is currently scoring rather low, but it’s an important one. Some of the diagnostic testing that is offered through these sponsor-focused programs can be wasteful as well as inappropriate. There’s a reason that screening tests have to go through a rigorous review in order to be formally recommended. Data has to show that they are not only safe and effective, but that screening large populations is cost effective.

In looking at some of the drug-related telehealth programs, available generic drugs are often just as effective as those manufactured by the program sponsor. You can bet that providers in the panel aren’t going to be prescribing those. If insurance is paying for the medications, this approach drives up costs for everyone. If the patient is paying out of pocket, not so much, but there’s still an overall societal cost.

No. It’s a prescriber lawsuit waiting to happen. I’m a little on the fence about this one. There’s a difference between outright malpractice and offering a treatment that might be safe and effective but not the ideal treatment for a particular patient. One of the things that physicians are encouraged to do is to take the personal preferences and cultural beliefs of our patients into account before entering into shared decision-making with them.

If that sounds like a mouthful, that’s because it is. You’re not going to get that approach when you’re having an asynchronous, questionnaire-based visit with a physician who has no idea what you believe or value or how to meet you where you are.

Yes. It’s legal and what patients want. I’m going to channel millions of parents of teenagers here. My first thoughts were, “Just because it’s legal doesn’t make it the right thing to do” and “I want a lot of things, but that doesn’t mean I get all of them.”

I’ve treated many patients who think they want something. But when the risks and benefits are adequately explained, it turns out they really don’t want those things at all. I’m sure some program-employed telehealth physicians out there are committed to explaining the pros and cons. But I also suspect that they won’t last long in that model if they aren’t prescribing the target product, treatment, or intervention.

Of course, this happens during in-person visits as well. I once worked for an urgent care with an in-house pharmacy, and we were strongly encouraged to write lots of scripts to treat patient symptoms. Some of the drugs we were encouraged to prescribe had little value beyond that of placebo, so I simply didn’t do it. Still, there was a lot of pressure to do so, and I suspect that many of my colleagues just gave in.

Not sure, but it’s puzzling that doctors do this. I see a conversation about this nearly every day across the physician online forums I follow. A lot of reasons are cited for working in these models. Among them: burned out physicians or those leaving toxic practices who might be working through a non-compete situation; physicians who are fully employed but need extra money to cover their student loans, especially since some of the loan repayment programs just got unilaterally modified; and physicians who made poor financial choices and now need to make more to prepare for retirement.

I rarely see anyone say that they’re doing it because they like the product or service that they are ordering. Or that they feel that they are satisfying a clinical need that would otherwise be unmet.

As for Mr. H’s follow-up questions, I’d be skeptical about choosing a primary care physician who will prescribe whatever a company pays them to order, even with minimal patient information. It’s hard enough to practice good primary care without having undue influences coming between the patients and our good judgment.

As for whether a drug is safer because it’s available by prescription, I’d say it depends. Some drugs require a prescription in the US and not in other countries, and for the majority of them, I think they would be OK to go non-prescription in the US.

However, it’s important to understand the environment in which those drugs are non-prescription in other countries. Patients may have higher health literacy and a greater sense of personal responsibility. Also, in my experience, pharmacists in other countries are often more accessible and better positioned to counsel patients about these selections.

Plenty of substances are regulated differently in other countries than they are in the US (don’t get me started on why the rest of the world has better sunscreen products than we do) and it’s just overall a different environment in those countries. Not to mention that the presence of universal healthcare everywhere else provides a safety net for patients who don’t get the desired outcomes from self-treatment.

It will be interesting to see the final poll results when they come in. Feel free to leave a comment when you vote on the poll, and as always, you are welcome to leave a comment here or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 7/17/25

July 17, 2025 Dr. Jayne 2 Comments


It’s been one of those weeks where I’m pulled in so many directions I’m not sure which way I’m supposed to be going. Just when I think I’ve finished something, another obstacle turns up in my path and I have to swerve.

I’ve attended enough workplace resilience seminars over the years that pivoting from crisis to crisis seems second nature, even if it’s not fully in my comfort zone. Still, there’s something to be said for the excitement of doing the corporate equivalent of a “cattle guard jump” from time to time, so I’m happy to keep on keeping on.

From Universal Soldier: “Re: LLMs replacing physicians. What’s your take on projects like this?” Headlines abound for this kind of work, especially when the media talk about models achieving “diagnostic accuracy” or outperforming the average generalist physician.

The Microsoft AI Diagnostic Orchestrator (MAI-DxO) is claimed to deliver greater diagnostic accuracy at lower cost than physician evaluation. I don’t disagree that we need to figure out how to do workups more efficiently and cost-effectively, but I wonder about the ability to translate this work to bedside realities. Let’s inject some of the realities of the current state of medical practice into the model and see if it can come up with solutions.

We can add a medical assistant who is stuck in traffic and doesn’t arrive in time to room the first patient, increasing everyone’s anxiety level as the office tries to kick off a busy clinic session when they’re already behind before they even start. As the model suggests tests to order, let’s throw in some cost pressures when those interventions aren’t covered by insurance or the patient doesn’t have any sick time to cover their absence from work. Add in a narrow network that makes it nearly impossible to refer to a subspecialist even when it’s needed. Let’s add an influencer or two worth of medical misinformation to the mix. Now we’re getting closer to what it’s really like to be in practice.

It’s great to do tabletop exercises to see if we can make clinical reasoning better. But unless we’re also addressing all the other parts and pieces that make healthcare so messy, we’re not going to be able to make a tremendous difference. I would love to see an investigation of whether physicians can improve their clinical reasoning simply by having more time with the patient, or fewer interruptions when delivering care, reviewing test results, and formulating care plans.

I would also like people to start talking more seriously about how care is delivered in other countries, where better clinical outcomes are achieved while spending less money. Maybe it’s just easier to talk about AI.


An informatics colleague asked me what I thought of the Sonu Band, a therapeutic wearable that promises “clinically proven, drug-free vibrational sound therapy” to improve the symptoms of nasal allergies. The band is used in conjunction with the Sonu app, which works with the user’s smartphone to scan their face and combine that scan with voice analysis and a symptom report to personalize the therapy.

The company says that the facial scan produces skeletal data that is used to create a digital map of the sinuses. It then uses proprietary AI to calculate optimal resonant frequencies for treatment.

Having spent most of my life in the Midwest, I can attest that allergy and sinus symptoms seem to be nearly universal. I reached out to my favorite otolaryngologist for an opinion, and although he pronounced it “fascinating,” he hadn’t heard of it. If it works as well as the promotional materials say, I could imagine it flying off virtual shelves. If you’ve given it a whirl or seen it prescribed in your organization, we would love to learn more.

The American Academy of Family Physicians and its Family Practice Management journal recently reviewed some AI-enhanced mobile apps that target primary care physicians. This was the first time I had seen their SPPACES review criteria:

  • S – Source or developer of app.
  • P – Platforms available.
  • P – Pertinence to primary care practice.
  • A – Authoritativeness, accuracy, and currency of information.
  • C – Cost.
  • E – Ease of use.
  • S – Sponsor(s).

Even if you’re not in primary care (in which case you can feel free to omit the second “P”), this is a good way to encourage physicians to think about the sources of information they use in daily practice.

It’s not mentioned in the article, but the author also encourages physicians to be aware of whether their tools are HIPAA-compliant and whether they’re entering protected health information into third-party apps. He also mentioned that none of the apps reviewed are a substitute for physician judgment.

I would also consider adding an element to the “cost” criterion that encourages users to think about how the app is making money. People seem quick to overlook third parties that are monetizing user information, if they’re even aware it is happening at all.

I will use this as a teaching tool with students and residents, especially since they’re quick to download new apps without doing a critical review first.

I’m not sure how I missed this one, but OpenEvidence filed a complaint against Doximity last month, alleging that Doximity’s executives impersonated various physicians and used their NPI numbers to gain access beyond what they should have as lay people. Such activities are prohibited by the OpenEvidence terms of use, assuming anyone actually reads them (they’re included in the complaint as Exhibit A if you’re interested).

The complaint alleges “brazen corporate espionage” and points out that Doximity “has built its brand on physician trust and privacy protection.” The defendants are alleged to have used prompt injection and prompt stealing to try to get at proprietary OpenEvidence code.

Pages 3 and 4 of the complaint describe a few examples of attacks in detail. The complaint notes that “this case presents the rare situation where defendants’ illicit motives and objectives are captured in their own words.” I always love reading a good court document and this one did not disappoint.

What do you think about corporate espionage? Can companies truly protect their intellectual property anymore? Leave a comment or email me.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 7/14/25

July 14, 2025 Dr. Jayne 3 Comments

There’s always a lot of buzz around wearables. The majority of US adults have a smartphone in their pocket or purse, so a treasure trove of data can be collected without adding a secondary device.

Most of the people I talk to have no idea how much information is being captured by the apps on their phones, let alone the types of entities to which vendors are selling their personal data. Nearly everyone I know leaves their location services on 24×7. About half the people I interact with, along with their families, use tracking apps to keep up with each other’s location.

An article in JAMA Network Open this week caught my eye with its title, “Passive Smartphone Sensors for Detecting Psychopathology.” The authors analyzed two weeks of smartphone data from 550 adult users to see if “passively-sensed behavior” could identify particular psychopathology domains. They noted that this is important work because smartphones can continuously detect behavioral data in a relatively unobtrusive way.

They had two main objectives. First, to determine which domains of psychopathology can be identified using smartphone sensors. Second, to look for markers for general impairment and specific transdiagnostic dimensions such as internalizing, detachment, disinhibition, antagonism, and thought disorder.

Data were pulled from global positioning systems, accelerometers, motion detection, battery status, call logs, and whether the screen was on or off.

The authors were able to link nearly all the domains with specific sensor-captured behaviors, creating “behavioral signatures” by measuring things like call volume, mobility, bedtime, and time at home. Specifically, they were able to link disinhibition with battery charge level and antagonism with call volume.
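Features like these are typically derived from raw event logs before any modeling happens. Here is a minimal sketch of what computing two of the study’s "behavioral signature" inputs, call volume and time at home, might look like; the log formats, function names, and the 200-meter home radius are my own illustrative assumptions, not the authors’ actual pipeline.

```python
from math import radians, sin, cos, asin, sqrt


def call_volume(call_log):
    """Count call events in a list of (timestamp, direction) tuples."""
    return len(call_log)


def fraction_time_at_home(gps_samples, home, radius_km=0.2):
    """Fraction of GPS samples within radius_km of the inferred home location.

    gps_samples: list of (lat, lon) tuples taken at a fixed sampling interval,
    so the fraction of samples approximates the fraction of time.
    """

    def haversine_km(a, b):
        # Great-circle distance between two (lat, lon) points in kilometers.
        lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
        dlat, dlon = lat2 - lat1, lon2 - lon1
        h = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(h))

    if not gps_samples:
        return 0.0
    at_home = sum(1 for p in gps_samples if haversine_km(p, home) <= radius_km)
    return at_home / len(gps_samples)
```

A real pipeline would compute dozens of such features per day per participant (mobility radius, inferred bedtime, screen-on time) and feed them into the statistical models that produce the domain linkages described above.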

Based on the phone-related behaviors I observe, it would be interesting to see if my gut feeling about a user’s psychopathologic situation is accurate. I would also be curious to know if there is a difference in the data looking at other age groups that weren’t studied, such as teens or the elderly. Although the study was done on adults, the mean age was 38 with a standard deviation of 8.8, so there is certainly some opportunity to look in detail at other groups.

I was recently with a large group of individuals in their 70s. Their visible phone behavior would rank them right up there with the teenagers I know.

Reading about this made me think about all the data that companies are collecting now that they’re focusing on potentially eliminating remote work and ensuring high levels of productivity. There are plenty of stories out there about people using so-called “mouse jigglers” to make it look like they’re working so that their computers don’t go to sleep. Of course, companies that restrict what kinds of USB devices can be plugged in might be attuned to that, and more sophisticated monitoring tools can analyze keyboard usage patterns and detect if something shady is going on.

Remote work isn’t the only place people might be slacking off. I see plenty of people who have in-person jobs who constantly use their phones for potentially non-work activities. Many apps might be adjunctive to job roles and responsibilities, but I see a lot of online shopping and social media use as well.

I’d love to see some robust research that looks at communication and collaboration strategies within an organization to see which workers might thrive with one style more than another. I’ve worked in organizations that have documented communication plans that make it clear what kinds of work should be conducted using meetings, phone calls, email, instant messaging, and texting, but those kinds of policies are few and far between these days.

Even without a written policy, workplace culture defines how things get done, but when you’re a new person, a consultant, or a contractor, it can be difficult to try to figure that out unless someone clearly explains the rules of engagement.

I worked in one organization that basically used Slack as the connective tissue of the organization. I have to admit that I struggled there. Every time I asked where to find a resource, the answer was, “It’s in Slack,” but it didn’t seem like there was any rhyme or reason to how things were organized. More often than not, important documents were accessed through links within a message thread rather than being in a “files” area or in specific channels that made sense to those of us who were new.

A tremendous amount of work seemed to get done via direct messages rather than channels, making it even more difficult to find things. At one point, during a critical issue with a release, I had a separate cheat sheet of which conversations to look through when I needed certain kinds of information, since I had an endless list of direct message conversations with various combinations of the same group of people.

When I asked if there was any team- or company-level documentation on how it was all supposed to work, I felt like I was revealing myself as someone who simply couldn’t keep up. As a consultant, I had multiple conversations with leaders at the company about how this was working and how I had seen it contribute to process defect rates and rework. I also knew of plenty of examples at that company where people downloaded documents to their own hard drives so they could find things later, but then ended up working off of outdated specifications because they were using local copies rather than shared ones. Not to mention that if people can’t find clear information, they are more likely to improvise or otherwise wing it, which is generally a bad idea when you’re building healthcare software.

If you could use data to find scenarios where someone was working on a deliverable – say, a slide deck or a document — and then spent 10 minutes rapidly flicking through various file structures or messaging platforms, opening and closing multiple documents, and doing web searches before finally returning to the document, it could be an indicator of disordered work patterns that might benefit from some kind of intervention.
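That scenario could be flagged from an activity log with a simple heuristic. This is a hedged sketch under my own assumptions: the event format and the ten-minute/five-switch thresholds are purely illustrative:

```python
def flag_disordered_search(events, window_seconds=600, min_switches=5):
    """Flag spans where a user leaves a document, rapidly switches
    contexts, and returns to the same document within the window.

    `events` is a time-sorted list of (timestamp_seconds, app_or_file)
    tuples. Returns a list of (start, end, document) spans.
    """
    flags = []
    for i, (t0, doc) in enumerate(events):
        # Look ahead for a return to the same document.
        for j in range(i + 1, len(events)):
            t1, other = events[j]
            if t1 - t0 > window_seconds:
                break  # outside the window; stop scanning
            if other == doc:
                # Count the context switches between leaving and returning.
                if j - i - 1 >= min_switches:
                    flags.append((t0, t1, doc))
                break
    return flags
```

For example, a user who leaves a slide deck, flicks through six other apps and files, and comes back a minute later would produce one flagged span. The interesting analysis would be at the team level, not the individual one.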

If you see multiple people on a team with these work habits, that may be indicative of the need for a different kind of organizational structure for work product and other materials. I think those patterns are much more important to explore than knowing whether someone’s mouse is moving.

What do you think about looking at smartphone or other device data to learn more about people’s behavior and the potential for psychopathology? Would having more information make things better or potentially make things worse? Leave a comment or email me.

Email Dr. Jayne.

EPtalk by Dr. Jayne 7/10/25

July 10, 2025 Dr. Jayne 3 Comments


I finally had time to dig into the recent paper about the “accumulation of cognitive debt” that happens when using AI assistants.

As a proud member of Generation X, I first experienced those rites of passage called “the five-paragraph essay” and “writing a research paper” in middle school. My English teacher — this was before everyone called it Language Arts or something else more inclusive — made us create a 3×5 index card for every reference. We had to have cards for every quote or idea we planned to use. For those of us whose brains were wired for reading and writing, it was a painful process. We just wanted to jump in and start writing. However, for others it was an exercise in organizing thoughts and making sure they had enough material to support their conclusions.

Fast forward to my university days, when I was a teaching assistant for an English 101 “Thinking, Writing, and Research” class. Those pesky index cards were still recommended, although not required. Personal computers had just made their way into dorm rooms, but as I graded research essays, I could easily tell who knew how to organize their thoughts and who was simply phoning it in.

The professor I worked with always selected obscure topics for the assignments, so it was nearly impossible to copy the work of others. That made grading all those essays quite an adventure. This was the era when those with computers had to figure out how to best use them on an as-you-go basis, because there certainly weren’t any classes offered that explained the best ways to use various pieces of software. Subsequent generations always had access to computers for schoolwork, so I’m not sure how much of the process aspect of writing is still taught versus enabling people to just sit down at the keyboard and get to it.

Within that context, I started reading the paper. It looked at how three cohorts completed an essay writing task. LLM-only, search engine-only, and brain-only groups completed three writing tasks using their assigned method. They then had a fourth task where some of them were crossed over to another group. The participants were monitored with electroencephalography (EEG) to assess cognitive load during the tasks. Additionally, essays were assessed using natural language processing, scored with the assistance of a human teacher, and scored by an AI judge.

The authors concluded that the brain-only group had the strongest brain connectivity, followed by the search engine group. The LLM group had the weakest connections. Additionally, participants in the LLM group had lower self-reported ownership of their essays and had difficulty quoting their own work. Ongoing analysis showed that “LLM users consistently underperformed at neural, linguistic, and behavioral levels.”

The authors commented, “These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning.” Given some of the personal statements that I’ve read for medical students over the last two years, there’s so much LLM use that it’s hard to get a feel for who the candidates really are as people. Maybe this research will convince folks to dial it back a bit.


I enjoy learning about new players on the healthcare IT scene. One that I’ve been watching in recent months is CognomIQ. The company’s semantic-based data management solution has been optimized for healthcare, in particular for research institutions.

The company originally caught my eye when I heard that industry veteran Bonny Roberts had joined the team as VP of customer success. She’s a longtime HIStalk fan and served as co-host of the final HIStalkapalooza back in the day. I trust her to recognize the real thing.

The company’s CTO, Eric Snyder, can discuss the importance of data without succumbing to industry buzzwords or getting bogged down in jargon. He recently delivered a guest lecture for a data and visualizations class at the University of Rochester. He followed it up with a social media post on data literacy and the problems that happen when different parts of the healthcare system describe parts of the care continuum in different terms.

My favorite quote: “I struggle with the answer to the data literacy in healthcare problem because it’s like creating a second floor of a house when the first floor is propped up on sticks. We never solidified the foundation as an industry, instead we moved on to AI.”

I wish more people in the industry understood this way of thinking. I would even go a step farther to say that we’ve built a house of cards and now we’re putting AI on top of it, but I’m trying to be less cynical. Those of us on the patient care front lines have spent the last quarter century creating a tremendous volume of patient-related data that is just floating around and isn’t helping organizations reach their potential. I think of all the wasted hours of clinicians clicking and the back-end systems being unable to do anything useful with the data because of a lack of standardization or inconsistent standards.

Snyder has spent the better part of the last decade leading technology innovation work at the Wilmot Cancer Institute and understands the importance of data to solve complex problems. The platform can aggregate hundreds of data sources and transform them in an automated fashion, which sounds awfully attractive to those of us who have had to engage in weeks or even months of cleanup prior to embarking on reporting or research efforts.
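The cleanup problem is easy to illustrate. Below is a minimal sketch of schema normalization, the general idea behind automated transformation; the source names, field mappings, and unit conversions are entirely hypothetical and not a description of CognomIQ's actual platform:

```python
# Hypothetical source records using different field names and units
# for the same concept -- the kind of inconsistency that forces weeks
# of manual cleanup before reporting or research can begin.
source_a = [{"pt_id": "A1", "hgb_g_dl": 13.2}]
source_b = [{"patient": "B7", "hemoglobin_g_l": 141}]

# Per-source mapping: raw field -> (canonical field, conversion).
MAPPINGS = {
    "source_a": {"pt_id": ("patient_id", str),
                 "hgb_g_dl": ("hemoglobin_g_dl", float)},
    "source_b": {"patient": ("patient_id", str),
                 "hemoglobin_g_l": ("hemoglobin_g_dl", lambda v: v / 10)},
}

def normalize(records, source):
    """Rewrite each record from a source into the canonical schema."""
    mapping = MAPPINGS[source]
    out = []
    for rec in records:
        canon = {}
        for field, value in rec.items():
            target, convert = mapping[field]
            canon[target] = convert(value)
        out.append(canon)
    return out
```

Once every source lands in one canonical schema, aggregation across hundreds of feeds becomes a loop over mappings rather than a months-long cleanup project, which is the attraction.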

I also have to give a shout out to the company’s CEO, Ted Lindsley, whose LinkedIn profile boasts, “Healthcare Data that doesn’t suck.” Honestly, seeing that made my little informatics heart go pitter-patter, because it’s incredibly refreshing to see someone who is excited about what they do and is ready to express it in no uncertain terms.

I reached out to Ted to learn more. He was willing to entertain my anonymous inquiries. Recent highlights include the company coming out of stealth mode, showcasing its work at the recent Cancer Center Informatics Society Summit, and announcing its seed round. He had some great analogies about technology leaping forward and had me laughing about moving from MS-DOS and Windows 3.1 to Windows 95, even though my ability to talk about that transformation likely betrayed my age. He’s certainly no stranger to the work needed to give the industry a kick in the pants and get it moving ahead. I’m looking forward to seeing where CognomIQ goes this year and beyond.


The last couple of weeks have been pretty exhausting and free time has been scarce, so I had to rely on an AI-generated cake in celebration of this being my 1500th post. I was hoping to whisk myself to a beach to celebrate, but instead I have to make it through another major upgrade first. When I was a young medical student sitting down at a green-screen terminal to access lab results, I never imagined writing about my experiences with healthcare IT, let alone there being people who would read it on a regular basis. Thanks for supporting my work, and a special thank you to those readers who share their comments and ideas so I can keep the words flowing.

Email Dr. Jayne.

Curbside Consult with Dr. Jayne 7/7/25

July 7, 2025 Dr. Jayne 1 Comment


I’ve spent a lot of my career working on the “softer” side of clinical informatics, such as change management, governance, adoption, and optimization. Although I’ve implemented a couple of technologies in my career that have been dramatic, most of the time I’m working on projects that are a little more subtle.

I’m appreciative of projects like that when I have to gain buy-in from difficult stakeholders. When they don’t feel like you’re pulling the rug out from under them, they are more likely to align with the goals and objectives. On the other hand, sometimes when projects are too low-key, they’re not perceived as valuable. It’s a fine line that has to be walked.

I can’t even count the number of practices where I’ve helped implement EHRs over the years. I’ve worked with people ranging from those who have never used computers prior to the EHR to those who have been using them since birth.

In the early days of EHR, people used to talk about the “older” physicians being resistant. Fortunately, I had a good story to counter that after meeting a curmudgeonly colleague who informed me that he had been “advocating for electronic charting since long before you were born, young lady.” He and I actually competed for the first EHR-related role in our health system. I think he was a little grumpy that he didn’t get the position. I grew to appreciate his point of view as he pushed back on some of the things we were trying to do, because he always wanted to make things just a little bit better.

I’ve also worked with younger physicians who were incredibly resistant to adopting technology, particularly anything other than the one that they personally felt was the best. There’s nothing quite as entertaining as watching an Apple devotee argue with the IT team about how he absolutely, positively cannot use the PCs that are present in every shared workspace in the hospital. Folks like that were especially fun during the early days of “bring your own device” programs. They demanded to be able to use hardware that didn’t comply with the published standards.

I’ve worked with ER physicians who complained about how long it took them to do their charts, yet were found to be spending a good chunk of their day on the Zappos website. 

These examples show how differing perspectives and experiences can have a tremendous impact on the success of a project and, in turn, how those outcomes can ultimately influence the patient experience. When you have one physician in a practice who refuses to do the recommended workflow, it can cause extra work for the staff. It can also result in confusion and delays for patients who are waiting for their results or for a response from the physician.

I’ve long wondered what makes one person think a new solution is awesome and another one thinks it’s awful when they are doing the same work and caring for the same patients. An informatics colleague and I were talking about this over a recent round of cocktails. She brought up a recent study from the Proceedings of the National Academy of Sciences that looked at how different people perceive works of art.

Although I lived with an art history major for a number of years, I hadn’t heard of the concept of the “Beholder’s Share,” where a portion of a work of art is created by the memories and associations of the person viewing or experiencing it. I suppose it’s a more academic rendering of the idea that beauty is in the eye of the beholder.

The researchers behind the article employed high-tech means to look at it, however, using functional MRI (fMRI) to identify how people used their brains differently when viewing different types of art. Apparently, abstract art results in more person-specific activity patterns, while realistic art produces less variable patterns. They also noted activity in different parts of the brain when participants looked at abstract art.

I’d love to see how different end user brains would react to differences in EHR screens and workflows. Maybe we could use that information to better predict how users will perform with different tools. Instead of looking at a subject’s brain activity while looking at a Mondrian painting, as the study did, we could see how their brains perform when confronted with different user interface paradigms.

I’ve seen EHR and clinical solution designs over the years that were jarring in color or layout. I’ve seen those that were so vanilla that nothing seemed to catch the user’s attention.

Another concept in the art world is that of shared taste. It explains why some groups of people prefer the same things, while others might find them objectionable. People typically know whether they prefer art from classical times, the Renaissance, the Impressionists, or abstract or modern artists. I would bet that we could create groupings around different types of clinical data visualization and how they can best be used in patient care.

Similarly, I would be interested to see if users who have certain sentiments about a given piece of technology can be grouped in a particular way, such as by specialty, user demographics, location, or tone of the program where they completed their training. Similar to the concept of precision medicine, I wonder if we could use that information to create precision training or a precision technology adoption curriculum that could help users adapt to new tools that end up in their workflows.

Even without the expense and risk of something like fMRI scans, I would bet that we could do a lot in clinical informatics to better understand our users and the learners with whom we are engaging. I’ve seen quite a few surveys that ask new employees about their experience with electronic documentation or technology in general, but they are fairly superficial. They usually have questions like, “Which of the following systems have you used?” with a list of vendor names. They don’t recognize if the user was on a heavily customized version or an out-of-the-box configuration. Most users wouldn’t know anyway unless they have experience behind the informatics curtain. 

Institutions have come a long way in recognizing different learning styles and whether people prefer classroom, asynchronous, or hybrid learning methods. I don’t doubt that the training and adoption efforts that we see today might be supplanted by other paradigms in the future.

Is the beauty of the EHR in the mind of the beholder, or is it something with which users simply have to cope? Is one platform more abstract than the other? Will we ever see an EHR with a classical sense of style? Leave a comment or email me.

Email Dr. Jayne.
