EPtalk by Dr. Jayne 4/9/26
New survey results from The Ohio State University Wexner Medical Center show a 10% decline in support for the use of AI in healthcare over the last two years. Despite that shift, respondents say that they are using AI to explain test results, research symptoms, compare treatment options, and prepare for medical appointments.
Researchers also found a decline in the belief that AI can improve healthcare efficiency.
Physicians sometimes tire of fielding medical questions from friends and family, but I would much rather have them reach out to me than consult general-purpose AI tools that might give them bad information. I may not know all the answers to their questions, but I know where to look for good information, and I will encourage them to discuss their questions and concerns with their care teams.
I was surprised to learn this week that the Agency for Healthcare Research and Quality (AHRQ) has been largely inactive this year. The organization has spent none of its appropriated funds after returning a good chunk of its 2025 funding.
In hindsight, I shouldn’t be surprised. So many healthcare research projects have been defunded over the last year, and AHRQ headcount has been reduced from 300 to 90.
AHRQ was created in 1999 to investigate how to make healthcare safer and better for patients, which seems like a noble goal. Studies that were already in flight have gone unfunded, including those that focus on telehealth for Medicaid patients and on reducing unnecessary antibiotic use. It’s hard to practice evidence-based medicine when new evidence isn’t being produced.
Based on some of the other issues that are discussed in the write-up, AHRQ is likely at the end of its lifespan, unless something drastic changes in the world of healthcare policy.
I love this piece in Nature that discusses a project in which a researcher created a non-existent disease, wrote it up in two “obviously bogus academic papers,” uploaded the papers to a preprint server, and then watched to see if AI chatbots would pick them up.
She described the made-up condition “bixonimania” as including painful, itchy eyes after spending too much time staring at screens. She chose the name because it sounds ridiculous, explaining, “I wanted to be really clear to any physician or any medical staff that this is a made-up condition, because no eye condition would be called mania – that’s a psychiatric term.”
Not only did LLMs pick up the information, but citations of the bogus papers also appeared in peer-reviewed literature.
She included clues in the fake papers, such as an acknowledgement to a professor at Starfleet Academy who worked on the USS Enterprise and a note that the paper was funded by “the Professor Sideshow Bob Foundation for its work in advanced trickery.” The text also included a statement that “this entire paper is made up.”
Shame on folks who use AI to create references when they didn’t read the primary material, and an eyebrow raise to Copilot, Gemini, Perplexity, and ChatGPT. I asked my LLM friend Claude, who explained the fakery and noted that the fictional author’s Slavic-sounding name Lazljiv Izgubljenovic translates to “The Lying Loser.”
I was pleased to see that an ethics advisor was involved in the project since the author was concerned about potentially injecting a fake illness into the literature. If nothing else, the effort reinforces the need to remain vigilant and to actually read the source material.
Mr. H launched a poll earlier this week asking whether readers would reject a job candidate who used AI to create their resume, emails, or headshot. He also questioned how one would even know that AI was used.
Companies are using AI to screen candidates, so I wouldn’t penalize a candidate who uses AI to try to beat me at my own game. These days, resumes need to be tweaked to align with the posted job description. The days of keeping a resume “on file” when a candidate isn’t a fit for a particular role feel long past.
I do think less of candidates who use AI in a blatantly obvious manner, such as failing to correct distortions in their photos or editorial mistakes. I wouldn’t rule them out, but I would rank them lower than others, assuming other factors were equal. If they don’t edit a document that is intended to demonstrate their best work, it makes me wonder what their day-to-day work product might be like.
We’ve all heard of information blocking, but today I learned about “workflow blocking.” A Viewpoint article in the Journal of the American Medical Association uses the term to describe a situation where a lack of workflow integration for new tools hinders effective use.
The authors note that this gap stems from the limits of interoperability policy as well as from tools that are difficult to use. Together, these factors determine whether a tool actually helps clinicians or adds to their burden. The piece discusses other contributors to the problem, such as proprietary APIs, poor API documentation, cumbersome authentication requirements, and onerous contractual terms.
The authors address the market dynamics that are involved in EHR integration. They note “patterns in which EHR vendors initially partner with third-party developers to enable early workflow integration, then later introduce their own competing tools that benefit from deeper embedding and preferential positioning within clinical workflows.” They also observe that EHR vendors may design clinician workflows where using third-party tools requires extra clicks or authentication steps.
I’m a big fan of the idea that competition pushes vendors to improve the quality of their products. There is also something to be said for having a product that is so desirable that you don’t have to resort to those maneuvers to keep competitors out. We all know that’s not how the industry works, but it’s still nice to dream about being able to take the high road.
The authors call on policymakers to understand the difference between data being available and data actually being usable during clinical care. Tools that allow access only after the patient visit has concluded aren’t as useful as those that allow real-time access. They also call for requirements that vendors report integration issues, which would make workflow blocking more visible.
What do you think of workflow blocking? Have you seen it in action? Leave a comment or email me.
Email Dr. Jayne.
