Curbside Consult with Dr. Jayne 11/24/25
I wrote earlier this month about an article that examined whether physicians think their peers who use AI are less competent. I brought this up in a recent conversation with other clinical informaticists to see what they had to say.
The responses were interesting. Although the general answer was “it depends,” opinions varied based on the type of AI in question.
Many of the individuals who were part of the conversation are knee-deep in AI solutions as part of their work. They have a different level of understanding of exactly what constitutes AI compared to others who aren’t as engaged with the technology.
For most non-generative AI solutions, the group had a level of comfort that was commensurate with the time that the solutions have been in use. No one questioned the utility of AI in situations where pattern recognition is key, such as in the review of cervical cytology specimens or in diagnostic imaging. No concerns were voiced about AI-powered search tools that help clinicians dig into large data sets and provide verbatim answers.
Peers also raised no concerns about AI being used for natural language processing tasks, as long as the systems are non-generative. These can be used for analyzing the output of interviews or feedback sessions and have been used for years. One colleague specifically called out spam filters, challenging people who are afraid of AI to go a couple of days without one to see how they like it.
Another colleague mentioned a “smart buoy” that is located on a lake near his home. It determines if it’s safe to swim by monitoring temperature, wind, water pH, and turbidity while analyzing the correlation of those elements to bacterial counts.
As far as generative AI, people were generally positive about AI-assisted responses to patient portal messages, as long as the system requires a clinician to click the send button to indicate that they read the response and agree with it.
They were less confident about AI-assisted chart summarization tools because of the potential liability if data elements are missing or incorrect.
Some good discussion arose around the fact that it’s a trade-off since humans might miss or misinterpret something when reviewing bulky charts. Studies of this are not widely known in some clinician circles. Everyone agreed that we need better data that compares the performance of AI versus humans for specific tasks to better understand the risk-benefit equation.
The conversation drifted away from patient-facing generative AI to the tools that clinicians are using as they complete their Maintenance of Certification (MOC) activities. In response to the question of whether peers perceive physicians who use AI tools as less competent, one person noted, “If you’re not using AI to do your MOC, you’re crazy.” MOC activities often take the form of question blocks that must be answered quarterly, or annually in some circumstances, and many physicians feel that they are make-work that doesn’t necessarily reflect the realities of their practice or expertise.
For example, in family medicine, the questions cover the whole scope of the specialty, even though most family physicians tailor their practices to include or exclude certain procedures or populations. The majority of us don’t provide obstetric care. Those who practice in student health clinics likely don’t see patients in the geriatric demographic. Some don’t see infants and young children. Some practice exclusively in emergency or urgent care settings.
Some who are in full-time clinical informatics had to give up clinical care due to lack of access to appropriate part-time opportunities. They are required to maintain their primary certification to retain board certification in clinical informatics. That creates a significant burden for those who aren’t still seeing patients.
For those who have stopped seeing patients, MOC is a “check the box” activity. Most boards allow users to answer the questions in an open-book format, so using AI tools is a natural evolution. They help physicians get to their answers faster, just like they would in the clinical world, although in this case they’re helping reduce an administrative burden.
No one in the conversation had seen any specific prohibition on using AI tools to answer the questions. The only limitations are that you can’t discuss the questions with another person and you must answer them within the provided time limit.
All agreed that a pathway is needed for those who boarded in clinical informatics to allow their primary board certification to lapse after some amount of time. However, they also agreed such a change is unlikely before their anticipated retirement.
When asked specifically about using AI to create notes, such as with an ambient documentation solution, no one admitted to thinking badly about clinicians who do so. There was a general consensus that ambient documentation solutions are one of the few things that CMIOs have rolled out that generate thank you notes rather than emails of complaint and that the technology isn’t going away anytime soon. The concerns were more about the cost of the solution.
Spirited discussion arose about whether these tools will have a negative impact on physicians in training. Some firmly asserted that learning to write a good note is essential for physicians and that the note-writing process serves as a reasoning exercise. One residency program director noted that several applicants have asked him whether residents are allowed to use the technology, so it may become a differentiator as candidates assess potential programs.
Anecdotally, I don’t think patients think worse of physicians who use AI solutions. A friend recently reached out with his experience. “I just got back from my annual visit with my PCP. He’s using some new AI tool that transcribes the entire conversation during the visit, then cobbles the important parts together in the after-visit summary. It was done cranking that out in the time it took him to listen to my lungs and look in my ears and down my throat, and everything was correct. It even transcribed non-traditional words like ‘voluntold’ correctly.”
As a patient who has had inaccurate notes created by physicians who were in a hurry while charting, I would prefer AI if it meant not having imaginary exam elements added to my chart.
It’s always gratifying to meet with others who are doing work in my field and to learn how those from different institutions approach a problem differently or have different outcomes. I wish I could have those kinds of robust conversations more often, but I’ll have to settle for only having the opportunity a couple of times a year.
If you had a group of clinical informaticists captive for an hour, what topic would you want to see them discuss? Leave a comment or email me.
Email Dr. Jayne.