
Morning Headlines 1/20/26

January 19, 2026

Ambulatory EHR Excellence 2026: Top Vendors Recognized for Breakthrough Innovation and Specialty User Satisfaction in Black Book 2026 Surveys

Black Book Research releases its 2026 list of top-ranked EHRs for ambulatory care, with ModMed, Epic, and NextGen Healthcare noted as leaders in multiple categories.

VHA lacks ‘formal mechanism’ for mitigating clinical AI chatbot risks, watchdog says

A VA Office of Inspector General report finds that the Veterans Health Administration doesn’t properly oversee its use of generative AI chatbots, potentially compromising patient safety.

Community Care of North Carolina Partners with Innovaccer to Advance Value-Based Care Delivery for Independent Primary Care Providers in North Carolina

Community Care of North Carolina selects Innovaccer’s Healthcare Intelligence Cloud for population health analytics.



Currently there are 3 comments on this article:

  1. If only more health systems were as transparent as the VA. We get a helpful look at the VA’s inner workings, plus more inspections, because they’re a government organization. For that reason we also see more critical news about them, not because they’re dropping the ball in this instance. They’ve been relatively open about their AI tool selection process, and the risks their OIG identified are hardly confined to the VA alone.

    • I’m generally in favor of fairness and withholding judgment. However, in the context of the Oracle EHR’s $100b of waste, fraud, and abuse, I think it’s a reasonable default assumption that they’re dropping the ball until proven otherwise. Even giving them every benefit of the doubt, they’re trying to run (with AI) when they haven’t walked (a functional EHR) for eight years and counting.

      • If they haven’t coordinated with the patient safety team, they have by definition “dropped the ball”.

        And, given the reported problems with the data in both the new and old systems, I have no confidence that such data would be sufficient for AI to perform at the ‘regular levels of failure’ we see in other AI solutions. Hopefully the AI isn’t trying to digest data across VISNs, as there is significant variance between VISNs that is only ‘rectified’ in the data warehouse, to which you would presume the AI systems don’t have access.

        And still, who is doing the testing on such systems, the AI team? If the two groups don’t have a risk registry and a risk mitigation mechanism, then they really are just drinking the AI Kool-Aid. Even presuming that function-level testing was conducted, how are they monitoring each of those functions to assure that the AI solutions haven’t deviated from their original ‘certification’?

        VHA Watcher does have a point about transparency: we have little idea what other solutions are doing to validate their AI integrations, and given that most of the data scientists involved don’t have a clinical background, you would presume it is through user testing. Nor do we generally know what issues they have come across that have been sent back to the vendor for remediation.

