Healthcare AI News 1/28/26
News
OpenAI introduces Prism, a free ChatGPT-based workspace for scientists to write and collaborate on research.
A Louisiana news site reports that LCMC Health has removed its patient consent disclosure stating that it uses Nabla for ambient documentation. The organization’s compliance department determined that patient consent is not required for other types of note-taking and therefore is not needed for an AI scribe. Louisiana law requires only one-party consent for audio recording, which in this case would be the provider.
Testing finds that the latest version of ChatGPT cites sources that were themselves generated by other AI tools, including Elon Musk’s AI-created encyclopedia Grokipedia, which has been accused of promoting right-wing narratives on controversial topics. Experts question whether AI tools can be trained to ignore potentially incorrect AI-generated content, a failure that could make information recursively less accurate. When asked by a news outlet about a fabricated quote that ChatGPT attributed to the site, an xAI spokesperson responded, “Legacy media lies.”
Business
The Guardian warns that Google’s AI Overviews could pose a public health risk because they summarize search results that may be inaccurate or low quality. A study of health-related queries found that AI Overviews rely heavily on user-uploaded content from Google-owned YouTube. Experts caution that users may accept the summaries at face value, and that even when summarizing medical literature, the tool can’t assess the quality of the underlying research.
Research

A Wolters Kluwer survey finds that 58% of nurses use generative AI in their personal lives and 46% at work. Nearly half believe that AI could reduce nurse burnout by automating documentation, triaging patient questions, and streamlining workflows, while 62% say that using AI for onboarding and training can get new nurses onto the floor faster. Most report that their organizations lack formal AI policies or training.
A small UCSD Health study finds that clinicians generally view Epic’s EHR-integrated LLM chart review tool as useful for summarizing patient records, even though it frequently misses relevant details and occasionally hallucinates, requiring careful human verification. The authors conclude that such tools can augment workflows, but are not reliable enough to be used without clinician oversight.
Researchers believe that agentic AI systems could help hospitals prepare for extreme climate events that fall outside of emergency planning assumptions.
A study finds that of the 42% of US hospitals that use Epic, 62% have implemented ambient documentation. Adoption was significantly higher in metropolitan and government-operated hospitals and much higher in non-profit versus for-profit hospitals.
Other

ChatGPT Health gives the Washington Post’s technology columnist an F for cardiac health after analyzing a decade of his Apple Watch data, a conclusion that his physician and Eric Topol, MD, say is wrong. When he repeated the test with Anthropic’s Claude for Healthcare, it assigned a C, although both tools changed their grades when he repeated the same question. He also notes that his Apple Watch reports a significantly different resting heart rate each time he upgrades to a new model. Topol concludes, “You’d think that they would come up with something much more sophisticated, aligned with the practice of medicine and the knowledge base in medicine. Not something like this. This is very disappointing.”
Contacts
Mr. H, Lorre, Jenn, Dr. Jayne.
Get HIStalk updates.
Send news or rumors.
Follow on X, Bluesky, and LinkedIn.
Sponsorship information.
Contact us.