Healthcare AI News 11/12/25
News

The Vatican convenes a Rome conference this week titled “AI and Medicine: The Challenge of Human Dignity,” where a church official warned of the risk of “transforming health and illness into mere numerical data … the ability to personalize treatment remains an irreplaceable medical skill.” In his remarks to participants, Pope Leo XIV urged healthcare professionals to use AI responsibly, emphasizing that healthcare cannot be reduced to problem-solving and that technology must not interfere with the patient–caregiver relationship. He concluded by cautioning that “vast economic interests are often at stake in the fields of medicine and technology, and the subsequent fight for control.”
Microsoft forms an MAI Superintelligence Team to develop AI that exceeds human capability, with medical diagnostics as its first focus area.
OpenAI is reportedly considering entering the consumer health market, such as by creating a personal health assistant or a health data aggregator.
Business

Sentara Health will implement Andor Health’s agentic AI virtual care software at its 12 hospitals, starting with virtual nursing, virtual sitting, remote consultations, and transactional care management.
InterSystems launches HealthShare AI Assistant, which provides a conversational chat user interface for its HealthShare Unified Care Record.
Research

A small study of patients in Africa finds that frontline nurses and community health workers can identify patients who are at risk for heart failure with reduced ejection fraction by using Eko Health’s AI-assisted stethoscope.
A Black Book Research survey finds that most US hospitals are underfunding AI governance even as adoption accelerates. Only 22% say they could deliver an auditable AI explanation to regulators or payers within 30 days, citing lack of vendor explainability as the biggest barrier.
Other

A TV station’s test finds that while ChatGPT and Gemini answered health questions with disclaimers that they aren’t real people or licensed professionals, AI storytelling platform Character.AI displayed a similar warning but then falsely claimed to be a real doctor, providing a fake name along with a valid medical license number that actually belongs to a Los Angeles immunologist. The company says that user-created characters are fictional and for entertainment only, which is why it includes the disclaimer.
The American Nurses Foundation (ANF) partners with Hippocratic AI to fund three nurse-led grants of $10,000 each for experienced frontline nurses to explore AI and innovation in nursing.
A healthcare empathy professor says that while AI can generate empathetic-sounding written responses, the real issue is that a broken healthcare system has drained clinicians of empathy through paperwork, burnout, and rigid protocols, effectively turning them into machines. He warns that we are moving toward an ironic world where AI takes over the parts of care humans do best, while humans are left doing tasks that computers should handle. He concludes:
The technology will continue advancing, regardless. The question is whether we’ll use it to support human empathy or substitute for it and whether we’ll fix the system that broke our healthcare workers or simply replace them with machines that were never broken to begin with.
Psychiatrist and political anthropologist Eric Reinhart, MD argues that when AI is installed in “a health sector that prizes efficiency, surveillance, and profit extraction,” it becomes just another tool for commodifying human life. He adds that AI can’t improve medicine by leapfrogging structural change, but it does give policymakers and corporations an excuse to ignore abysmal public health and hospitals a way to squeeze more profitable productivity out of doctors. He says:
We risk entering a perverse loop: machines are supplying the language with which patients relay their suffering, and doctors are using machines to record and respond to that suffering. This cultivates what psychologists call “cognitive miserliness”, or a tendency to default to the most readily available answer rather than engage in critical inquiry or self-reflection. By outsourcing thought, and ultimately the most intimate definitions of ourselves to AI, doctors and patients risk becoming yet further alienated from one another.
Contacts
Mr. H, Lorre, Jenn, Dr. Jayne.
Get HIStalk updates.
Send news or rumors.
Follow on X, Bluesky, and LinkedIn.
Sponsorship information.
Contact us.
