Healthcare AI News 10/18/23
News
OpenAI adds voice capabilities to the ChatGPT mobile app, which can carry on a conversation with the user through one of several available natural-sounding voices. Existing voice assistants such as Alexa, Siri, Google Assistant, and Cortana will need to follow quickly or risk becoming instantly obsolete.
Rumors suggest that OpenAI will soon launch AI-powered autonomous agents, which can interact with other software such as email or calendars to complete tasks without supervision.
Vanderbilt University Medical Center develops an EHR-embedded AI tool to identify pediatric patients who are at risk for blood clots. Outcomes were no better than for patients in the control group, however, which the researchers say may be because physicians, fearing that they would cause a major bleed, followed the tool’s recommendation to start blood-thinning therapy only 25% of the time. Yale medical school dean F. Perry Wilson, MD, MSCE observes that the VUMC study illustrates a key point: making accurate predictions is table stakes, and vendors shouldn’t launch an “accuracy arms race” since “even perfect prediction is useless if no one believes you or if they don’t change their behavior.”
Business
UAE-based M42 launches Med42, a clinical large language model that answers medical questions using synthesized knowledge. The company claims it outperforms ChatGPT 3.5. It’s free for non-commercial use and research and can be downloaded from Hugging Face.
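For readers who want to experiment, a minimal sketch of loading the model with the Hugging Face transformers library is below. The repository id, prompt, and generation settings are assumptions for illustration (check the Hugging Face listing for the actual id), and a model of this size needs substantial GPU memory or quantization to run locally.

```python
# Hypothetical sketch of loading Med42 from Hugging Face for local, non-commercial use.
# The model id and prompt below are assumptions, not details from the announcement.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "m42-health/med42-70b"  # assumed repository name; verify on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "What are first-line treatment options for newly diagnosed type 2 diabetes?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```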
Research
ChatGPT performs at least as well as primary care physicians in choosing treatment options for newly diagnosed depression, and unlike doctors, it does not exhibit gender or socioeconomic bias. ChatGPT nearly always recommends psychotherapy for mild cases versus the 4% of PCPs who do so. It also favors using antidepressants alone for severe cases, while doctors usually add anxiolytics or hypnotics.
A Klick Labs study finds that AI analysis of a 10-second recording of a person’s voice can determine if they have Type 2 diabetes with nearly 90% accuracy. The company says that the non-intrusive, accessible approach will allow screening large numbers of people for Type 2 diabetes and potentially other chronic conditions.
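Klick Labs’ actual model isn’t described in the article, so the sketch below is only an illustration of the general approach: summarize a short voice clip as acoustic features (MFCCs, pitch statistics) and score them with a generic binary classifier. The feature choices, the classifier, and the hypothetical X_train/y_train arrays are assumptions, not the company’s method.

```python
# Illustrative sketch only: turn a ~10-second voice recording into a fixed-length
# feature vector and score it with a generic classifier. This is NOT Klick Labs' model.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def acoustic_features(wav_path: str) -> np.ndarray:
    """Summarize a short voice clip as MFCC and pitch statistics."""
    y, sr = librosa.load(wav_path, sr=16000, duration=10.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # timbre-related features
    pitch = librosa.yin(y, fmin=50, fmax=300, sr=sr)        # fundamental frequency track
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [np.nanmean(pitch), np.nanstd(pitch)]])

# Training would require labeled recordings (hypothetical X_train, y_train arrays):
# clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# risk = clf.predict_proba(acoustic_features("sample.wav").reshape(1, -1))[0, 1]
```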
Researchers are surprised to find that ChatGPT does a poor job of reviewing urology residency application letters.
Other
Experts predict that AI will play an outsized healthcare role in Canada because of clinician shortages in rural areas, with initial use in creating clinician documentation, providing easier access to patient records, helping with staff scheduling, and reviewing digital images.
Healthcare AI experts list ways that AI can help in healthcare, which include providing patient-friendly explanations, performing back office functions, predicting next events in a patient’s journey, analyzing images, keeping patient data private, and training itself. On the downside, it can create disinformation that sounds plausible, introduce bias in agreeing with what the user says, provide a false sense of security, and cause professionals to question whether their knowledge or careers are obsolete.
Contacts
Mr. H, Lorre, Jenn, Dr. Jayne.
Interesting re: Vanderbilt’s use of AI in identifying pediatric patients at risk of blood clots. We had previously supported similar work at CHOP to identify pediatric patients who would be candidates for prophylactic treatment (https://pubmed.ncbi.nlm.nih.gov/28815363/). Interestingly, the AI performed well then (2017), as it does now – the issue seems to be uptake.
re: ChatGPT and depression – this is a study from Israel, so I don’t know what access to psychotherapy is like there, but you have to take context into account for PCP recommendations. If there is no access to psychotherapy, whether because of cost or a shortage of providers, as is often the case in the US, not recommending it is actually the right decision.