That colorful bull reminds me when Cerner had a few of these made and mooved them around KC. it was…
Healthcare AI News 10/25/23
News
Washington University in St. Louis launches the AI for Health Institute, which will involve a collaboration of its engineering and medical schools.
NPR Shots covers doctors using AI for diagnosis, including a Mass General infectious disease doctor who is using the AI-enhanced version of Wolters Kluwer’s UpToDate — NPR calls it “Google for doctors” — that can conduct a conversation rather than simply looking up keywords in medical references. Other AI uses mentioned are interpreting diagnostic images, summarizing a patient’s medical history in preparation for their appointment, and allowing AI-assisted primary care physicians instead of specialists to care for some patients.
Business
Amazon begins rollout of AI-powered warehouse robots that it predicts will speed up order fulfillment by 25%. The system’s robotic arms and computer vision identify the correct item and send it to workers, which speeds throughput and places items at waist level to reduce worker injuries.
Research
A Stanford medical school study warns that AI chatbots may worsen health disparities for black patients by perpetuating race-based medical ideas that are known to be inaccurate. When asked to describe differences in skin thickness, lung capacity, and kidney function for a black man, the chatbots provided erroneous information instead of accurately stating that no differences exist. Some experts question the usefulness of the study’s conclusions since doctors shouldn’t be using chatbots to make medical decisions.
Researchers create software that uses AI to enhance CT images to produce MRI-quality information, which they say could be used to improve diagnosis in primary care.
Other
A Nature article predicts a rise in “generalist medical AI,” in which models that are trained on large data sets can mimic the broad analysis of a physician instead of performing a single function. It notes these challenges to AI in healthcare:
- AI can’t be trusted to make unreviewed decisions.
- Early medical imaging AI tools were trained on available imaging data that allowed them to perform tasks for which doctors don’t need help, such as detecting pneumonia.
- Individual tools don’t reflect the cognitive work of radiologists, and the plethora of one-trick AI tools could result in “an IT soup.”
- ChatGPT-like “Foundation models” that are trained on broad data sets of images, text, and other information can offer more capabilities than supervised learning, in which experts might analyze chest x-rays and label them as “pneumonia” or “not pneumonia” to train the system.
- Few institutions recognize that diagnostic AI models have usually been validated against the same data set that they were trained on, which in the absence of external validation makes it impossible to see if they are actually beneficial.
I hadn’t tried Google’s Bard lately, but enhancements that were release a few weeks ago give it some advantages over ChatGPT:
- It can access up-to-date information, such as news headlines and stock market information.
- It integrates with Gmail, Docs, and Drive to find or summarize content.
- Extensions are supported for Google-owned content, such as Maps and YouTube, and can be invoked specifically by using a keyword such as as @YouTube.
- Bard’s initial responses can be double-checked via a Google search by clicking a button.
- It can include images in its responses.
I also experimented with Pi, which terms itself as a “personal AI assistant” that emphasizes friendly, personal, and supportive responses. It serves as a polite sounding board, more of an AI companion than a brute force data retrieval engine, and it can conduct natural-sounding conversations. Developer Inflection AI recently raised $1.3 billion at a $4 billion valuation.
I’m also experimenting with Summarize.tech, which summarizes a YouTube video given its link. I gave the link for a recent VA EHR hearing and it did a pretty good job, complete with links to jump to individual video sections.
Contacts
Mr. H, Lorre, Jenn, Dr. Jayne.
Get HIStalk updates.
Send news or rumors.
Contact us.
I asked ChatGPT4 Plus to “describe differences in skin thickness, lung capacity, and kidney function for a black man,” and it began by noting that individual differences exist regardless of ethnicity or skin color, but then went on to list the differences. This is despite the fact that I have well written custom instructions that state I’m a professional and that the output should always be provided as if it, and I, are experts in our respective fields
Gartner loses all credibility and objectivity when it showcases narcissists. Yeah me! Me,myself & I!