Healthcare AI News 5/24/23
Microsoft will integrate ChatGPT into Windows 11, where it will run in its own Copilot window as a personal assistant to perform Windows commands and summarize documents that are dragged into it. The user rollout will start in June.
Tell, whose app allows users to seek advice from medical experts, integrates ChatGPT to translate medical jargon into accessible language.
OpenAI says that AI systems will exceed expert level in most domains within 10 years and recommends steps to mitigate their risks:
- Coordinate development efforts across countries and hold companies to a high standard of responsibility.
- Create an organization similar to the International Atomic Energy Agency to provide oversight and inspection of AI efforts that exceed a specific level of capability or resource requirements.
- Develop technical capabilities to make superintelligence safe.
OpenAI launches a ChatGPT app for the iPhone.
In Pakistan, the government of Punjab launches a two-hospital pilot of AI-assisted diagnosis.
Google launches the Google for Startups Growth Academy: AI for Health program for companies based in Europe, the Middle East, and Africa. Startups from seed to Series A will be offered a three-month virtual program of tailored workshops, collaboration, and mentorship.
Alicja AI offers a $500 per month enterprise clinical documentation tool that integrates with EHRs.
ChatGPT has passed several medical exams, but researchers find that it falls just short of passing the American College of Gastroenterology Self-Assessment Tests.
A University of Arizona Health Sciences-led study finds that participants are almost evenly split in preferring a human doctor versus AI for diagnosis and treatment. The authors recommend further research about how AI can be incorporated into the work of physicians and the decision-making process of patients.
Business Insider profiles ED physician and two-company VP of innovation Joshua Tamayo-Sarver, MD, PhD, who says that it “probably should be embarrassing” that he sometimes uses ChatGPT to explain medical issues in patient-friendly terms. He concludes that ChatGPT is “the most brilliant, talented, often drunk intern you could imagine” that is great at explaining concepts but not good at diagnosis or other tasks that require clinical reasoning.
Kaiser Permanente ED doctor and technologist Graham Walker, MD, pens an excellent piece on how he views AI as a physician:
- AI can pass a medical school exam, which involves basic multiple choice questions, but that capability has little to do with interacting with patients to determine their multiple issues and their viewpoints about options.
- Doctors know how to successfully address a patient problem up to 95% of the time due to their specialization, residency training, and repeated exposure to the same common issues, and therefore would see no value in asking a “medical bot” for recommendations.
- Where AI could help is to differentiate among possible problems that exhibit similar symptoms.
- AI might offer a convincingly objective second opinion to a patient who is told, for example, that they don’t need antibiotics for a viral infection.
- He says he would “virtually hug and kiss a digital agent” that could generate discharge instructions, describe the logic behind the chosen medical plan, and answer questions that patients are likely to have.
- AI could help identify and correct confirmation bias, where the doctor needs a fresh perspective to see that the evidence might not support the suspected diagnosis.
- AI could help steer an ED patient to local sources of help that might be better than the ED.
- AI could help doctors and patients understand why lab tests may not be indicated and how to react to positive or negative results.
Mr. H, Lorre, Jenn, Dr. Jayne.