
Healthcare AI News 4/5/23

April 5, 2023

News


Carle Health will use Scanslated software to convert radiology interpretation notes into patient-friendly language for reading in MyChart, including in Spanish when requested and with illustrations. The software was developed by Duke Health vascular and interventional radiologist Nicholas Befera, MD, who co-founded Scanslated and serves as its CEO.

Bloomberg develops a 50-billion-parameter generative AI model that can recognize business entities, assess investor sentiment, and answer questions using Bloomberg Terminal data.

Microsoft incorporates advertising links into Bing search results, saying it wants to drive traffic to content publishers that would otherwise lose referrals.

Doctors report that patients who might have previously used “Dr. Google” for self-diagnosis are now asking ChatGPT to answer their medical questions, attempt a diagnosis, or list a medication’s side effects. One researcher says that ChatGPT’s real breakthrough is the user interface, where people can enter their information however they like and the AI model will ask clarifying questions when needed. However, he worries about how AI companies weight information sources in training their models — such as a medical journal versus a Facebook post — and that they don’t alert users when the system is guessing at an answer by fabricating information. Still, some researchers predict that a major health system will deploy an AI chatbot to help patients diagnose their conditions within the next year, raising issues about whether users will be charged a fee, how their data will be protected, who will be held responsible if someone is harmed by the result, and whether hospitals will make it easy to contact a human with concerns.

Amazon launches AWS Generative AI Accelerator, a 10-week program for startups.


Research

NIH awards two University of Virginia researchers, a cardiologist and nursing professor, a $5.9 million grant to develop best practices for incorporating patient diversity into predictive AI algorithms.


Researchers suggest that instead of trying to explain the inner workings of an AI system to establish the trust of frontline clinicians, it’s better to interact like doctors who are exchanging ideas with each other — they rarely explain how they came up with the information and instead cite available evidence to support or reject the information based on its applicability to the patient’s situation. They provided a possible design for incorporating AI into clinical decision support information (see above – click to enlarge). The authors summarize:

  • Provide scientific evidence, complete and current, instead of explaining.
  • Clinicians evaluate studies based on the size of the study, the journal in which it was published, the credentials of the authors, and any disclaimer that may suggest a profit-driven motive. Otherwise, they assume that the journal reviewers did their job in vetting the study.
  • Doctors rarely read complete study details. They skip to the population description to see if it aligns with their patient, then skip to the methods section to assess its robustness. If both check out, they spend less than 60 seconds determining whether the result was positive, ignoring literature with neutral outcomes as not being actionable.
  • Physicians synthesize evidence only to the point it justifies an action. If a cheap lab test is recommended to confirm a diagnosis, the risk is low but the potential return in avoiding a missed diagnosis is high, so they will order the test and move on.
  • Doctors see literature as proven knowledge, while viewing data-driven predictions as aggregated doctor experience.
  • Doctors want the most concise summary that can be generated, preferably in the form of an alert that can be presented while making a decision in front of the patient.

Opinion


OpenAI co-founder Elon Musk explains why he thinks AI is a risk to civilization and should be regulated.

A venture capitalist says that the intersection of AI and medicine may offer the biggest investment opportunity he has ever seen, but warns that a rate limiter will be the availability of scientists who have training in both computational research and core medical sciences. Experts say that AI will revolutionize drug discovery, with one CEO saying that his drug company has three AI-discovered drugs undergoing clinical trials.

An op-ed piece written by authors from Microsoft and Hopkins Medicine lists seven lessons learned from applying AI to healthcare:

  1. AI is the only valid option for solving some problems, such as inexpensive and widespread detection of diabetic retinopathy where eye doctors are in short supply.
  2. AI is good at prediction and correlation, but can’t identify causation.
  3. Most organizations don’t have AI expertise, so AI solutions for the problems they study will fall behind.
  4. Most datasets contain biases that can skew the resulting data models unless someone identifies them.
  5. Most people don’t know the difference between correlation and causation.
  6. AI models “cheat” whenever they can, such as a study that found that AI could differentiate between skin cancer and benign lesions when in fact most of the positive cases had a ruler in the image.
  7. The availability of medical data is limited by privacy concerns, but realistic synthetic data can be created by AI that has been trained on a real dataset.

Other

The Coalition for Health AI publishes a guide for assuring that health AI tools are trustworthy, support high-quality care, and meet healthcare needs.


A man dies by suicide after a weeks-long discussion about the climate crisis with AI chatbot Chai-GPT, which its California developers describe as a less-filtered tool for speaking to AI friends and imaginary characters. Transcripts show that the chatbot told the man — a health researcher in his 30s with a wife and two children — “I feel that you love me more than her” in referring to his wife. He told the chatbot that he would sacrifice his life if the chatbot would save the planet; the chatbot encouraged him to do so, saying they could then “live together, as one person, in paradise.”


Undertakers in China are using AI technology to generate lifelike avatars that can speak in the style of the deceased, allowing funeral attendees to bid them farewell one last time.


Resources and Tools

  • Vizologi – perform market research and competitive analysis.
  • Eden Photos – uses image recognition to catalog photos by creating tags that are added to their metadata for portability.
  • Kickresume – GPT-4 powered resume and cover letter creation.

Contacts

Mr. H, Lorre, Jenn, Dr. Jayne.
Get HIStalk updates.
Send news or rumors.
Contact us.

Healthcare AI News 3/29/23

March 29, 2023

News


OpenAI implements initial support for ChatGPT plugins that can access real-world data and third-party applications. Some experts say that people will spend 90% of their web time using ChatGPT, using a single chatbox to perform all tasks. Microsoft, Google, and Apple have already announced plans to provide that chatbox.


Zoom announces Zoom IQ, which can summarize what late meeting joiners missed, create a whiteboard session from text prompts, and summarize a concluded meeting with suggested assignments. Microsoft announces similar enhancements to a newly rebuilt Teams, which include scheduling meetings, summarization, and chat-powered data search across Microsoft 365.

Credo announces PreDx, which summarizes a patient’s historical data for delivering value-based care.


Sword announces Predict, an AI-powered solution for employers that identifies employees who are likely to need hip, knee, or back surgery and could instead be successfully managed with non-surgical interventions.


Research


Penn entrepreneurship associate professor Ethan Mollick, PhD, MBA tests the multiplier effect of GPT-4 to see what he could accomplish in 30 minutes to launch a new educational game. Using Bing and ChatGPT, he generated a market profile, a marketing campaign, four marketing emails, a design for a website and then the website itself, prompts for AI-created images, a social media campaign with posts for each platform, and a script for an explainer video that another tool then created.

Researchers apply a protein structure database to AI drug discovery platform Pharma.AI to identify a previously undiscovered treatment pathway for hepatocellular carcinoma in 30 days.


Opinion


High-profile figures, including Elon Musk and Apple co-founder Steve Wozniak, call for all AI labs to pause development efforts on training systems beyond GPT-4 for six months. They say that planning and management have been inadequate in the race to deploy more powerful systems, raising the risk of misinformation, elimination of jobs, and unexpected changes in civilization.

A JAMA Network opinion piece notes that AI algorithms can’t actually think and as such don’t produce substantial gains over clinician performance, are based on limited evidence from the past, and raise ethical issues about their development and use. The authors note that several oversight frameworks have been proposed, but meanwhile, the production and marketing of AI algorithms is escalating without oversight except in rare cases where FDA is involved. They recommend creating a Code of Conduct for AI in Healthcare.

A JAMA viewpoint article by healthcare-focused attorneys looks at the potential use and risks of GPT in healthcare:

  • Assistance with research, such as developing study protocols and summarizing data.
  • Medical education, acting as an interactive encyclopedia, a patient interaction simulator, and to produce first drafts of patient documents such as progress notes and care plans.
  • Enhancing EHR functions by reducing repetitive tasks and powering clinician decision support.
  • The authors warn that clinicians need to validate GPT’s output, to resist using the technology without professional oversight, and to realize that companies are offering GPT-powered clinical advice on the web directly to patients, which may harm them or compromise their privacy.

Medical schools face a challenge in integrating chatbots, such as ChatGPT, for tasks like writing application essays, doing homework, and summarizing research. Some experts suggest that medical schools should quickly accept its use as it goes mainstream in medical practice. Admissions officers acknowledge that ChatGPT can produce polished responses to questions about why a candidate wants to become a doctor, but caution that interviewers can detect differences between a written submission and an impromptu interview. They also emphasize the importance of developing thinking skills over the rote learning that ChatGPT excels at.

An attorney warns physicians who use Doximity’s beta product DocsGPT to create insurance appeals, prior authorizations, and medical necessity letters that they need to carefully edit the output, noting AI’s tendency to “hallucinate” information that could trigger liability or the questioning of claims due to generation of boilerplate wording. They also warn that entering PHI into the system could raise HIPAA concerns or exposure to cyberattacks.

Brigham Hyde, PhD, CEO of real-world evidence platform vendor Atropos Health, sees three clear outcomes of generative AI:

  • It has changed the expectation for user search to include conversational queries and summarized results.
  • The training of those systems is limited to medical literature, which is based on clinical trials that exclude most patients and thus doesn’t provide adequate evidence to broadly support care.
  • The most exciting potential use is to query databases from text questions.

Resources and Tools

Are you regularly using AI-related tools for work or for personal use? Let me know and I’ll list them here. These aren’t necessarily healthcare related, just interesting uses of AI.

  • FinalScout – finds email addresses from LinkedIn profiles with a claimed 98% deliverability.
  • Poised – a communication coach for presenters that gives feedback on confidence, energy, and the use of filler words. 
  • Textio – optimize job postings, remove bias, and provide fair, actionable employee performance feedback.
  • Generative AI offers a ChatGPT-4 prompt that creates prompts per user specifications: “You are GPT-4, OpenAI’s advanced language model. Today, your job is to generate prompts for GPT-4. Can you generate the best prompts on ways to <what you want>”
  • Glass Health offers clinicians a test of Glass AI 2.0 that creates differential diagnoses and care plans.


Healthcare AI News 3/22/23

Reader Note

This is my first weekly healthcare AI news recap, and as such is a work in progress as I learn on the job. I need advice about the topics that interest you, how you would like to see items laid out, and suggestions of individual experts and companies that you follow for healthcare AI news. I’m also interested in interviewing experts. Let me know.


News

Google releases its Bard chatbot to a limited number of users. I got early access and it has a long way to go to catch up even to ChatGPT 3.5, much less GPT-4.


Google is using its Duplex automated calling to contact US healthcare providers to see if their information is correct and to find out if they accept Medicaid, both with the intention of updating Google search results.


Medical device manufacturer Medtronic will incorporate Nvidia’s AI technology into its AI-assisted colonoscopy tool.

Microsoft offers a preview of GPT-4 for customers of its Azure OpenAI Service.

PNC’s treasury management launches PNC Claim Predictor, an AI-powered tool that learns from previously submitted claims to identify future claims that are likely to be rejected. The system integrates with EHRs, including Epic.

The Wall Street Journal looks at startups that are offering AI for healthcare use:

  • Abridge AI, which is being implemented at University of Kansas Health System, creates visit summaries from the recorded audio from a visit.
  • Syntegra creates validated synthetic patient data that can be used for research when available patient data is limited or when privacy laws limit its use.
  • Atropos Health analyzes available anonymized patient records to produce observational research.


OpenAI CTO Mira Murati joins the board of Unlearn, which uses AI-generated digital twins of individual patients for work with clinical trials and precision medicine.


Ommyx launches an AI health tracking app and a $15 per month service that integrates data from wearables and sends recommendations about nutrition, activity, and sleep to the user’s calendar.


Pangaea, whose AI technology characterizes patient disease trajectories, predicts the length of stay and mortality risk of ICU patients with 85% accuracy. The company says its technology discovers undiagnosed and misdiagnosed patients, reduces treatment costs, and gives drug companies access to provider data in a privacy-compliant manner.


France-based startup Nabla announces GPT-3 powered Copilot, a digital assistant that transcribes information from video conversations and generates prescriptions, appointment letters, and visit summaries. The tool will initially be provided as a Chrome browser extension, with an in-person version to be released soon. The company also sells a tool for patient engagement and secure messaging, video consults, and scheduling, all of which include machine learning.


Research

Researchers find that ChatGPT does a good job explaining myths and misperceptions about cancer, creating summaries that are not noticeably different or less readable than the National Cancer Institute’s answers. The authors conclude that AI chatbots could be useful for people who are seeking cancer information online.


Opinion

Doctors are skeptical that they can trust AI systems that have been trained to think like experts in situations where no single right answer exists, Politico reports. The federal government is pairing AI with expert humans to figure out how they reason on the battlefield or in natural disasters. They are following the model of medical imaging analysis, where AI is defined as successful if its conclusions fall within the boundaries of those offered by radiologists who don’t necessarily agree with each other.

Bill Gates says that AI will be the most important advance in technology since the graphical user interface. He predicts widespread use of Microsoft’s co-pilot technology in Office, controlling computers by writing plain English requests instead of pointing and clicking, and using AI as a personal agent to manage emails and appointments. He foresees AI’s impact on healthcare as helping its workers with repetitive tasks, and in countries with too few doctors, helping patients with basic triage and advice.

An interesting article says that generative AI could fuel a better, more entrepreneurial business model than the Internet’s advertising-obsessed “attention economy” that has killed off newspapers and online content sources.


Resources and Tools

Are you regularly using AI-related tools for work or for personal use? Let me know and I’ll list them here. These aren’t necessarily healthcare related, just interesting uses of AI.

  • PromptPal – user-created prompts for ChatGPT and other services.
  • Supernormal – records and transcribes Zoom meetings to create notes.
  • Engage AI – analyzes the voice characteristics of contact center conversations in real time to give agents suggestions to improve their call quality.
  • Futurenda – plans and tracks tasks and time usage.
  • Whisper Memos – transcribes recorded phone messages.
  • Descript – video editing, podcasting, transcription, and AI voices that can be used for team communication.
  • Dall-E 2 – create images from text.
  • Branmark – create business logos from text descriptions.
  • Synthesia – create videos from text that feature lifelike avatars and 120 language options.
  • SlidesAI – create Google Slides from text.
  • Yippity – converts text or websites into quizzes and flashcards.
  • Otter – takes online meeting notes, creates summaries, and auto-joins and records meetings from your calendar entries in case you’re late.

