
Readers Write: Five Emerging Imaging AI Workflows

July 1, 2019

Five Emerging Imaging AI Workflows
By Stephen Fiehler


Stephen Fiehler is founder and CEO of Interfierce of San Francisco, CA. 

Medical imaging is one area of medicine that could significantly benefit from the implementation of artificial intelligence (AI). Applications that interpret chest x-rays, detect stroke, and identify lung cancer are already available. Many AI solutions have garnered FDA approval for commercial or clinical use.

However, few if any have mastered a “best practice” workflow that seamlessly integrates the application’s output with the hospital’s other clinical applications (e.g., PACS, EHR, dictation system). How should the application’s output be delivered? Who should see it first? The answers depend on the nature of the algorithm (e.g., stroke detection, chest x-ray interpretation, pediatric bone age), but five workflows are emerging for imaging AI applications.

Advanced Visualization

Many imaging AI applications deliver their output to the interpreting radiologist within a separate application. The radiologist is commonly already working out of PACS, the dictation system, and the EHR, and the Advanced Visualization (or post-processing) workflow adds yet another application to that list. Sending the study to the AI application, launching it, and running the images through the algorithm can add significant time to the interpretation process. The Advanced Visualization workflow therefore sets a high bar for the value of the AI application’s output. If the application does not save the radiologist ample time or provide substantial value, this workflow is not viable.

Dictation System Integration

Some imaging AI applications are opting to integrate with the radiologist’s dictation system (e.g., Nuance PowerScribe 360). If an AI application has a discrete output that is independent of the images, it can send that value to the dictation system via Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR). DICOM is the standard for exchanging images and image-related data in healthcare, and DICOM SR carries discrete data associated with the imaging (e.g., left ventricle dimension in centimeters).

For example, an AI application that analyzes pediatric hand x-rays to determine the patient’s skeletal age can leverage DICOM SR to send its output to the radiologist’s report. The patient’s “Z-score” is conveniently embedded in the report as soon as the radiologist opens the study. She can then confirm the value or edit it before finalizing the result. Dictation system integration adds no time to the radiologist’s interpretation process.
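
To make the DICOM SR mechanism concrete, here is a minimal sketch using the open source pydicom library. It builds a bare-bones SR object carrying a single numeric value such as the bone age Z-score. The private code “BONE-AGE-Z” and coding scheme “99LOCAL” are placeholders of my own invention, not part of any vendor’s template; a production integration would follow a complete DICOM SR template and the dictation system vendor’s requirements.

```python
# Minimal sketch: a bare-bones DICOM SR carrying one numeric value.
# Assumptions: pydicom is installed; the concept code is an illustrative
# private code, not a standard or vendor-defined one.
from datetime import datetime
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import generate_uid, ExplicitVRLittleEndian

def build_bone_age_sr(study_uid: str, z_score: float) -> Dataset:
    """Build a minimal SR dataset that a dictation system could map into the report."""
    file_meta = FileMetaDataset()
    file_meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.88.22"  # Enhanced SR
    file_meta.MediaStorageSOPInstanceUID = generate_uid()
    file_meta.TransferSyntaxUID = ExplicitVRLittleEndian

    ds = Dataset()
    ds.file_meta = file_meta
    ds.SOPClassUID = file_meta.MediaStorageSOPClassUID
    ds.SOPInstanceUID = file_meta.MediaStorageSOPInstanceUID
    ds.StudyInstanceUID = study_uid          # same study as the hand x-ray
    ds.SeriesInstanceUID = generate_uid()
    ds.Modality = "SR"
    ds.ContentDate = datetime.now().strftime("%Y%m%d")
    ds.ContentTime = datetime.now().strftime("%H%M%S")

    # One NUM content item: the bone age Z-score the dictation system will pull in.
    item = Dataset()
    item.ValueType = "NUM"
    concept = Dataset()
    concept.CodeValue = "BONE-AGE-Z"         # illustrative private code
    concept.CodingSchemeDesignator = "99LOCAL"
    concept.CodeMeaning = "Bone age Z-score"
    item.ConceptNameCodeSequence = [concept]

    measured = Dataset()
    measured.NumericValue = str(z_score)
    item.MeasuredValueSequence = [measured]

    ds.ValueType = "CONTAINER"
    ds.ContentSequence = [item]
    return ds
```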

PACS Integration

Computer-aided detection (CAD) applications have been integrating with PACS for over a decade. CAD applications are designed to annotate images to improve the detection of disease, such as breast cancer, and to reduce false negative rates. These applications commonly integrate with PACS via DICOM secondary capture (SC), which appends annotated images to the study in PACS. Some AI applications use the same type of integration to send annotated images back to PACS to assist with the radiologist’s interpretation. DICOM SC requires the radiologist to navigate to the annotated images within the study, which can be cumbersome depending on the size of the study.
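
As a rough illustration of the DICOM SC mechanism, the sketch below wraps an AI application’s annotated image (assumed here to be an 8-bit grayscale NumPy array) in a Secondary Capture object that reuses the original Study Instance UID, so PACS files the new series alongside the source images. Again, this is a minimal pydicom example, not a production implementation.

```python
# Minimal sketch: wrap an annotated image as a DICOM Secondary Capture object.
# Assumptions: pydicom and numpy are installed; the AI output is a 2D uint8 array.
import numpy as np
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import generate_uid, ExplicitVRLittleEndian

def annotated_image_to_sc(pixels: np.ndarray, study_uid: str) -> Dataset:
    """Build a Secondary Capture instance in the same study as the original images."""
    file_meta = FileMetaDataset()
    file_meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture
    file_meta.MediaStorageSOPInstanceUID = generate_uid()
    file_meta.TransferSyntaxUID = ExplicitVRLittleEndian

    ds = Dataset()
    ds.file_meta = file_meta
    ds.SOPClassUID = file_meta.MediaStorageSOPClassUID
    ds.SOPInstanceUID = file_meta.MediaStorageSOPInstanceUID
    ds.StudyInstanceUID = study_uid          # same study, so PACS groups it with the original
    ds.SeriesInstanceUID = generate_uid()    # new series for the AI annotations
    ds.Modality = "OT"
    ds.SeriesDescription = "AI annotations"

    # Basic grayscale pixel module attributes for an 8-bit image.
    ds.SamplesPerPixel = 1
    ds.PhotometricInterpretation = "MONOCHROME2"
    ds.Rows, ds.Columns = pixels.shape
    ds.BitsAllocated = 8
    ds.BitsStored = 8
    ds.HighBit = 7
    ds.PixelRepresentation = 0
    ds.PixelData = pixels.astype(np.uint8).tobytes()
    return ds
```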

Worklist Prioritization

A popular type of AI integration is worklist prioritization. Many AI applications integrate with a reading worklist to prioritize studies that show signs of time-critical conditions, such as stroke, spinal fractures, or pulmonary embolism. Rather than producing a more complex output like annotated images or DICOM SR, worklist prioritization simply elevates the priority of the study or flags it as containing a particular abnormality. This helps radiologists identify time-critical studies more quickly in an effort to expedite patient care.
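
Worklist APIs are vendor-specific, so the sketch below is purely illustrative: the Study class, the “AI-CRITICAL” flag, and the sorting rule are hypothetical stand-ins for whatever the worklist product actually exposes. It simply shows the core idea of flagging a study when the AI reports a time-critical finding and moving flagged studies to the top of the list.

```python
# Illustrative sketch only: these names and rules are hypothetical, not a
# real worklist vendor's API.
from dataclasses import dataclass, field
from typing import List, Optional, Set

CRITICAL_FINDINGS = {"stroke", "pulmonary embolism", "spinal fracture"}

@dataclass
class Study:
    accession: str
    waiting_minutes: int               # how long the study has been on the worklist
    ai_finding: Optional[str] = None
    flags: Set[str] = field(default_factory=set)

def apply_ai_result(study: Study, finding: str) -> None:
    """Record the AI finding and flag the study if it is time-critical."""
    study.ai_finding = finding
    if finding in CRITICAL_FINDINGS:
        study.flags.add("AI-CRITICAL")

def prioritized(worklist: List[Study]) -> List[Study]:
    """Flagged studies first; within each group, the longest-waiting study first."""
    return sorted(
        worklist,
        key=lambda s: ("AI-CRITICAL" not in s.flags, -s.waiting_minutes),
    )
```

In a real deployment the flag would also surface visually in the worklist, so the radiologist knows why a study jumped the queue.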

EHR Integration

To my knowledge, no imaging AI applications are sending results directly to the EHR. Yet direct-to-EHR may become the best practice workflow in the future for mature imaging AI applications.

Sending the output of the AI application directly to the patient’s chart in the EHR has both advantages and risks. The information would be immediately visible to other care team members who have the security permissions to view preliminary results. The report should therefore adequately warn the viewing user that “THIS IS A PRELIMINARY RESULT” that has not yet been reviewed by a radiologist.

Careful consideration and planning should take place before implementing direct-to-EHR integration, but as AI applications mature in competency, it will become more common. Many hospitals opt to send an EKG machine’s automated interpretation directly to the EHR today. The result is clearly labeled “preliminary” and the inpatient or emergency room providers know it has not been confirmed by a cardiologist. However, the immediate availability of an imperfect result is valuable. I believe many imaging AI applications will eventually send their output directly to the EHR.
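
If direct-to-EHR delivery does become common, one plausible path is a FHIR DiagnosticReport with its status set to “preliminary,” mirroring the EKG precedent above. The sketch below assumes the EHR exposes a FHIR R4 endpoint; the base URL, patient ID, LOINC code, and omitted authentication are placeholders, not a specific vendor’s API.

```python
# Minimal sketch: post an AI finding to a hypothetical EHR FHIR endpoint as a
# clearly preliminary DiagnosticReport. Assumptions: the requests library is
# installed; the URL and patient ID are placeholders; auth is omitted.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical endpoint

def post_preliminary_report(patient_id: str, conclusion: str) -> requests.Response:
    """Send the AI output to the EHR, labeled as preliminary in status and text."""
    report = {
        "resourceType": "DiagnosticReport",
        "status": "preliminary",  # signals the result has not been finalized
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": "18748-4",
                "display": "Diagnostic imaging study",
            }]
        },
        "subject": {"reference": f"Patient/{patient_id}"},
        "conclusion": "PRELIMINARY AI RESULT - NOT REVIEWED BY A RADIOLOGIST. " + conclusion,
    }
    return requests.post(f"{FHIR_BASE}/DiagnosticReport", json=report, timeout=10)
```

Labeling the result in both the structured status field and the human-readable conclusion mirrors the EKG workflow: downstream users see the warning even if their viewer ignores the status code.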


