Curbside Consult with Dr. Jayne 12/17/18
I recently had an invitation to attend a webinar on artificial intelligence in medical imaging. There was a recent article on the same topic that’s still sitting in my “to skim” pile, so I thought it might be good to go ahead and take a peek.
I have to read diagnostic images as part of my day job. It’s one of the more challenging parts of my practice, primarily because I didn’t do it for a decade and am out of practice. Most of the images we encounter are x-rays we’ve ordered on our own patients, which seem easier to read because we have the whole clinical picture and know what we’re looking for. For quality purposes, we also over-read studies ordered by other physicians at different locations, which can be challenging because you don’t always have the whole clinical picture.
The most challenging images, however, are the CT scans. We’re not doing the primary readings on those, but due to some quality issues with our virtual radiologists, we’ve been asked to review all of our images. Given that my formal radiology training was a two-week rotation more than two decades ago, I’ve been seeking out educational resources to help improve my skills.
Still, each time I come across an image with questionable findings — whether it’s a CT or a regular x-ray — I can’t help but think that having some computerized support would be beneficial. Most of the articles I’ve seen on the topic are directed at incorporating AI into radiology workflows; I haven’t found much research looking specifically at AI within primary care radiology workflows.
In getting AI technology approved, studies look at whether the technology can identify the correct findings at least as well as radiologists, who are usually residency trained and board certified. I’m sure academic medical center radiologists are overrepresented in those studies, and I would suspect that outcomes differ between institutions where radiology is highly specialized and community hospitals, where radiologists may be more generalist. Reading accuracy may differ yet again when you throw emergency physicians, internists, pediatricians, and family physicians into the mix as they read films in their offices and various outpatient settings.
Several of the potential solutions being evaluated in radiology involve helping prioritize the radiologist’s worklist. Some algorithms analyze screening tests where the majority of studies are negative and flag those images where an abnormality may be present. This is being done for studies like mammograms, where imaging technology is moving from 2D to 3D, creating additional image volume and requiring more time to read each study. The goal is to surface the highest-risk studies so they are addressed quickly and carefully. Other solutions target areas where an abnormal study poses high risk, such as post-trauma head CT scans.
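For readers curious what worklist prioritization looks like in practice, here is a minimal sketch in Python. Everything in it is hypothetical (the Study record, the scores, and the assumption that an upstream model has already assigned each study an abnormality probability); it is not any vendor’s actual product, just the core mechanic of re-sorting the queue so the likeliest-abnormal studies reach a reader first.

```python
from dataclasses import dataclass

@dataclass
class Study:
    accession: str            # study identifier (hypothetical)
    modality: str             # e.g., "screening mammogram"
    abnormality_score: float  # hypothetical model output in [0, 1]

def prioritize(worklist: list[Study]) -> list[Study]:
    # Re-sort the reading queue so studies the model flags as most
    # likely abnormal surface first; probable negatives drift down.
    return sorted(worklist, key=lambda s: s.abnormality_score, reverse=True)

if __name__ == "__main__":
    queue = [
        Study("A100", "screening mammogram", 0.03),
        Study("A101", "post-trauma head CT", 0.91),
        Study("A102", "screening mammogram", 0.42),
    ]
    for s in prioritize(queue):
        print(f"{s.accession} ({s.modality}): {s.abnormality_score:.2f}")
```

The sorting itself is trivial; the hard part, and the reason these tools go through approval studies, is producing an abnormality score trustworthy enough to re-order anyone’s queue.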
Even though they’re not studying readings by primary care providers, there’s some exciting work being done with chest x-rays. One effort used 1.2 million images to build an algorithm intended to assist in “low-resource settings” without radiologists, which would certainly apply to my practice. Once the system was trained to identify specific findings — such as heart enlargement, calcification, presence of fluid, and opacity — it was tested against a panel of radiologists reviewing a set of 2,000 x-rays. The system reliably identified the findings roughly 90 percent of the time. I wonder how it would score against non-radiologists looking at the same images.
There are particular types of x-rays I still struggle with, simply because of the body part involved. Rib x-rays are a good example: the ribs sit on top of dense structures (the heart, the spine, and major blood vessels), and they curve and angle, which causes overlap when you’re trying to figure out what you’re looking at. They’re also tricky in larger patients, who have more tissue for the radiation to penetrate.
I had a patient with some trauma who came in sounding like he had a broken rib. Normally, I’d prefer to order a CT scan because it gives you much better pictures of ribs without overlap. However, I was working at one of our outlying locations that doesn’t have CT, so I went with the plain film. There were indeed some rib fractures. I identified what I thought were two separate issues, but my partner doing the over-read didn’t agree — she thought there was only one. Regardless, the one looked strange enough that I felt a CT was indicated to fully define what was going on and transferred him for the study.
Within 20 minutes we had a radiologist on the phone telling us he had three fractures and also a collapsed lung, which neither of the initial reading physicians picked up on the x-ray. In hindsight you can see it, but it’s a really subtle finding and the border of the lung overlaps with the edge of a rib, right at the top of the chest where there’s a lot going on in the film. It’s likely that both of us were focused on the indication of “rule out rib fracture” and even though we did assess for lung issues, we didn’t see it.
That’s the problem with human brains and how we process information. We’re constantly re-prioritizing what we’re working on, and rapid task switching takes a toll when we’re juggling multiple responsibilities (I had eight assigned patients I was covering at the time I was looking at the films). As a physician, you feel terrible when something like this happens, but it does happen.
I’m grateful that the only issue here was a brief delay in diagnosis. The patient’s condition had not deteriorated in the time it took to get the CT scan, and he had normal vital signs and oxygenation the entire time we were evaluating him. The biggest challenge I had was finding a hospital to accept his transfer, since his preferred hospital suggested that I send him elsewhere because “our folks don’t like to take care of that.” Not exactly a ringing endorsement, but the closest Level 1 Trauma Center was more than happy to accept him.
I look forward to the day when I have some AI helping me out in the trenches. Hopefully we’ll get to that point before it’s time for me to retire.
What do you think of AI in diagnostic imaging? Leave a comment or email me.