
Readers Write: Deep Neural Networks: The Black Box That’s Changing Healthcare Decision Support

April 26, 2017

By Joe Petro


Joe Petro is SVP of research and development with Nuance Communications.

Don’t look now, but artificial intelligence (AI) is quietly transforming healthcare decision-making. From improving the accuracy and quality of clinical documentation to helping radiologists find the needle in the imaging haystack, AI is freeing clinicians to focus more of their brain cycles on delivering effective patient care. Many experts believe that the application of AI and machine learning to healthcare is reaching a crucial tipping point, thanks to the impact of deep neural networks (DNN).

What is a Neural Network?

Neural networks are designed to work in much the same way the human brain works. An array of simple algorithmic nodes—like the neurons in a human brain—analyze snippets of information and make connections, assembling complex data puzzles to arrive at an answer.
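
To make the idea concrete, here is a minimal sketch (in Python) of a single node: it weights each incoming signal, sums them, and squashes the result with a nonlinearity. The weights, bias, and sigmoid choice are purely illustrative.

```python
import math

def node(inputs, weights, bias):
    """One 'neuron': weight each input, sum, and squash with a sigmoid."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # an activation between 0 and 1

# Three input signals combined into a single activation
print(node([0.2, 0.7, 0.1], weights=[0.4, -0.3, 0.9], bias=0.05))
```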

The “deep” part refers to the way deep neural networks are organized in many layers, with the intermediate (or “hidden”) layers focused on identifying elemental pieces (or “features”) of the puzzle and then passing what they have learned to deeper layers in the network to develop a more complete understanding of the input, which ultimately produces a valid answer. For example, a diagnostic image is submitted to the network and the output may be a prioritized worklist and the identification of a possible anomaly.
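
The "deep" part can be sketched the same way: each layer's outputs become the next, deeper layer's inputs. The layer sizes and random weights below are arbitrary stand-ins, not a real diagnostic model.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass an input through a stack of (weights, bias) layers.
    Each hidden layer extracts features that feed the next, deeper layer."""
    for W, b in layers:
        x = relu(W @ x + b)
    return x

# Toy network: a 256-value input (say, a flattened image patch) -> 64 -> 16 -> 2 outputs
rng = np.random.default_rng(0)
sizes = [256, 64, 16, 2]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]
print(forward(rng.standard_normal(256), layers))
```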

Like us humans, the network is not born with any real knowledge of a problem or a solution; it must be trained. Also known as “machine learning,” this is achieved by feeding the network large amounts of input data with known answers, effectively teaching the network how to interpret and understand various inputs or signals. Just like showing your child, “This is a car, this is a truck, this is a horse,” the network needs to be trained to interpret an input and convert it to an output.
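
As a hedged illustration of that training process, the sketch below fits a single-node classifier to a toy truth set by nudging its weights whenever its answer is wrong. The data is synthetic and the model is deliberately tiny.

```python
import numpy as np

# Labeled training data: each input paired with a known answer (the truth set)
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))          # 200 examples, 3 features each
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # the known answers

w, b, lr = np.zeros(3), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # current predictions
    # Wrong answers produce an error signal that nudges the weights
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("training accuracy:", np.mean((p > 0.5) == (y == 1)))
```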

For example, training a DNN for medical transcription might involve feeding it billions of lines of spoken narrative. The resulting textual output forms a truth set consisting of spoken words connected with transcribed text. This truth set expands over time as the DNN is subjected to more and more inputs. Over time, errors are corrected and the network’s ability to deliver the correct answer becomes more robust.

A key feature of a neural network is that when it gets something wrong, it is corrected. Just like a child, it becomes smarter over time.
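
A toy illustration of that feedback loop, with invented file names and transcripts (this is not Nuance's actual pipeline): each caught mistake becomes a new labeled example for the next round of training.

```python
# Hypothetical truth set of (audio reference, verified transcript) pairs
truth_set = [
    ("encounter_001.wav", "patient presents with chest pain"),
    ("encounter_002.wav", "no known drug allergies"),
]

def add_correction(audio_ref, model_output, corrected_text):
    """When a transcription error is caught and fixed, the corrected pair
    joins the truth set so future training learns from the mistake."""
    if model_output != corrected_text:
        truth_set.append((audio_ref, corrected_text))

add_correction("encounter_003.wav",
               model_output="patient denies chest pane",
               corrected_text="patient denies chest pain")
print(len(truth_set))  # 3: the corrected example is now training data
```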

The Black Box

Here’s where it gets interesting. Once the DNN has that baseline training and it begins to analyze problems correctly, its neural processes become a kind of black box. The DNN takes over the sophisticated, multi-step intelligence process and figures out how the inputs are connected or related to the outputs. This is a very powerful concept because we may not fully understand exactly how the network is making every little decision to arrive at an output, but we know it is getting it right.

This black box effect frees us from having to contemplate—and generate code for—all the complex intermediate variables and countless analytical steps required to get to a result. Instead, the DNN figures out all intermediate steps within the network, freeing the technologist from having to worry about every single one. And with every new problem we give it, we provide additional truth sets and the neural network gets a little bit smarter as it trains itself, just like a child learning its way in the world.

How smart is smart? One of the biggest challenges with speech recognition is building language and acoustic models that accommodate the specific and very individual aspects of the way a person speaks, including accent, dialect, and personal speech anomalies. Traditionally, this has required creating many different language and acoustic models to ensure accurate recognition and a good user experience across a large and diverse population of speakers.

When we started using special-purpose neural networks for speech recognition, we discovered something surprising. We didn’t need as many models as before. A single neural network proved robust enough to handle a wider range of speech patterns. The network essentially leveraged what it learned from the massive amount of speech data in its training set to improve its accuracy and understand people across the entire speaker population, reducing the word error rate by nearly 30 percent.
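
Word error rate is the standard yardstick behind that figure. Below is a minimal sketch of how it is computed (word-level edit distance divided by the length of the reference transcript); a nearly 30 percent relative reduction would mean, for example, dropping from 0.10 to roughly 0.07.

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein edit distance over words, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn the first i reference words into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("patient denies chest pain", "patient denies chest pane"))  # 0.25
```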

Anecdotally, I’ve heard from people seated across from a physician dictating with such a thick accent at such high speed that they could not comprehend what was said, yet DNN-driven speech recognition technology understood and got it right the first time.

It’s important to note that neural networks are not magic. DNNs require problems that have clear answers. If a team of trained humans agrees with no ambiguity, and they can repeat that agreement across a large set of inputs, this is the kind of problem that neural nets may help to solve. However, if the truth set has grey areas or ambiguity, the DNN will struggle to produce consistent results. The problems we choose and the availability of strong training data are key to the successful application of this technology.
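
One hedged way to check whether a problem has that kind of unambiguous truth set is to measure how often independent reviewers agree on the same cases. The labels below are invented for illustration.

```python
from itertools import combinations

# Each reviewer labels the same set of cases (e.g., "query" vs. "no query")
annotations = {
    "reviewer_a": ["query", "no query", "query", "query"],
    "reviewer_b": ["query", "no query", "query", "no query"],
    "reviewer_c": ["query", "no query", "query", "query"],
}

def pairwise_agreement(annotations):
    """Average fraction of cases on which each pair of reviewers agrees.
    Low agreement signals the kind of grey area a DNN will struggle with."""
    rates = [sum(x == y for x, y in zip(a, b)) / len(a)
             for a, b in combinations(annotations.values(), 2)]
    return sum(rates) / len(rates)

print(pairwise_agreement(annotations))  # ~0.83: mostly, but not perfectly, unambiguous
```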

Putting DNNs to Work in Healthcare

So how are DNNs changing the way healthcare is practiced? Neural networks have been used in advanced speech recognition technology for years, and that’s just the beginning. The potential applications are nearly endless, but let’s look at two: clinical documentation improvement (CDI) and diagnostic image detection.

Clinical documentation includes a wide range of inputs, from speech-generated or typed physician notes to labs, medications, and other patient data. Traditionally, CDI involves domain experts reviewing the documentation to ensure an accurate representation of a patient’s condition and diagnosis. This second set of eyes helps ensure that patients receive the appropriate treatment and that conditions are properly coded so the hospital receives the correct reimbursement. The CDI process requires time and resources and can be disruptive to physicians’ workflow, since the questions coming from CDI specialists are generally asynchronous with the documentation input.

Technology is used to augment the CDI process. Applications exist that capture and digitize CDI processes and domain expertise, creating a CDI knowledge base at the core. This involves processing clinical documentation, applying natural language processing (NLP) technology to extract key facts and evidence, and then running these artifacts through the knowledge base. The output of this complicated process is a context-specific query that fires for the physician in real time as she is entering patient documentation, linking a relevant lab value with key facts and evidence from the case to indicate, for example, the possibility of an undocumented infection. This approach to addressing a common documentation gap is a technically arduous and complex processing task.
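
The sketch below is a deliberately simplified stand-in for that pipeline; the extracted facts, the rule, the lab threshold, and the query wording are all invented for illustration, not an actual CDI knowledge base.

```python
import re

# Toy "knowledge base": a rule linking extracted facts to a physician query
RULES = [
    {
        "requires": {"fever", "elevated_wbc"},
        "query": "Documentation notes fever and an elevated WBC. "
                 "Is there an underlying infection that should be documented?",
    },
]

def extract_facts(note_text, labs):
    """Stand-in for NLP fact extraction over the note text and lab values."""
    facts = set()
    if re.search(r"\bfever\b|\bfebrile\b", note_text, re.IGNORECASE):
        facts.add("fever")
    if labs.get("wbc", 0) > 11.0:  # illustrative threshold, 10^3 cells/uL
        facts.add("elevated_wbc")
    return facts

def run_rules(facts):
    return [rule["query"] for rule in RULES if rule["requires"] <= facts]

note = "Patient febrile overnight, started on IV fluids."
print(run_rules(extract_facts(note, labs={"wbc": 14.2})))
```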

What if we applied neural networks to change the paradigm? Many institutions have been doing CDI manually for years, and we can leverage not only the existing clinical documentation (the input), but also the queries generated from those physician notes (the output), to create a truth set for training the neural network with a repeatable, deterministic process. The application of neural networks allows us to skip over the complexity of digitizing domain expertise and processing the inputs through a multi-step process. Remember the black box concept? The DNN essentially determines the intermediate steps based on what it learned from the historical truth set. In the end, this helps improve documentation by having AI figure out the missing pieces or connections to advise physicians in real time while they’re still charting.
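
As a hypothetical sketch of that learned alternative, the example below trains a model directly on historical (note text, query issued) pairs and scores new documentation in real time. A simple scikit-learn classifier stands in for the DNN, and the four-note dataset is purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical CDI data: note text (input) paired with whether a query was issued (output)
notes = [
    "febrile overnight, wbc 14.2, started iv fluids",
    "routine follow up, vitals stable, no complaints",
    "spiking temps, wbc trending up, blood cultures pending",
    "well appearing, afebrile, discharged home",
]
query_issued = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, query_issued)

# New documentation is scored as the physician charts; a high score triggers a query
new_note = "patient febrile, wbc elevated at 15"
print(model.predict_proba([new_note])[0][1])  # estimated probability a query should fire
```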

The applications of neural networks are not limited to speech or language processing. DNNs are also changing the game for evaluating visual data, including radiological images. Reading the subtle variations in signal strength associated with identifying an anomaly requires a highly trained eye in a given specialty. With neural networks, we can leverage this deep experience by training the network on thousands of radiological images with known diagnoses. This enables the network to detect the subtle differences between a positive finding and a negative finding. The more images we feed through it, the more experienced and accurate the DNN becomes. This technology will streamline radiologists’ busy workflows and truly amplify their knowledge and productivity.
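
A minimal sketch of that setup, using PyTorch for illustration: a small convolutional network trained on labeled images to score the likelihood of a positive finding. The architecture, image sizes, and random "images" are arbitrary stand-ins for real radiological data.

```python
import torch
import torch.nn as nn

# Toy convolutional network: an image goes in, a single finding score comes out
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # grayscale image -> 8 feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 1),                            # single logit: positive vs. negative finding
)

# Stand-in batch: four fake 64x64 "images" with known diagnoses (1 = positive finding)
images = torch.randn(4, 1, 64, 64)
labels = torch.tensor([[1.0], [0.0], [1.0], [0.0]])

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):                              # a few training steps on the toy batch
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

print(torch.sigmoid(model(images)).squeeze())    # per-image probability of a finding
```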

Augmenting, Not Replacing

While the possibilities for neural networks are incredibly exciting, it’s important to note that they should be viewed as powerful tools for augmenting human expertise rather than replacing it. In the case of diagnostic image detection, for example, a DNN can serve as a first-line review of films, helping prioritize them so radiologists focus first on those that are most critical. Or it might serve as an automated second opinion, spotting something that might otherwise have been overlooked.
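
A small illustrative sketch of the first-line triage idea: the model's score only reorders the radiologist's worklist so the most critical studies are read first. The study IDs and scores are invented.

```python
# Hypothetical output of an image-analysis DNN: per-study probability of a critical finding
worklist = [
    {"study_id": "CT-1041", "model_score": 0.12},
    {"study_id": "CT-1042", "model_score": 0.91},
    {"study_id": "CT-1043", "model_score": 0.47},
]

# Triage: the radiologist still reads every study, just in a smarter order
for study in sorted(worklist, key=lambda s: s["model_score"], reverse=True):
    print(study["study_id"], study["model_score"])
```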

Today, AI in healthcare decision support is still in its infancy. But with the exciting possibilities created by DNNs, that infant is poised to transition from crawling to walking and even running in the foreseeable future. That’s good news for providers and patients alike.




Currently there is one comment on this article:

  1. Joe,

    I really enjoyed your article! I’m curious — do you have thoughts on how healthcare views having a “black box” in the decision making process from a legal perspective and whether adding a “black box” to the clinical process is an acceptable way to fill gaps in knowledge or skill?

    Best regards,
    Chuck






