Diana Nole, MBA, is EVP/GM of the healthcare division of Nuance Communications of Burlington, MA.
Tell me about yourself and the company.
I run the healthcare division for Nuance. I joined the company in May, but I have known Nuance for about 15 years, since right around the time I started my work in healthcare, always being around technology and companies that were transforming their portfolios. This is a great opportunity. Nuance is well regarded and respected by customers. They have a large installed base, wonderful partnerships with everybody in the ecosystem, such as the EHR vendors and Microsoft, and a lot of great growth opportunities. While I wasn't looking for a move, it was an intriguing opportunity and time to come to the company.
Where would you place ambient clinical intelligence in your personal version of the hype cycle?
We are in the earlier stages of how it will be used within healthcare. It is focused right now on the particular areas of physician burnout and patient experience.
The documentation burden that is placed on physicians is causing them to feel overwhelmed. They are not able to produce the type of experience they want with patients, and patients are also feeling disengaged. This is a solution that is going to evolve, from not just fixing or supporting better clinical documentation to exposing opportunities that we haven't appreciated or realized.
For example, wouldn't it be helpful to have the complete diarized record of the conversation between patient and physician for other people who are supporting the patient in their treatment plan, perhaps family members? My specific example is that my aging parents go to the doctor. It's not always clear to them, once they get back home, what the doctor has asked them to do. As a family member, wouldn't that be great?
We are in the very early stages of seeing the use cases for this. The technology needs to continue to mature when you think about ambient and conversational AI versus more structured use of voice recognition.
Does a model exist in other industries that healthcare will follow, where software extracts discrete data elements from a conversation between a professional and their client?
Nuance used to be directly involved with conversational AI that took place in an automobile. It had to distinguish among the conversations that were going on in a car and determine what should be done with that communication. We have leveraged that a lot within our own organization. We are starting to see other interesting use cases. We also participate in law enforcement, where we can capture the conversation that's going on and understand where that might be applicable.
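To make "extracting discrete data elements from a conversation" concrete, here is a minimal sketch using the open-source spaCy library and its general-purpose English model; this is an illustrative assumption, not Nuance's technology, and a clinical deployment would use domain-tuned models rather than generic entity types.

```python
# Minimal sketch: pulling discrete, labeled data elements out of
# conversational text with the open-source spaCy library.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")  # general-purpose model, not clinical
doc = nlp("I fell this weekend while skateboarding and broke my ankle.")

# Each recognized span becomes a discrete element with a type label
# (e.g., "this weekend" -> DATE) that downstream software can store.
for ent in doc.ents:
    print(ent.text, ent.label_)
```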
How do you see diarized speech, basically the full transcription of the patient encounter, being used?
We are intrigued to see how that area evolves. As I mentioned, there's definitely a use case where the patient may want to be able to provide that to other interested people. There has always been a bit of a worry about whether that will open up even more concern about what is said and whether that will temper the conversation and constrict it. There are elements of compliance and concerns about what gets said.
The concern is not necessarily about the direct clinical elements of the conversation, but more about the parts that are not directly related to the medical outcome for the patient or the treatment planning. There will be people who will be concerned that, "If I didn't say this, will it come back to haunt me?" But I believe that we are at a point where the benefits can outweigh those risks.
The industry at large is being more open-minded. In our first use cases since the product became commercially available 10 months ago, there is definitely an appreciation, even by physicians, that this is beneficial for them as well. They can't always remember everything that is communicated, so if they aren't transcribing or documenting during the actual patient visit, could they have missed something? It is beneficial even for their own purposes to remind them of what was discussed and said. Those users have said that it is helping them to improve the quality of the documentation that's provided.
We are going to see how it evolves, but I'm definitely pleased that people on both sides, patients and physicians, seem to be open-minded about the benefits of the full diarization of the conversation.
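To show what "diarization" means in practice, here is a minimal sketch using the open-source pyannote.audio library rather than Nuance's proprietary pipeline; the model identifier and the file name encounter.wav are assumptions for illustration, and recent library versions also require an access token.

```python
# Minimal sketch: attributing each segment of a recorded visit to a
# speaker with the open-source pyannote.audio library.
from pyannote.audio import Pipeline

# Load a pretrained diarization pipeline (name varies by version).
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization")

# Run it on a recording of the encounter (hypothetical file name).
diarization = pipeline("encounter.wav")

# Each turn carries a start time, end time, and an anonymous speaker
# label; pairing these with a transcription step yields a full
# speaker-attributed transcript of the visit.
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:5.1f}s - {turn.end:5.1f}s  {speaker}")
```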
What are customers doing with Dragon Ambient eXperience, or DAX?
They have fully deployed it. Maturity levels are evolving in particular specialties. Orthopedics is probably the most advanced, but customers are deploying it fully.
What's been interesting in the COVID era is that they also have been deploying it, in many cases, in a telehealth environment in addition to the office environment. Some of them use the mobile app, while some of them use the office device that we offer. They have typically rolled it out to anywhere from 15 to 25 doctors. They see the process and change management associated with it, which places a very limited burden on them. They are up and running right away.
Then we are already into expansion, going into more orthopedics in a particular location, going to the entire department, or going into other specialties as we mature them. The rollout continues to include a quality review process, which helps the ongoing AI algorithms and the neural nets. I can't describe it to a deep degree, but all of that continues to be fed back, making the process more and more accurate. So it's gone quite well.
What preliminary results have clients seen with regard to physician burnout?
We do data analytics around turnaround times and patient satisfaction. Before DAX, roughly 72% of physicians were feeling burnout and fatigue. After DAX, that was reduced to around 17%. We get quotes saying that just the thought of taking DAX away is stressful or would make them want to quit.
We are definitely seeing the reduction on the physician burnout side and the benefits we offer, and patients are also describing more engagement from the physician. They feel more attended to and feel that they are being listened to. We have also seen patient wait times come down by about 10 minutes, which is almost a 50% reduction in wait time. That gives physicians either the opportunity to see more patients, if they want to do that, or to use the time to feel less overwhelmed from an administrative perspective. Early feedback has been quite positive.
I assume the patient’s perception is due to the clinician paying attention and looking at them instead of typing while they are talking.
That's exactly right. It is the feeling of, "I was listened to; you weren't distracted." There was feedback that almost all patients said the physician spent less time focusing on the computer. A very high percentage, 90%, said their visits felt more like a personable conversation. The patient benefits are also very satisfying for the physician.
Technology can now make talking to a machine seem like talking to a human, and people are comfortable interacting with virtual assistants in ways that can border on the scary. Does that capability provide new healthcare use cases?
There are a few different cases, so you are exactly right. One of the exciting things that I learned when I was going through my interview process was the opportunity within Nuance to focus on intelligent engagement, as they referred to it. We use that a lot on our enterprise side, but we have recently launched it under the umbrella of patient engagement solutions within healthcare. We have some early wins in terms of customers that we will be announcing soon.
We are focused on exactly that. Customers have reached out, in particular COVID providers, and said, "We are completely overwhelmed with calls coming in from patients wanting to understand their options. 'Can you just remind me what I am supposed to be doing? If I come into the office, where am I supposed to go?'" These basic things can keep a patient from following up for treatment and getting things done if they can't find easy access to the information they're looking for. We are excited to be taking this technology from the enterprise side and doing more with it in intelligent engagement.
People are also thinking about how to use DAX, the ambient clinical intelligence solution, in an inpatient hospital room, for example. You could have more interaction with multiple providers within a patient room, diarize that, and let the patient interact with it. They are also asking if it could be viewed as an ambient opportunity for check-in, where you don't need as much human-to-human contact and could check in via the ambient device in a particular check-in room.
I don’t know how many of these things will immediately stick, but it’s interesting that people are thinking about where else it can be applied.
Speech recognition is now ubiquitous, accurate, cloud-based, and accepted by consumers. How does that support using it in new ways?
We've talked about speech, particularly on the healthcare side with the physicians. We've also been working on solutions for other parts of the care team, such as nurses. In many cases, nurses document the same kinds of things, but in different ways and in a different structure. We have talked about the patients and the intelligent engagement. It's an element of the environment. What is the setting? DAX has initially been rolled out in an office visit type of setting, where there is a tremendous clinical documentation burden. But obviously the interest would be how to do more of that in the hospital inpatient setting or in other types of clinical settings. People have also asked if it will be more interactive in areas such as mental health.
It will evolve. I don't want to get out over our skis too much here, because there certainly is a lot that goes into just the initial use cases. But certainly, as you said, people are now saying, OK, it's not just hype. It really does work and it is going to evolve. There are opportunities to deploy it into these various use cases, which I'm excited about. Especially in a COVID year, seeing the ongoing investment in its evolution has been motivating for me and certainly for our team.
Do you have to evangelize the idea of developers building software with speech recognition as the primary input mechanism instead of just bolting it to keyboard-centric applications?
There is enough evolution that has occurred on the consumer-oriented side that you have to do less of that. People believe that it's there, that it can happen, and that it can work. There is an element of skepticism about how well it can work in a clinical documentation setting, where you have to be highly accurate. Not fairly accurate, but highly accurate. You're going to use this not just for coding and reimbursement, but for the treatment of the patient. There is this element of prove it out: prove it out in all of the specialties, and prove it out beyond the structured specialties that we have initially focused on.
People ask how well it works in a family medicine practice, where you might be seeing the doctor for such random things. I fell this weekend when I was skateboarding and broke my ankle. How does that relate to all of my past history, and how is it going to interact with all of the various elements the doctor needs to think about when prescribing treatment or considering patient outcomes? There is a belief that it will get there, but there is also a bit of skepticism when remembering how difficult some of these use cases are for particular specialties, and every patient situation is quite different.
What will be the company’s focus over the next few years?
The heavy focus is on reducing physician burnout through the specific element of clinical documentation. But then, as your comments and questions have suggested, what can you do in the course of hearing something from a conversation? What could you actually do?
For example, three to five years out, could you have the computer help the physician with reminders in the course of that conversation with the patient? It could surface things that it hears and that you need to be reminded of, such as, "Remember that this patient's medical history includes XYZ." Or, coming from Wolters Kluwer, the company I just left, "There's a new topic in UpToDate that would be applicable to this particular conversation. Would you like to look at it?"
It will be intriguing to see how broadly you can take the conversational AI and incorporate it with the information that resides in clinical decision support tools or in the patient's actual medical record. Then, how you can continue to get better and better at structuring the clinical documentation so you can do more data analytics and predictive analytics and tie it into things that extend as far as life sciences initiatives. It does start to open up creative ideas about what could happen and what could be out there in the future.
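As a toy illustration of the reminder idea described above, the sketch below scans transcript text for terms on a patient's problem list and surfaces a prompt; everything in it is a hypothetical simplification, since a real system would match coded clinical concepts (e.g., SNOMED CT) with clinical NLP rather than raw strings.

```python
# Toy sketch: surfacing a reminder when the conversation touches an
# item in the patient's history. Hypothetical data and plain string
# matching stand in for real clinical NLP and coded records.
problem_list = {"type 2 diabetes", "atrial fibrillation"}

def reminders(transcript: str) -> list[str]:
    """Return a reminder for each known problem mentioned in the text."""
    text = transcript.lower()
    return [f"Reminder: this patient's history includes {p}"
            for p in problem_list if p in text]

print(reminders("Let's review your atrial fibrillation medication."))
```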
Do you have any final thoughts?
Even during the challenging time that we've had with the pandemic, I'm optimistic about what I have seen our customers accomplish. They have been able to adapt to this change and take on new technologies, such as those associated with telehealth and beyond. We are going to come out of this a stronger industry.