Satish Maripuri, MS is EVP/GM of the healthcare division of Nuance Communications of Burlington, MA.
Tell me about yourself and the company.
I came to the US in 1986 as a grad student and stayed. I’ve been a Boston-based executive for quite some time. I’ve dealt with a lot of global businesses and traveled quite a bit, 7 or 8 million miles around the globe. I’ve been with Nuance for six years. I’ve essentially made my career here leading the healthcare business, which I’m very passionate about. I’m personally driven from a mission standpoint in healthcare.
What has been the business impact of the malware-caused extended system outage?
The business impact was primarily the production days that we missed in the transcription business, for the most part in July and early to mid-August. That was the impact to our direct revenue.
From an ongoing standpoint, we don’t predict a run-rate tail-off going forward. From our investors’ perspective, our fiscal year ended September 30. In the fourth quarter, we had roughly four to six weeks’ worth of production impact downtime, depending on the client.
We have a few clients who transitioned away from us during the downtime, as we had counseled them to seek other solutions. Most of them have come back. A few have stayed with their temporary provider, and for those clients, we expect to lose that part of the business going into next year.
One of the things I’ve seen is that clients have been very gracious in giving us an opportunity to earn their trust back. The one thing we have focused on through the entire recovery process is transparency and regaining their trust.
The downtime led me to wonder how the clinician voice dictation business is divided among front-end speech recognition, back-end speech recognition, and manual audio transcription and how that’s changed over the past few years.
We transcribe about five billion lines of transcription a year for US hospitals. That’s typically what we call the back-end transcription capability. The front end, which is essentially the Dragon-driven dictation, is being used by about half a million physicians in the US.
We see usage of front-end speech continue to grow and the back-end dictation continue to erode due to phenomena that the industry is already aware of. I don’t see that trend reversing. In fact, in the last year or year and a half, we’ve seen the front-end speech capabilities accelerate in adoption. We have, in fact, been a big part of that adoption by driving front-end speech into the cloud. Dragon is now, for the most part, in the cloud. With that, our physicians get ubiquitous access no matter which setting — at home, in the car, in the clinic, and in the inpatient setting — with a single profile that is always available in the cloud.
Even through the downtime we had, it was the transcription part of the business that was down; the front-end speech Dragon capability was up and secure.
That’s where we see the adoption going. In the last three or four months, we’ve added something like 25,000 to 30,000 physicians moving into the cloud-based capabilities. We only see that accelerating.
Speech recognition has reached consumer appliance status, with the Amazon Echo, Apple Siri, and other products moving conversational user interfaces to the mainstream. How do you see that changing?
If you’ll indulge me a little bit in where we see the vision and the next-generation landscape going in clinical documentation, that context will help clarify your question.
We see a world where a clinician is having a conversation with a patient, and the two of them conversing automatically generates clinical documentation that is also annotated and improved along the way. It then goes into its final resting place in the EMR, with the capability of being converted into medication lists, lab reports, and so on. We truly see an ambient clinical documentation world coming to fruition.
Intersect that with today’s burden on the clinician. Roughly 43 percent of an average clinician’s time today is spent in front of a computer, dictating and documenting. Add to that the ongoing, increasing demands placed on clinical documentation, and you see this leading to one thing: the more we can take off their plate, and the easier we can make their life by letting them deal with patients — which is what they took the oath for — the better it is for them.
That’s essentially our mission. That’s what we at Nuance Healthcare are putting all our investments into. Every step we’ve taken in speech NLP and now the capability of conversational AI that we’re bringing to the table is geared toward that next step. We have effectively delivered a highly scalable, secure Dragon Medical speech solution. The next frontier for us in that step is to bring Dragon Medical Virtual Assistant to the table.
Our view is that the next paradigm shift in this Virtual Assistant is the ability to have navigational access and conversational interactions with the EMRs in a multi-modality setting. Then of course the question becomes, how do you intersect that with the actual device and form factor? We believe that the complexity of clinical documentation use cases in today’s physician setting requires capabilities that call for a unique device to solve the problem. Hence the prototype innovation we announced: a smart speaker that goes along with our Virtual Assistant and addresses these needs.
You may already be aware that Nuance as a company has addressed several Virtual Assistant use cases already. Our Virtual Assistant platform is being used at Audi, American Airlines, BMW, and others in both the automotive and consumer sectors. We’re bringing this capability to the clinical documentation problem, as we just announced, by taking all that Virtual Assistant capability and IP and specializing it for healthcare, just like we did for Dragon Medical in the cloud. We think that is much needed.
We’ve had a prototype out there for about two and a half, three years now. Our providers have given us really solid feedback on that. We’re now going to that next level of actually launching that, integrating with the EMRs, and taking some of the early adopters to market.
Now, your question specifically on consumer entrants. They’re not to be ignored. In healthcare in general, there are a couple of different use cases. There are patient-driven use cases and there are physician-driven use cases. Eventually those might blend, but we think that today, there’s a natural extension of the consumer devices into the patient-centric use cases. They may be in an outpatient setting or an inpatient setting, but there’s a big barrier from that point to crossing over to clinical documentation and actually being the Virtual Assistant for a physician. That’s the setting in which we have years of experience, and that’s what we’re driving toward at this point as well.
How will you get clinicians to try something new with the Virtual Assistant and how will you develop and maintain its EHR integration?
You touched on a couple of things that we believe are critical. In terms of the physician adoption question, we can automate tasks that are repetitive and mundane in a very conversational sense. That would be a huge win, because today they go out of their way to document by doing something unnatural — speaking into certain boxes and certain dialogue frames to capture documentation. If you can eliminate those and make them natural command-and-control navigation — open Tim’s record, dictate all of these labs and medications, prescribe this, check that — these are all natural command-and-control interactions. That’s a huge step for us in getting that addressed for our clinicians.
Initial indications from our pioneering clients, who have been at the leading edge of technology over the years, are that this would go a long way if we make it natural, navigational, command-and-control style. That’s what we’re shooting for.
We have to work closely with our EMR partners. Epic, Cerner, Meditech, and others have expressed varying degrees of interest over the years in trying to solve this problem. We now have the enabling technology and we need to work with them for tighter integration. I think you’ll see that every single one of them is interested in aligning and making the physician’s life better. If this leverages a Virtual Assistant that allows the physicians to make their day a bit better, EMR interests are aligned. We understand how to work with the EMR partners very well and that’s a big benefit for us.
As far as your question on physician adoption, they’ve been asking for something like this. Ease of use. Take away the burden of click-and-dictate — I have to go through five unnatural steps before I can even dictate something into text. Once we take that out of the way, adoption becomes natural. That’s not to say we shouldn’t go through the training. This year’s next technology razor blade — am I really ready for that? There is a little bit of that curve with any new technology. But I think once the early adopters start to see this, it will be a natural next step.
The other thing is that most of the physicians have some level of consumer devices at home. The early adopters are starting to say, why can’t I do the same thing at the physician’s desk? They are already asking for this. I believe that barrier will be broken provided we make our navigational access and Virtual Assistant easy to use. That’s what we’re focused on right now.
I was with a client on Monday. They’ve even gone to the extent of looking at the overview video and saying, it would be tremendous if I could take the investment in the thousands of TVs that we’ve already put in our hospitals, and beyond just showing movies to our patients, if a physician could walk in and through a Virtual Assistant that’s connected to the TV, they could see the medical record on screen. It’s a bit of a Star Trek look and feel, but that’s not too far away by leveraging existing investments. They’re very creative about this.
It would seem a natural fit that hospitals could be your partners since they often impose the EHR burden on clinicians who are affiliated with them or employed by them, giving those hospitals a competitive advantage in encouraging adoption of a Virtual Assistant that could improve physician satisfaction and alignment.
You’ve hit the nail on the head. Often you worry about whether there’s a market there if you build the technology. In this case, it’s the large institutions and the hospitals that for the most part have already gotten to the point of asking, “How can we make this better?” The adoption of speech itself is a good indication of that. This takes it to a whole other level. They’re already asking for something like this. I can probably name a dozen institutions that have actually said, “If only this had existed.”
You’re spot on. This would catch on pretty quickly if it was available with a tight integration and with the accuracy they would demand.
Five years ago, we would have been talking about speech recognition accuracy wondering if it would ever be good enough. What will change over the next five years?
For us, it’s been a continuum. You touched on a couple of aspects of that.
Going back to five years ago, we would have been talking about 95 to 98 percent speech accuracy. Would that be a reality? We’ve proven that it absolutely is possible. Now it’s leveraged in the cloud with ubiquitous access at multiple settings with different form factors at a level of accuracy that we wouldn’t have guessed five to seven years ago. That has come to fruition and we’ll continue to innovate on that. We have speaker-independent models, where training is not needed. The level of innovation and applying artificial intelligence into just the speech innovation has been tremendous and I think we’ll continue to stay ahead of that.
The next thing is, how do you make that available through an easy-access Virtual Assistant capability, hardware device or not? We’ve demonstrated that in multiple areas around consumer speech and automotive. Now we are bringing that into healthcare. I see that in no more than three years, let alone five, I’ll walk into a physician’s office as a patient and see a Virtual Assistant. The physician walks in, has a conversation with me, and uses a good amount of command-and-control navigational access. With the Virtual Assistant in the room, the documentation takes place in either a semi-automated or, for the most part, fully automated clinical documentation fashion. I don’t think that’s too far away.
You’ve seen speech accuracy get to a certain level. You’ll see a level of Virtual Assistant use case become mainstream. Then the question is, what do you do to intersect clinical intelligence into that scenario and setting? By that, I mean a level of clinical decision support, a level of knowledge, a level of improving the clinical documentation that’s being captured for a couple of different purposes.
Today’s accuracy and the improvement of documentation that’s being captured is a big part of what we do for our clients. We often refer to that as clinical documentation improvement, or CDI. We’ll see more technologies that improve the accuracy of what’s being captured to accurately represent severity of illness, risk of mortality, etc. because that directly impacts the quality of documentation that eventually drives downstream reimbursement models.
I see a level of intelligence being built on top of what’s being captured. We refer to that as clinical intelligence, introduced in front of the physician and the other parts of the care team, whether radiologists, CDS specialists, etc. It’s speech, but it’s in the cloud, it’s ubiquitous, with a Virtual Assistant capability on top of that, and that’s the starting point. It’s already happening today, where a level of clinical intelligence is being brought to the care team, especially the physician. With artificial intelligence and the level of deep learning capabilities that are available today, we know that’s not out of the realm of reality for us within three years. A good portion of that exists today.
Do you have any final thoughts?
The economics don’t scale to the level at which healthcare spend is growing. The space is ripe for disruption. I’m extremely confident that enabling technologies, whatever those might be and a few of which we’ve just covered, are going to enable that massive disruption. It’s coming and it’s actually happening. We are at a point where, through some of our larger partners, we are enabling some of that disruption. We’re very excited about that. The healthcare industry will see the benefit of a lot of that disruption.
On a personal note, given what we’ve gone through — both in my own personal life as well as the recent incident — I’m really proud of the teams and the way the company maintained a singular focus on doing right by the customer, guided by a set of core values. The operative word there is resilience. Both personally and from a team perspective, we are committed to doing what’s right for the clients and driving that with a level of resilience. We’ve come out stronger as a business through that whole experience, even though it didn’t seem like it during that six-week period. We’re really proud of that.