
HIStalk Interviews Loren Leidheiser, DO, Chairman & Director, Department of Emergency Medicine, Mount Carmel St. Ann’s Hospital, Westerville, OH

July 27, 2009


What made you decide to use speech recognition instead of the usual mouse and keyboard? 

I think speech recognition offers a lot of efficiency, both financially and in time savings. The accuracy is outstanding. It lets you document charts and navigate through an electronic medical record much more effectively than you can with point-and-click, a mouse, and a traditional keyboard.

What did you use before? 

I’m an emergency physician. We would document 100% of our charts with traditional dictation. That was a very, very costly process. It cost us probably close to half a million dollars a year for an emergency department that saw about 70,000 patient visits. 

The accuracy wasn’t all that good. Our traditional dictation would be farmed out to transcriptionists over in India. When it came back, it really needed to be cleaned up.

We went with the Allscripts emergency medicine product, which was a dynamite electronic medical record. The problem we had was that even a best-of-breed product still left a lot to be desired in capturing the unique elements of the history and physical examination. The point-and-click, drop-down menus were clunky at best in terms of telling the story, and even the navigation through the software was somewhat cumbersome.

Speech recognition was a natural solution to a lot of the shortcomings of electronic medical records and of traditional dictation. Your startup costs are reasonable and the training time is very short. For physicians, allied health professionals, and nursing staff, the training time and complexity are so minimal that it’s certainly not a barrier. Once the initial costs are incurred, the savings are such that your investment just pays off over and over and over.

How hard was it to get Dragon to work with the Allscripts product and to get the accuracy up to par?

The Dragon product runs in the background and then it populates data elements right into the electronic medical record. I can tell you, from day one, we’ve had great success using Dragon with Allscripts.

We started back with Dragon 6.0, which was a product that needed a lot of improvement, and that improvement has come. Right now, the 10.0 version is absolutely dynamite, for lack of a better way to put it.

Allscripts recognized how good Dragon was and started incorporating it with their software, making special accommodations so that speech recognition could be used to navigate the application. They eventually began marketing the Allscripts product with Dragon as a bundled offering to hospital emergency departments.

The arrival of the roaming feature, which allows a group of users to save their voice files on a central server and then pull them into whatever application they’re using in a given geographic area, has been huge. What a wonderful addition. That has worked well with the Allscripts product as well.

What would you say the main benefits have been and what were some of the drawbacks?

I think one of the main benefits is that you can tell the patient’s story in your own words when documenting the history and physical examination, the review of systems, and the medical decision-making. All of those functions are absolutely essential to a physician or an allied health professional, by which I mean a nurse practitioner or a physician assistant.

Dragon offers a way to do that that is so much more efficient and accurate than drop-down menus or traditional typing. You just can’t achieve that level of accuracy by other means, so I think the cost savings are huge.

The drawback I see is that there have been criticisms about the accuracy, but as I said, the accuracy just keeps getting better, and meeting the end user’s expectations has been a work in progress that has largely been achieved. I’ve used the product for many years. I put on the headset (I’m a traditional headset user) and for me, it’s just part of the process of being a physician, like putting on a stethoscope, a normal part of my evaluation of a patient.

Some people have found occasional problems with recognition, but traditional dictation came back from transcription with errors, too. Either way, you have to skim the result to make sure it’s OK.

Speed is not a downside. Recognition accuracy actually improves as you talk faster; if you slow down, that’s when you run into problems.

So I wonder if some of the criticism is that people don’t know how to use the product. In our institution, we have about 25 physicians who use the product and probably 15 or 18 mid-level providers. Part of what I do is say, "OK, let’s sit down together and let me show you how I use it." The macro feature, where you can store a letter or a pre-set block of text and then use a single voice command to spit out, let’s say, a normal physical examination, is huge. That has been a wonderful feature as well. It’s all those little shortcuts that you can use to really improve things.

These things are easy to use. Navigating through the software is very easy and intuitive, and Nuance just continues to make it better and more logical.

What do you think the benefits are, if any, to patients?

I think the benefit to patients is that it more accurately reflects the medical encounter. I can be more efficient in my order entry in the medical record and do it much more quickly with Dragon. I can document the historical elements of what’s going on more accurately. In other words, I can tell the story better.

I can reflect what actually happened in the emergency department by using voice recognition to efficiently capture a decision or a discussion of the risks, benefits, and alternatives with the patient. And I can do it at a lower cost with voice recognition than with traditional dictation, where I was spending 14 to 18 cents a line.

Do you feel that, in all the meaningful use discussion, speech recognition is going to be a help or a hindrance?

I’m very biased on that and I’ve said this for years. When I first started using Dragon long ago, I thought traditional dictation was going to go away. As much as I hate to see automation taking human jobs, I just don’t think we can surpass the accuracy and efficiency of voice recognition.

I think it’s only going to become more pervasive, at least in the healthcare industry, as we need short turnaround times on documentation in a hospital setting. Maybe an office setting is different, but the healthcare industry keeps changing and evolving. Already, if you look at what’s going on in the government, we’re trying to cut costs and take money out of the healthcare budget, in Medicaid and Medicare. This is going to be yet another way we can be more efficient in how we operate.

It’s not going to be just healthcare, either. I think you’re already seeing that with the phone lines, where continued use and development of voice recognition just makes sense. I don’t think it’s going to go away, I can tell you that.

So why do you think so few hospital-based doctors use speech recognition?

You know, I wonder the same thing, because I’ve been using it for probably eight years. I think I’ve been patient with it, I believe in it, and I’ve seen it work. I see it in my own practice.

I don’t know if it’s that doctors just don’t have the energy, or whether they see their role as focusing on diagnosing appendicitis and think they don’t have to focus on the things that are more business-related. I don’t know. I’m in Columbus, Ohio, and I’ve actually talked to several other practices that had an initial bad experience with voice recognition, abandoned the idea, and never came back to it.

But I think it’s like most things that we see. With time, the technology improves, the accuracy improves, and all of a sudden you find that the product is now one that really works. And maybe it’s just that I’ve been patient and also persistent. But I also thought that it was going to allow us as a group to reduce our cost of doing business and be more efficient and that has been the case.

Frankly, I think voice recognition has in large part allowed us to pay for the electronic medical record in two and a half years, based on the cost savings we’ve achieved by eliminating traditional dictation. Half a million dollars a year was eliminated as a result of two things: voice recognition and the electronic medical record. That savings just continues to accrue year after year.
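As a quick illustration of the arithmetic behind those figures, here is a minimal back-of-the-envelope sketch in Python. It assumes only the rough numbers quoted in the interview (about $500,000 a year in transcription spend, roughly 70,000 annual visits, 14 to 18 cents per transcribed line, and a two-and-a-half-year payback); the variable names and the calculation are illustrative, not Mount Carmel’s actual accounting.

    # Back-of-the-envelope check of the figures quoted in the interview.
    # All inputs are the interviewee's rough numbers, not audited data.
    annual_transcription_cost = 500_000    # dollars per year, as quoted
    annual_visits = 70_000                 # ED visits per year, as quoted
    cost_per_line_low, cost_per_line_high = 0.14, 0.18  # dollars per transcribed line
    payback_years = 2.5                    # quoted payback period for the EMR

    # Implied transcription volume at the quoted per-line rates
    lines_high = annual_transcription_cost / cost_per_line_low
    lines_low = annual_transcription_cost / cost_per_line_high
    print(f"Implied lines per year: {lines_low:,.0f} to {lines_high:,.0f}")
    print(f"Implied lines per visit: {lines_low / annual_visits:.0f} to {lines_high / annual_visits:.0f}")

    # Implied EMR investment if the eliminated spend covered it in 2.5 years
    print(f"Implied EMR investment: ${annual_transcription_cost * payback_years:,.0f}")
    print(f"Dictation cost eliminated per visit: ${annual_transcription_cost / annual_visits:.2f}")

On those assumptions, the eliminated dictation spend works out to roughly $7 per visit and an EMR investment on the order of $1.25 million, which is consistent with the quoted payback period.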

But in terms of why other people haven’t seen the success? I don’t know. Maybe we have, where I practice, a very wonderful support system in the IT department, and a very open-minded, progressive hospital administration that says, "Hey, we have the same vision that you have, and we see that this is going to work and we appreciate the fact that you’re going down this road to develop this."

So we’ve had a lot of support. And when it came to me saying, "Hey, I’d like to upgrade Dragon to the next level," they said, "OK, here’s the money, we’ll make that happen."

Our sister group wanted $300 handheld microphones with a built-in mouse and everything, whereas I was happy with a plug-in headset that cost $15. I think I get better speech recognition than they get with the $300 handheld mic. But the fact is, we’ve had support from an administration that says, "Yeah, go ahead, we’ll support both. You can use the $300 handheld mic and we’ll also pay for the $15 headset."

Maybe it is that doctors don’t want to wear headsets. You look like an air traffic controller. But you know what, if it gets me better results, then I’m going to wear the headset, because it frees up my hands to use the keyboard and the mouse. You know, it’s not easy.

I think we want instant gratification. We want a product that, boom, just works out of the box. But the fact is that the effort and the time required are not that great, and really, if people give it a little bit of time, they find that this really is everything it’s said to be.




Currently there are 5 comments on this article:

  1. Voice doesn’t work for most busy ED docs. It’s not that it isn’t pretty accurate; it’s just that it’s very time consuming and distracting to have to edit what you are doing. Even if it’s 99% accurate, you have to wait, watch, and read it, which is very distracting in the busy, constantly interrupted, multitasking ED environment, to say nothing of pulling up a chair and leisurely putting on some headphones. It is true the hospital admin is saving $500K, though. Once volume is above 30,000 per year in the ED, a good EDIS is necessary for patient tracking (Allscripts is certainly one of the good ones), but full use of the EDIS for doc work tasks (order entry, documentation, typing DC instructions) kills doc productivity to the tune of about 30%. There are only two ways to efficiently document: regular dictation and a good paper "chief complaint" directed template. This is why in most cases the tracking is adopted but physician tasks continue as before. This is what people just don’t get: the patient and results tracking is great, but the data entry pieces of all the commercially available EDIS products for docs and nurses just serve to take them away from the patients, which is bad for EVERYTHING: patient satisfaction, patient safety, worker satisfaction, and worker productivity.

  2. Hooray for this article. I also started using Dragon at version 6.0, in my two-doctor practice, to cut out transcription fees. We were paying $20–25K per year and were able to cut it to zero. This put more money in our pocket, AND we got transcription out to the referring MD within 24 hours, a great referring doctor satisfier. It is high time other physicians explore this for all the right reasons. BTW, version 10.0 is faster, more accurate, and allows non-EMR users to create custom templates. Dragon is here to stay.

  3. Not sure where the hesitation is on voice dictation. Instead of dictating into a phone for a dictation analyst to decipher and type out, then waiting a while to get the finished report back to approve and release, doctors can now do it in their shift and be done with it. Sorry, but there is no business model where spending $500K to go slower makes sense. Sure, there is a slight learning curve, but just like typing, you get faster with repetition and with the ongoing advancements in the software you are dictating into. With discrete data elements and some of the macro-driven notes out there, the free-text portion where you actually need to dictate keeps getting smaller, since most of the note will be pre-filled for you.

    Sorry, but I have seen tons of EDs incorporate this into their everyday life with doctors kicking and screaming. Go back a year later and try to take it away, and they will kill you. Not having work pile up on them is actually a good thing.

  4. The critical access point of the hospital is the ED. In fact, as much as 80% of all discharges are from the ED, with a small percentage being admitted. ED overcrowding is reaching epidemic proportions and patient wait times are continually increasing in many hospitals. The LWBS (left without being seen) metric, which many CFOs chart on a daily basis, is also increasing.

    Any technology that can improve the patient flow engine while improving clinical outcomes ought to be a top priority in all EDs.

    An EDIS can carry a price tag in the hundreds of thousands of dollars, but there is an alternative. There are companies that sell standalone computerized discharge instruction and prescription writing software for a fraction of the cost of a full-blown EDIS.

    For less than $10K, a hospital can purchase a DCI software system, begin its migration toward an eventual deployment of an EDIS, and launch its enterprise EMR strategy.

    Not all clinical software systems are bank busters.

  5. Maybe these are some of the reasons why few hospital-based (and practice-based) doctors use speech recognition.

    First, there are two types of speech recognition: 1) Orders-based (for the pathologists, the radiologists, the cardiologists — with, for all intents and purposes, more limited vocabularies and dictating [via speech] ordered test result reports); and, 2) Encounters-based (for the internists, the ED physicians, etc., — with, for all intents and purposes, more broad vocabularies and dictating [via speech] H&P, consultation, discharge summary, progress note reports).

    Second, there are many speech recognition products from which to choose. However, for all intents and purposes, today there are only two commercial speech recognition “engines”: the Dragon and Philips engines. (IBM used to have an engine, but that went by the wayside.) Consequently, almost all the vendors providing speech recognition “products” license or partner with one or the other.

    What is interesting is that, today, Nuance IS the Dragon engine, having acquired this engine several years ago when Nuance was known as ScanSoft. When Nuance acquired Dictaphone, Dictaphone was / still is using the Dragon engine. However, when Nuance acquired eScription, eScription was / still is using a self-developed engine.

    In the case of Philips (Royal Philips Electronics Speech Recognition Systems Division), today Philips WAS the Philips engine, having developed this engine (I think) around the 1990s (around when there still was Kurzweil). When Philips acquired roughly 70% ownership in MedQuist (formerly Lanier Voice Products Division), MedQuist, for obvious reasons, used the Philips engine. However, during 2008, when Philips sold its ownership in MedQuist as well as its SR Systems Division, Nuance acquired this division and its engine!

    Now Nuance owns both engines!

    The key for potential buyers of Voice (Dictation) / Text (Transcription) and/or Speech (Recognition) (a.k.a., VTS) systems / products / components is to know which Nuance speech recognition “engine” is being used (and evaluate it carefully) AND which Nuance or other VTS vendor / “product” is being reviewed (and evaluate it carefully, too).
