
Curbside Consult with Dr. Jayne 9/30/19

September 30, 2019 · Dr. Jayne

I recently wrote about Nuance and their efforts to create the exam room of the future, where charting is performed in real time as speech occurs. Several readers reached out with some detailed questions and discussion about the technology, which spurred me to dig a little deeper.

One reader commented on the concept of meaning as it relates to voice recognition technology and on the need for systems to use pattern matching to correctly identify the content of speech. I have a terrible time getting my phone to recognize the difference between “pictures” and “pitchers,” no matter how clearly I try to articulate and regardless of context. Getting a system to recognize words when you’re actively trying is one thing; having it accurately identify speech in an exam room conversation that ranges all over the place is another.
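
To make the pattern-matching idea concrete, here is a minimal sketch (in Python, and emphatically not how Nuance or any commercial recognizer actually works) of how context can separate homophones like “pictures” and “pitchers”: score each candidate against the surrounding words using co-occurrence counts. The counts below are invented for illustration; production systems use full statistical language models trained on enormous corpora.

```python
# A toy homophone disambiguator: pick the candidate whose known
# contexts best match the words around it. The co-occurrence counts
# below are invented for illustration.
from collections import Counter

CONTEXT_COUNTS = {
    "pictures": Counter({"phone": 9, "camera": 12, "album": 7, "send": 5}),
    "pitchers": Counter({"baseball": 11, "inning": 8, "mound": 6, "water": 4}),
}

def disambiguate(candidates, context_words):
    """Return the candidate best supported by the surrounding words."""
    def score(word):
        counts = CONTEXT_COUNTS.get(word, Counter())
        return sum(counts[w] for w in context_words)
    return max(candidates, key=score)

# "Send me the ___ from your phone" -> context favors "pictures".
print(disambiguate(["pictures", "pitchers"], ["send", "phone"]))
```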

An article in the Journal of the American Medical Informatics Association looked at the difficulty of detecting conversation topics during primary care office visits. The researchers used visit transcripts to evaluate whether machine learning methods could automate the annotation of visits. The authors recognized the complexity of the average primary care office visit, noting:

Patients present multiple issues during an office visit, requiring clinicians to divide time and effort during a visit to address competing demands. For example, a patient could be concerned about blood pressure, knee pain, and blurry vision in a single appointment. Moreover, visit content does not solely focus on biomedical issues, but also on psychosocial matters, personal habits, mental health, the patient-physician relationship, and small talk.

When looking at the content of visits to determine what material was covered, research raters can label each so-called “talk-turn” using codes intended to capture the visit content. This process can take several hours per visit, making it difficult to scale such an analysis. Being able to automate the extraction of these topics could not only help reduce documentation burden, but could also help identify providers who may not be following up on all the clinically relevant parts of the encounter. The authors wanted to build on previous studies that looked at human-labeled interactions and showed that machine learning systems can create annotations of those conversations.
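
As a rough illustration of what automating that annotation might look like (this is a sketch of the general idea, not the authors’ actual models), each talk-turn can be treated as a short document and fed to an ordinary text classifier. The topic codes and example turns below are invented; the study’s label set was much richer.

```python
# A toy talk-turn topic classifier: TF-IDF features plus logistic
# regression. The labeled examples are invented stand-ins for the
# human-annotated talk-turns a real study would use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

turns = [
    "my blood pressure has been running high at home",
    "the knee still aches when I climb stairs",
    "everything looks blurry when I try to read",
    "how was your trip to see the grandkids",
]
labels = ["blood_pressure", "musculoskeletal", "vision", "small_talk"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(turns, labels)

# Likely ['blood_pressure'], since "pressure" appears only in that class.
print(model.predict(["my pressure was 150 over 90 yesterday"]))
```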

Using 279 primary care office visits, they found that different models performed better at the visit level vs. the topic level, concluding that additional study and larger datasets are needed to achieve performance that would succeed in the real-world exam room. Moving from natural language processing to the generation of discrete data isn’t as easy as people might think. I’ve often thought about what it would be like if you could just record an office visit (both audio and video) as the documentation. The pain would be in reviewing it later, unless there was a way to transcribe the information or make it searchable. Various vendors have tried to solve this problem, including by leveraging Google Glass.
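
Before getting to Glass, here is a minimal sketch of the searchability idea (segment text and timestamps invented): index timestamped transcript segments by keyword so a reviewer could jump straight to the relevant moment instead of replaying the whole visit.

```python
# A toy inverted index over timestamped transcript segments: map each
# word to the start times (in seconds) where it was spoken.
from collections import defaultdict

segments = [
    (15.2, "patient reports knee pain for two weeks"),
    (94.8, "we will order a chest x-ray and a flu swab"),
    (210.5, "follow up in one week if the pain persists"),
]

index = defaultdict(list)
for start, text in segments:
    for word in text.lower().split():
        index[word].append(start)

def find(term):
    """Return the timestamps where the term was spoken."""
    return index.get(term.lower(), [])

print(find("x-ray"))  # -> [94.8]
```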

Remember Google Glass, the tech industry’s darling way back in 2013? It’s been hiding in plain sight, as an “Enterprise Edition” that’s being used in a variety of manufacturing and heavy industrial applications as well as in healthcare. A quick scan of the website shows several big-name healthcare organizations on the client roster.

I recently had a chance to catch up with Ian Shakil, founding chairman of Augmedix, whose client roster shares some of the big names listed by Glass. He confirmed that Glass is far from gone, with around 30% of Augmedix customers using it as part of tech-enabled scribing services. The remaining clients use smartphones, which might be worn or placed on a stand in the exam room. It sounds like patients have gotten over the privacy concerns many of us initially had with Glass – he cites a 98% patient acceptance rate, achieved in part through education by the front desk or clinical staff.

It was interesting to talk to someone knowledgeable about a segment of the healthcare industry that I admit I know little about. Other than some excitement around Glass half a decade ago and some acquisitions of scribe and transcription services by vendors in the voice recognition and EHR spaces, I hadn’t seen a lot of coverage. We spent some time talking about the ways various solutions tackle the problem, from what can be described as “dictation in disguise,” to human scribes, to remote scribes, to attempts to use voice recognition and virtual assistant technology to create a true AI-powered scribe. Some vendors like Augmedix even offer services across the continuum, depending on where their clients are, from a human virtual scribe all the way to tech-augmented scribes who use a variety of tools to enhance their ability to document visits.

I was surprised to learn that there is variability in what is done with the recordings created during patient visits. Depending on the vendor and the client, some want the recordings and video destroyed and others want them preserved. They may be used for training, for quality assurance activities, or perhaps in the future as a multimedia note or for access by the patient as a reminder of the visit. Given the plaintiff’s attorney whose branch is close to mine on the family tree, I wondered about the use of the video feeds in potential litigation. I’ve pored through enough bulky, EHR-generated medical records to know that in this case it would certainly be easier to watch the movie than to read the book.

I use a human scribe in the exam room about half of the time. Our office fully agrees with industry data showing that such support leads to better notes, timelier patient care, and reduced clinician burnout. The biggest struggle I have, though, is going back and forth between having a scribe with me and not having one. When I have that support, everything I say is taken down or acted upon before we leave the exam room, and I can close that visit in my mind and move to the next one. The scribes watch for lab results or radiology tests to return and make sure I don’t miss going back to take care of a patient who is still pending disposition.

When I work a shift without a scribe, I’m pretty good at the follow-up piece, but I sometimes forget to put in my orders or flag patients for discharge. I’m just so used to saying, “We’re going to do a flu swab and get a chest x-ray” and having those orders placed that my brain runs on autopilot right past the need to enter them myself. It’s enough of an issue that I usually tell the rest of my clinical team, “I had a scribe yesterday and don’t today, so if you see me missing orders or discharges, just grab me.” They usually laugh, because apparently I’m not the only physician who does it.

Shakil shared a great piece with me that ran in The Lancet a couple of weeks ago, one where the author discusses “Empathy in the age of the electronic medical record.” It’s worth a read for folks who might wonder what physicians who struggle with the EHR are thinking as they try to see patients. I’m interested to hear what readers think on the topic. Where are we, and where are we headed? In the meantime, I’m mentally prepping because tomorrow’s schedule does not include a scribe.

What do you think about virtual scribes, natural language processing, and the exam room of the future? Leave a comment or email me.


Email Dr. Jayne.




Currently there are 5 comments on this article:

  1. One thing that I think trips people up with voice recognition is that medical records are really socio-legal records rather than strictly medical records. I think a good illustration is the fact that all of these 20-year-olds have glaucoma listed in their medical records. These people do not have glaucoma, nor do their physicians really think that they have glaucoma, yet it is in their medical record. I don’t think that is necessarily a bad thing; I’m just pointing out that the information contained is not really medical information. It is a legal justification for a particular outcome through a process that our political system has decided to allow. I don’t think that we want voice recognition to record the winking between the patient and the doctor, or else we will cut off a lot of the slack that makes the system run.

    • Well, during Prohibition, lots of healthy Americans discovered a very serious, assuredly medical need for “medicinal alcohol”!

  2. How important is the visual component of an encounter? Simple things like a patient touching their arm when asked “where does it hurt?” or general body language (a patient saying, “I’m fine” when they’re clearly not) aren’t considered in systems that just analyze the audio to attempt to create documentation.

    • I’ll provide the Augmedix point of view, which is probably relevant because we are by far the biggest in the space and we offer our tech-enabled service on both smartphone and Glass. There are pros and cons to the two options.

      It seems that a healthy chunk of doctors perceives video as crucial. It helps the tech-enabled scribe team know who’s talking, left knee vs. right knee, whether that was an affirmative head nod, and whether the doctor is in the room yet.

      A healthy chunk prefers it off. Their workflows permit it, and they tend to verbalize exceptional situations where context is needed.

      The breakdown is close to 50/50.

      Of course, if the video is on, we usually turn it off during moments of sensitivity, nudity, or anxiety. During video-off modes, the device’s light state changes, and the patient can see that change. This is crucial.

  3. Hi,

    I have been working as a virtual scribe with one of Augmedix’s vendors for the past three years and would like to shed some light on this topic.

    As things stand, creating a “smart examination room” would not be feasible for either providers or patients. The automated software implementing natural language processing is not yet developed or smart enough to capture the actual visit. It can only act like a voice recorder, capturing the visit and leaving the hassle of finding meaning in it for later.

    From the patient’s perspective, patients often make colloquial, irrelevant, or unclear statements. It is then the scribe’s responsibility to find meaning in those statements and prepare the chart accordingly. It would be hard for an NLP-automated machine to comprehend them.

    From the provider’s perspective, physicians would need to spend an ample amount of time reviewing and correcting the charts. Although AI-powered scribes would be smarter than traditional transcription services, at this point they would not be as efficient as virtual scribes.

    Yes, it’s true that artificial intelligence is the future in every industry, but we should be cautious in implementing it in the healthcare industry, especially where patient care is concerned.






