
HIStalk Interviews David Lareau, CEO, Medicomp Systems

July 15, 2024

David Lareau is CEO of Medicomp Systems.


Tell me about yourself and the company.

I’ve been the CEO of Medicomp for about 15 years. I originally came from a large network background and then started and owned a billing company. I met Peter Goltra, who founded Medicomp in 1978, in the 1990s.

We’re actually a year older than Epic. Epic does many things and Medicomp does one thing, and that is to build a clinical data engine. The Quippe Clinical Knowledge Graph works and thinks the way that clinicians do, so that they can use it at the point of care. It has almost half a million clinical data concepts across all domains. We have worked with clinicians for more than 40 years to learn what they want to see for a specific problem across all the clinical domains — symptoms, history, exam, tests, diagnoses, therapies, comorbidities, sequelae — so that a clinician can consider any condition or diagnosis and immediately see all of the information that their peers and colleagues over the years have told us they would want to see.

Those half a million clinical data concepts are both computable — because a common data model sits underneath them — and human readable at the point of care. The graph has over 10 million mappings to the standard terminologies and accommodates all the downstream processes: CQMs, adequate documentation for HCCs and E&Ms, a diagnostic view of the record, and care management protocols for nurses, physicians, and affiliated health professionals. We’re starting to get into home care.
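
As a rough illustration of what “computable and human readable” can mean in practice, here is a minimal sketch of a dual-purpose concept record. The class structure, field names, and internal ID are hypothetical, not Medicomp’s actual schema; the SNOMED CT and ICD-10-CM codes are real.

```python
from dataclasses import dataclass, field

@dataclass
class ClinicalConcept:
    """Hypothetical concept record: computable ID plus human-readable label."""
    concept_id: str                  # stable internal identifier (made up here)
    display: str                     # human-readable label for the point of care
    domain: str                      # e.g. "symptom", "exam", "test", "diagnosis"
    mappings: dict[str, str] = field(default_factory=dict)  # terminology -> code

dysphagia = ClinicalConcept(
    concept_id="MC-000123",          # illustrative internal ID
    display="Difficulty swallowing",
    domain="symptom",
    mappings={
        "SNOMED-CT": "40739000",     # SNOMED CT concept for dysphagia
        "ICD-10-CM": "R13.10",       # dysphagia, unspecified
    },
)

# Software keys off concept_id and mappings; clinicians see the display label.
print(dysphagia.display, "->", dysphagia.mappings["SNOMED-CT"])
```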

You’ll start to see over the next six to nine months some of the significant vendors coming out with a hybrid model that uses large language models and ambient listening to capture information, but then uses our Clinical Knowledge Graph to process it, present it to the clinician, and allow them to navigate it and trust it. It’s evidence-based. It’s linked to sources. 

Once you’ve removed the burden of documentation, how do you accommodate the way a clinician thinks and works in supporting all of the requirements that are now being piled on in terms of HTI-1 and now HTI-2, CQMs, et cetera? We are pretty pumped about the next two or three years and the way the industry will evolve from transaction-based work to data-driven clinical care.

Strong point-of-care technology use cases involve surfacing relevant EHR information and connecting the clinician to medical evidence. How will physicians benefit?

It will help with acquiring information with the patient’s involvement, capturing what the patient says and getting that into the record in some form. Ambient listening tools are doing that as text. Large language models are good at synonymy and summarizing information. But the clinicians at the point of care are some of the most highly educated and trained knowledge workers on the planet. Most of the time, they know what they want to see. They just don’t like looking for it in the record.

The hybrid model that I was talking about a few minutes ago removes some of the burden of entry and documentation, perhaps using AI ambient listening, but then gives the physician transparent, citable, and authoritative comfort with what the EHR hands back to them from everything in that patient’s record — here are the things that I found related to that problem. That’s the Clinical Knowledge Graph that we’ve been building for years.

Even though the burden of documentation is lightening, clinicians still need to find what they need, work on it, and support all those downstream processes in a way that is quick, trustworthy, reliable, and predictable. That’s where we fit into this whole puzzle that the industry is trying to put together.

Technologists sometimes miss the significant point that physicians don’t need or want automated help for 95% of their patients, who have a relatively small variety of conditions that they treat all day long. Can personalization or customization give physicians what they need rather than what someone else thinks would be helpful?

You have hit on something that we used to say all the time. Tell me what I need to know, when I need to know it, don’t slow me down, and don’t get in my way. If I need help, I’ll ask for it, and it had better come back quickly.

I’ll give you an example. We’re working with a company that is using ambient listening, AI, and large language models to capture documentation. Certified EHRs are required to have a problem list, which is usually in SNOMED or perhaps ICD-10 in older systems. If the clinician is treating a patient who has a known problem, we can use our Clinical Knowledge Graph to tell them what’s in the record that pertains to that problem — the symptoms and the history. Give them their standard presentation so they know where to look; they don’t want each encounter to be a new and exciting experience. It’s formatted the way they want, and they can find the information they need that is related to this patient’s problem.

If there’s a new problem, like the patient has been having difficulty swallowing, go to the Clinical Knowledge Graph, type in “difficulty swallowing,” get back a list of related concepts, and filter the record against them so the clinician can see what matters, as in the sketch below. Do it in a format that the clinician has personalized to the way that they practice. A cardiologist looks at things differently than an audiologist, for example, or a nurse. There’s customization of presentation, but there’s also diagnostically connecting the information, filtering it, and putting it back to them the way they need it when they want it. I’ll call on that knowledge resource when I need to, which as you just said is maybe 3% to 5% of the time.
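
A minimal sketch of that lookup-and-filter pattern, assuming a toy in-memory graph; the related-concept lists and chart entries are invented for illustration.

```python
# Toy knowledge graph: problem -> related concepts, grouped by clinical domain.
RELATED_CONCEPTS = {
    "difficulty swallowing": {
        "symptom": {"weight loss", "regurgitation"},
        "history": {"GERD", "stroke"},
        "test": {"barium swallow", "upper endoscopy"},
    },
}

def filter_record(problem: str, record: list[dict]) -> list[dict]:
    """Keep only the chart entries the graph links to this problem."""
    related = RELATED_CONCEPTS.get(problem, {})
    wanted = {concept for domain in related.values() for concept in domain}
    return [entry for entry in record if entry["concept"] in wanted]

chart = [
    {"concept": "GERD", "domain": "history", "noted": "2019"},
    {"concept": "ankle sprain", "domain": "history", "noted": "2022"},
    {"concept": "weight loss", "domain": "symptom", "noted": "2024-05"},
]

# Surfaces GERD and weight loss; the unrelated ankle sprain stays out of the way.
for entry in filter_record("difficulty swallowing", chart):
    print(entry)
```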

You have said that you stopped using the term AI even though Quippe gives the appearance of applying it under the hood.

I was doing a major presentation for a large medical group in Southern California years ago. I showed a differential diagnostic presentation of a complex patient. One of the 200 docs in the auditorium got up, and before he walked out, said, “Why did I go to medical school if you’re going to tell me what you think this patient has? I already knew it before you even started that. We want real intelligence, not artificial intelligence.”

After hearing that a number of times, we said that we aren’t going to talk about that anymore. We’re just going to talk about presenting information that works and thinks the way clinicians do, because we have been working with clinicians for over 40 years to build this thing. We took artificial intelligence out of anything that we said, because people found it hard to believe, and physicians particularly found it offensive.

What are the challenges in using technology to reduce physician burnout?

I think it’s about having reasonable expectations. If you set the expectation that large language models and artificial intelligence will remove any need to interact with the EHR because EHRs are just a chore and not a tool, you are bound to be disappointed. Approach it as, “What can this technology be used for that lightens my burden and helps to make the EHR a tool, not a task?” One aspect is summarizing the information that is already in the record. The technology is starting to do a decent job of that, as opposed to actually entering data in the record.

If you use the current versions of this technology to enter data in the record, you have to review it, because there’s still a pretty high hallucination rate. It wouldn’t kill you if it were used in an Amazon warehouse and they shipped you the wrong product, but putting a wrong piece of critical information in a medical record can have serious consequences.

Summarizing the record for review, great. There are specific things that ease the burden not only on providers, but on the people who are building solutions for providers. We are using it to reduce our own work. We have 10 million mappings of our half million concepts to the standard vocabularies. That’s a lot of work, a lot of what terminologists would call in-the-trenches grunt work. AI can help reduce the amount of time it takes to find possible matches and then have somebody review them, as sketched below.
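
A minimal sketch of that candidate-matching workflow: rank possible vocabulary matches for a terminologist to review instead of searching from scratch. Plain string similarity stands in here for whatever model a real pipeline would use; the three SNOMED CT codes are real, but the tiny vocabulary and threshold are illustrative.

```python
from difflib import SequenceMatcher

SNOMED_SAMPLE = {                 # tiny illustrative slice of a target vocabulary
    "40739000": "Dysphagia",
    "162397003": "Pain in throat",
    "267036007": "Dyspnea",
}

def candidate_matches(term: str, vocabulary: dict[str, str], threshold: float = 0.5):
    """Return (code, label, score) tuples ranked for human review."""
    scored = [
        (code, label, SequenceMatcher(None, term.lower(), label.lower()).ratio())
        for code, label in vocabulary.items()
    ]
    return sorted((m for m in scored if m[2] >= threshold),
                  key=lambda m: m[2], reverse=True)

# The reviewer sees ranked suggestions instead of hunting through the vocabulary.
for code, label, score in candidate_matches("pain in the throat", SNOMED_SAMPLE):
    print(f"{code}  {label}  ({score:.2f})")
```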

We approach the point of care the same way. Let’s use our engine to filter the stuff coming out of these models, sort of a hybrid model, and make the best use of our evidence-based Clinical Knowledge Graph, along with the output from the large language model. In that hybrid approach, AI is not going to do everything. It will do some things, such as specific point solutions related to a task or process, but it’s not going to completely take care of the patient. Our role in that is giving the clinician a tool that allows them to find what they want, review it, take action on it, and then use AI for the things that it works well for – summarizing a record and looking in it for occurrences of something.
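
A minimal sketch of that hybrid triage, assuming a toy curated concept set: the model’s output is treated as untrusted candidates, findings that ground to curated concepts go to the clinician, and the rest are held for review. The concept set and the fake model output, including one plausible-sounding hallucination, are invented.

```python
# Toy stand-in for the curated, evidence-based concept set.
KNOWN_CONCEPTS = {"difficulty swallowing", "weight loss", "gerd"}

def triage_llm_findings(findings: list[str]) -> tuple[list[str], list[str]]:
    """Split model-extracted findings into grounded vs. needs-human-review."""
    grounded = [f for f in findings if f.lower() in KNOWN_CONCEPTS]
    review = [f for f in findings if f.lower() not in KNOWN_CONCEPTS]
    return grounded, review

# Pretend output from an ambient listening + LLM step; the last entry is a
# made-up term that no curated concept backs up.
llm_output = ["Difficulty swallowing", "Weight loss", "Esophageal fibrillation"]

grounded, review = triage_llm_findings(llm_output)
print("show to clinician:", grounded)
print("hold for review:  ", review)
```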

It’s still early in this. You’re going to see a lot of these companies hit the wall when their initial funding runs out. Then you will see some big players succeed and maybe dominate the industry. There will be a couple of new ones, too.

How about technology that addresses burnout in nurses?

We have always thought that in terms of care delivery, a hospital is a high-tech facility that is run by administrators, but operated by nurses. For the actual, on-the-ground patient care, the nurses are the ones who first notice what’s going on with the patient. The nurses are the ones who call in the physician expert when they need to. They are the ones on the front lines.

Holy Name Medical Center in Teaneck, New Jersey — which was on the cover of Newsweek as an epicenter of the initial COVID outbreaks back in April and May of 2020 — put Quippe in during COVID in the emergency department and the critical care areas of the hospital, the intensive care units and the cardiac intensive care units. Within two weeks, they noticed that the nurses had freed up more than two hours a day from documentation, because we had the data points that are needed to support their processes. We had tools that they could use to design what they were doing in accordance with the processes that were already in place. It was just astonishing to us that they were able to do that.

We learned a lot from that, too. We learned the difference between the diagnostic care of a patient and coordinated, team-based care of a patient, because that’s really where nurses operate. They operate as the eyes and ears of the enterprise on the patient, helping to coordinate the care of the whole team. That made us start improving the design processes that people can use with our Clinical Knowledge Graph to accommodate coordination of care among members of a care team, which is now a big topic for HTI-2, coming down the pike in a couple of years. Every time we go into a new environment, we learn what we didn’t know and adapt accordingly.

What are the next steps in interoperability, especially in data quality and interpretation?

Years and years ago, a senator named Ted Stevens from Alaska said that the internet is nothing but “a series of tubes,” and everybody made fun of him. When I read that old quote from him, I thought about interoperability. We now have a governance structure through the QHINs and through TEFCA. We have built the pipes, and the pipes are available. You will be required to send stuff down the pipe. You will be required to receive stuff from the pipe. The challenge will be how you keep from getting overwhelmed by what’s coming down through the pipe. How do you filter it? How do you present it?

You said before that clinicians will tell you that 95% of the time, they know what they need to treat a patient. That same statistic could be applied to the tsunami of information that is coming down as text, codes, pictures, and all kinds of stuff. Filter it so that I can find what I need. We’ve been working with FHIR and other things for about eight or nine years — and now NLP and large language models — to quickly find the information that is needed in that stream for the particular patient who is being treated. We are excited that the pipes are in place and that the information will start flowing. That gives us a unique opportunity to show what you can do when you have a Clinical Knowledge Graph with 10 million mappings to the standard vocabularies and hundreds of millions of diagnostically connected data points inside an engine.
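
A minimal sketch of that kind of inbound filtering, assuming the data arrives as a FHIR Bundle (handled here as a plain dict; a real system would use a FHIR library and a much richer relevance model). The SNOMED codes are real, but the bundle contents and the relevance set are toy data.

```python
# Toy relevance set: SNOMED CT codes the knowledge graph might link to the
# problem being treated (dysphagia plus an example associated finding).
DYSPHAGIA_RELATED = {"40739000", "21522001"}

def relevant_resources(bundle: dict, relevant_codes: set[str]) -> list[dict]:
    """Keep incoming resources whose codings match the relevance set."""
    hits = []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        codings = resource.get("code", {}).get("coding", [])
        if any(c.get("code") in relevant_codes for c in codings):
            hits.append(resource)
    return hits

incoming = {
    "resourceType": "Bundle",
    "entry": [
        {"resource": {"resourceType": "Condition",
                      "code": {"coding": [{"system": "http://snomed.info/sct",
                                           "code": "40739000"}]}}},   # dysphagia
        {"resource": {"resourceType": "Condition",
                      "code": {"coding": [{"system": "http://snomed.info/sct",
                                           "code": "44054006"}]}}},   # type 2 diabetes
    ],
}

# Only the dysphagia-related Condition survives the filter.
for resource in relevant_resources(incoming, DYSPHAGIA_RELATED):
    print(resource["resourceType"], resource["code"]["coding"][0]["code"])
```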

It will be interesting to see how the industry responds to this deluge that they are going to get. It’s an exciting time. I think the HIMSS Interoperability Showcase is what, 15 or 20 years old? Finally, it’s real. But it will take some time to iron out the wrinkles to get the exact information that a clinician needs to the point of care so that they can benefit from the content of those pipes.

How will AI affect your products and competitive position?

Our approach is that we are a good solution for the hybrid model that I talked about earlier, using AI to acquire the information, bring it over, and then allow it to be formatted, filtered, and presented. I get a lot of inquiries about using our content as training data, partnering with us, acquiring us, or licensing our intellectual property. We are too busy right now to respond to that, but we see our role — and HTI-1 kind of covers this — as the evidence-based, trusted resource of source information, reviewed by clinicians, for handling output from large language models and AI.

It’s not clear to me how quickly people are going to believe that AI can do what we do. The people who have looked at our stuff and tried it have said, “You guys have something special here.” We have a solid, consistent clinical data model underneath it. It’s not just words linked together. We like the opportunity that we have over the next five years.

The big picture is that the industry has not yet come to grips with the tools that are needed for an enterprise, or even an individual clinician, to effectively manage chronic conditions like the Hierarchical Condition Categories that Medicare is using for compensation. There’s lots of money and attention flowing into that. If you look at ICD-10-CM diagnoses, about 9,000 to 10,000 are relevant to these Hierarchical Condition Categories for value-based payment. A huge opportunity for us over the next two to three years is that we can review and filter a record to make sure that the documentation is appropriate and complete and that chronic conditions are being identified, effectively managed, and adequately documented to pass a Medicare audit.
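
A minimal sketch of that kind of documentation check: flag which documented ICD-10-CM codes map to an HCC category and which do not. The two-code mapping is illustrative (HCC numbering varies by CMS model version; these follow V24), whereas the full CMS model covers the roughly 9,000 to 10,000 codes mentioned above.

```python
# Toy slice of an ICD-10-CM -> HCC mapping (CMS-HCC V24 numbering; illustrative).
HCC_RELEVANT = {
    "E11.9": "HCC 19 (diabetes without complication)",
    "I50.9": "HCC 85 (congestive heart failure)",
}

def audit_diagnoses(documented_codes: list[str]) -> None:
    """Report which documented codes carry HCC weight and need solid documentation."""
    for code in documented_codes:
        if code in HCC_RELEVANT:
            print(f"{code}: maps to {HCC_RELEVANT[code]}; confirm documentation supports it")
        else:
            print(f"{code}: not HCC-relevant in this toy mapping")

# J06.9 (acute upper respiratory infection) carries no HCC weight.
audit_diagnoses(["E11.9", "J06.9", "I50.9"])
```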

Requirements are piling up on the industry: HTI-1, HTI-2, TEFCA, CQMs, and quality payment programs. They are all tied to very specific clinical data points, and that’s really our strength. We’re pretty excited about the next three to five years.


