
HIStalk Interviews Gidi Stein, MD, PhD, CEO, MedAware

February 15, 2022

Gidi Stein, MD, PhD is co-founder and CEO of MedAware of Ra’anana, Israel.


Tell me about yourself and the company.

I started my career as a software engineer many years ago. I was a VP of research and development and the CTO of several startups in the early 1990s. At some point, I vowed never to do startups again, changed my career course, and went to medical school. I was the oldest medical student at Tel Aviv University. I graduated, specializing in internal medicine, did a PhD in computational biology, and held executive roles in one of Israel’s leading hospitals. MedAware is a software company that uses artificial intelligence and smart algorithms to identify medication-related risks and save lives.

What points in the process of ordering and administering medications are most likely to introduce patient harm that existing systems won’t detect?

The flow that begins with the prescriber ordering the medication, the pharmacy approving it, and then administering it or having the patient visit an outpatient pharmacy — all of these situations are basically covered, in some way, by existing systems. But after the patient is already on the meds, after they are home or are already admitted, things can go wrong. Laboratory abnormalities are found. Vital signs change. Patients can deteriorate into shock or develop acute renal failure or anemia. These changes affect the risk that their medications pose, and some patients have adverse drug events related to the medications they are receiving. Current solutions are usually not good at tracking this, monitoring these patients, and picking up those risks in the post-prescribing, post-dispensing period. Most of the problems we find are there.

What are the alerting challenges that are unique to smart infusion pumps?

Smart infusion pumps are IV pumps that “know” the medications that are being provided to the patient by that pump. The nurses program these pumps in terms of the medication to be administered, the patient’s weight, the rate, the dose, how long the infusion should take, etc. In each of these steps, there can be a typo, a click of the wrong button, or mis-programming. The current systems are similar to the electronic medical record in that they are not very good at identifying these risks. The alerts that they generate are mostly false alarms, which drive alert fatigue. Just as with electronic medical records, we know how to identify pump programming errors, and we do this through our partnership with Baxter.
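As an editorial illustration of the kind of mis-programming check described, a minimal sketch that flags a programmed rate far outside the historical distribution for the same drug and care setting. This is not MedAware’s or Baxter’s actual algorithm; the function name, z-score cutoff, and sample rates are invented:

```python
import statistics

def rate_outlier(historical_rates, programmed_rate, z_cutoff=4.0):
    """Flag a programmed infusion rate that falls far outside the
    historical distribution of rates for the same drug and setting.

    A likely mis-programming (e.g. 400 mL/hr keyed instead of 40)
    lands many standard deviations away; normal variation does not.
    """
    mean = statistics.fmean(historical_rates)
    sd = statistics.pstdev(historical_rates)
    if sd == 0:
        # No observed variation: anything different from the norm is suspect
        return programmed_rate != mean
    return abs(programmed_rate - mean) / sd > z_cutoff
```

Unlike a fixed hard-limit table, the plausible range here adapts to what nurses in that setting actually program, which is one way to keep false alarms (and the alert fatigue they drive) low.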

How do you identify an exception to normal practice to generate an alert?

We assume that nurses, physicians, and pharmacists know their jobs. They don’t need MedAware or any of us to teach them how to practice. But you can be the best poet in the world and still have typos that a spellchecker will find. You can be the best doctor in the world and still need that intelligent spellchecker to identify these typos in prescriptions or the programming of pumps. This is where the outlier piece is more relevant.

We published research two years ago with Sheba Medical Center, a large hospital here in Israel, in which we analyzed the errors that physicians make when they’re tired, overworked, or don’t have specific experience with the medications they are prescribing. Two times, three times, eight times as many errors are made when physicians are tired, overworked, working in an overcrowded ER, and especially when they are prescribing medications that they are not used to prescribing. We’ve seen that more and more with COVID in the last two years.

How does the technology coexist with an EHR to reduce alert burden?

What is unique about our system is that the alert burden is very, very low. Current systems can generate alerts in about 20% of medications or medical orders. We provide less than 2%, almost 1%, of the alert burden. The accuracy of the alerts we provide is very high, more than 85% as compared to less than 5% in the current solutions. In most cases, physicians — and we monitor this continuously — change their order following our intervention. Instead of applying rules like current systems, we do something more intelligent in applying more sophisticated algorithms to understand the prescription patterns in each hospital, in each care setting, and identify the outlier behavior as a potential error. These are usually consistent with the physician saying, “Oh, I didn’t mean to do that. I’m going to change that.” We see that every day.
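The outlier idea described above, learning normal prescription patterns and flagging deviations, can be sketched in a few lines. This is an invented illustration, not MedAware’s model; the context granularity and frequency threshold are assumptions:

```python
from collections import Counter

def build_model(historical_orders):
    """Count how often each drug is ordered within each care context.

    historical_orders: iterable of (context, drug) pairs, e.g.
    ("pediatrics", "amoxicillin"). Context granularity (department,
    diagnosis, etc.) is a modeling choice.
    """
    counts = Counter(historical_orders)
    totals = Counter(ctx for ctx, _ in historical_orders)
    return counts, totals

def is_outlier(counts, totals, context, drug, threshold=0.005):
    """Flag an order when the drug is vanishingly rare in that context."""
    total = totals[context]
    if total == 0:
        return False  # no history for this context; cannot judge
    freq = counts[(context, drug)] / total
    return freq < threshold
```

The appeal of this framing is exactly what the answer describes: no hand-written rules, just a learned picture of “normal” per setting, so alerts fire only on orders the prescribers themselves would rarely intend.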

Are the EHR alerts suppressed by replacing them with yours?

It depends on the client. It depends on the workflow. In some cases, we completely replace the current systems and we are able to generate very few alerts and change the whole experience of providers. In other cases, we focus more on the pharmacy, where all the medical orders are funneled to, so we’re able to surface the catastrophic problems for the pharmacy to focus on. Our engine can be applied in different settings and in different workflows. It really depends on the client and the setting, even inside infusion pumps.

Does the alerting intelligence use the clinician’s individual patterns, or does it look only at their facility’s collective experience?

It’s more detailed than that. It’s at the level not only of the institution, but of the specific department and boiling down to specific prescribing patterns. It really depends on the amount of data that we have in each institution and our ability to model the “normal” behavior based on this data. The more data we have, the more accurate we can be. We can drill down to more refined accuracy and resolution.
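The data-dependent resolution described here resembles a hierarchical back-off: prefer the most specific model (department-level) when it has enough data, otherwise fall back to a broader one. A minimal sketch, with the scope keys and the order-count cutoff invented for illustration:

```python
def pick_model(models, department, institution, min_orders=5000):
    """Choose the most specific model that has enough supporting data.

    models: dict mapping a scope key to (model, n_orders). Tries the
    department first, then the institution, then a global fallback.
    """
    for key in (("dept", department), ("inst", institution)):
        entry = models.get(key)
        if entry and entry[1] >= min_orders:
            return entry[0]
    return models.get(("global",), (None, 0))[0]
```

This mirrors the point in the answer: more local data buys finer resolution, while sparse departments still get a usable (if coarser) notion of “normal.”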

How does an organization analyze their alerting patterns to determine that your system can help?

It’s common knowledge. We don’t have to persuade customers that the current alert burden is too high and that they are ignoring most of the alerts. The challenge is to persuade them that it’s not necessary — they could do it differently and it could be a different experience for the provider. They find that hard to believe. One of the things that we do with most of our clients is take a little bit of historical data and show them what we find. This is the “aha” moment, because with most of the things we find, they were not aware it was happening in their own back yard. That easily triggers the “OK, I want this.”

How much of the capability that your system has was made possible by advances in AI, and where do you see AI finding a place in healthcare?

Our solution uses many types of algorithms, from the simplest statistical analysis to really robust AI with deep learning, neural networks, and all the buzzwords that come with it. We use the most sophisticated part of AI for specific use cases, one of them being to identify cases in which the patient receives the wrong meds. Either the physician clicked on the wrong patient, or the drug was given to the wrong patient.

Understanding the clinical context of the patient and the relevance of a specific medication to that patient’s profile is an extremely hard task. For several years now, we have been able to classify a medication as relevant or not relevant for a given patient. It doesn’t even have to be something dangerous. It could be a two-year-old male child who is ordered birth control pills. It wouldn’t kill him, but he definitely doesn’t need it, and it’s a complete outlier for that child. This is an extreme case, but there are a lot of simpler ones that are hard to detect by anything other than sophisticated AI. Our point is that we would rather use the simplest methodology that fixes the problem, but in some cases, you need something more complex.
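The extreme example above (birth control pills ordered for a two-year-old boy) can be caught even by a simple demographic-consistency check; the hard cases are the subtler ones that need the learned models. A toy sketch of the simple end, with the drug profiles and field names entirely invented:

```python
# Hypothetical drug profiles; a real system would learn relevance from
# prescribing data rather than hard-code demographic rules like this.
DRUG_PROFILES = {
    "ethinyl_estradiol": {"sex": "F", "min_age": 12},  # oral contraceptive
    "amoxicillin": {},  # broadly applicable, no demographic constraint
}

def is_relevant(drug, patient):
    """Return False when a drug's demographic profile contradicts
    the patient (wrong sex, or below a minimum plausible age)."""
    profile = DRUG_PROFILES.get(drug, {})
    if "sex" in profile and patient["sex"] != profile["sex"]:
        return False
    if "min_age" in profile and patient["age"] < profile["min_age"]:
        return False
    return True
```

The limits of this rule-based sketch are the point of the answer: most wrong-patient orders look demographically plausible, which is where the more sophisticated AI earns its keep.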

The use of AI in healthcare will find its place. It’s still struggling. We see very nice solutions in the imaging world, where companies identify risks in CTs or MRIs and surface them to the clinicians: hey, you have a pulmonary embolism, a CVA, or a critical event that you have missed. Put it on top of the file.

The fine line is understanding and comprehending that we are not here to replace the clinicians. We are here to help them make better decisions. We are not here to teach them medicine. We are not here to tell them what to do. We are just that safety net to make sure that they don’t type the wrong thing. This approach can grow into helping more with diagnosis and procedures and providing a better prescribing platform for clinicians, as long as we don’t even think or say that we can replace them or do their job, because that just doesn’t make any sense.

Where do you see the company in the next few years and the use of technology like yours in healthcare?

We have developed a unique engine that can be applied in different places in the industry. Our strategy on the business front is to partner with larger companies that have embedded solutions — in medical devices, decision support, or anything in the medication delivery space — where we can make their data smarter. We can make their systems and devices perform better. This is the path of growth for the company going forward. Baxter is one example. We have more coming, and the future is looking good.


