
HIStalk Interviews Charles Tuchinda, MD, President, Zynx Health

May 5, 2021

Charles “Chuck” Tuchinda, MD, MBA is president of Zynx Health, EVP and deputy group head of Hearst Health, and executive chairman of First Databank. Hearst’s healthcare businesses include First Databank, Zynx Health, MCG, Homecare Homebase, and MHK.


Tell me about yourself and your job.

I’m a driven physician who is hell-bent on making healthcare better. I want to figure out how things work and how to innovate, which applies to many things in my life. This weekend, as a random example, I actually tackled my first brake job and successfully replaced the brake pads on an old car.

I’m the president of Zynx and I still have some responsibility over FDB, and more broadly, additional responsibilities across Hearst Health. Zynx has been on a mission since 1996 to improve the quality, safety, and efficiency of care. We help people make better decisions that lead to better health through evidence. That’s something you see playing out in the world today.

How much of a physician’s decision-making can be directly supported by available evidence, and why does medical practice sometimes fall outside available evidence?

The body of knowledge will continue to grow, and with it the evidence that helps us think about what we need to do.

Let me come at it a few different ways. When you look at our process of reviewing and synthesizing evidence, we search across a number of different literature sources and filter the results based on the quality and type of study. Often we are looking at over 13,000 studies, so we read and distill them, then grade and prioritize them. From that, we generate a core body of knowledge that we call Zynx Evidence, which serves as the foundation for all of the clinical decision support that we create.
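As a rough illustration of the filter-grade-prioritize workflow described above, here is a minimal Python sketch. The study designs, grading scheme, and cutoff are illustrative assumptions, not Zynx’s actual methodology.

```python
# Hypothetical sketch of a filter-grade-prioritize evidence pipeline.
# Designs, grades, and thresholds are illustrative, not Zynx's real scheme.
from dataclasses import dataclass

@dataclass
class Study:
    title: str
    design: str        # e.g., "rct", "cohort", "case_report"
    sample_size: int

# Assumed ordering: stronger study designs rank higher.
DESIGN_RANK = {"rct": 3, "cohort": 2, "case_report": 1}

def grade(study: Study) -> int:
    """Grade a study by design strength, with a bonus for larger samples."""
    base = DESIGN_RANK.get(study.design, 0)
    return base * 2 + (1 if study.sample_size >= 500 else 0)

def prioritize(studies: list[Study], min_grade: int = 4) -> list[Study]:
    """Filter out weaker studies and sort the rest strongest-first."""
    kept = [s for s in studies if grade(s) >= min_grade]
    return sorted(kept, key=grade, reverse=True)

if __name__ == "__main__":
    pool = [
        Study("Large multicenter RCT", "rct", 1200),
        Study("Retrospective cohort", "cohort", 300),
        Study("Single case report", "case_report", 1),
    ]
    for s in prioritize(pool):
        print(s.title, grade(s))
```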

But if I step back from our process and think about healthcare overall, there’s just so much information, or I should say data, available now. The challenge as a clinician is that you have to synthesize it. There are so many competing interests. You are expected to handle a high volume of visits. You’re expected to practice with a high quality of care. You are measured on whether you can reduce readmissions or shorten length of stay.

As clinicians, we are expected to draw upon so much data and synthesize it so quickly. That calls out for partners, information, and tools to help you be the best version of yourself, to do the best that a clinician can do. In the future, we are going to see clinical decision support continue to advance, first to support the healthcare professionals and elevate their practice, and in the long run, to elevate and empower the average patient to make the best possible healthcare decision.

People talk about gaps in the knowledge base. There will always be gaps, because there’s a frontier of knowledge out there that is growing and expanding. But we live in an era when a lot of healthcare information can be captured, stored, and analyzed, so the body of knowledge is going to continue to grow. That will make it more important to understand what the standard is. What do we already know about how to do things in a better way?

How difficult is it for physicians to assign the proper weight to their personal experience versus someone else’s research that covers a large population?

It is challenging. I remember medical school very well. I went to Johns Hopkins and was infused with knowledge about what the research and evidence show, essentially defining the right standard of care, at least in the eyes of the medical school I went to. Then I went to the floor and started meeting with patients, trying to help people do what I believed was the right thing based on the way I was educated. That turned out to be a big challenge, getting people to do what is likely to be in their best interests for better health.

You also see that challenge with clinicians. Clinicians have different experiences. When they graduated from school, there was a certain level of knowledge and a certain practice pattern. The challenge is that clinicians and the patients they see influence what they think is the best way to practice. What’s tough is that there are always people out there doing more research, studying more people, and coming up with better ways. You have to look at that, synthesize it, make sure it’s right, and make sure it’s right for your situation. Then if you are constantly trying to improve yourself, you’re going to want to bring that into your practice and your day to day. That’s a challenge that has been described in the literature: it unfortunately takes a decade or more for some new knowledge, from the time it’s discovered, to be put into practice and benefit a large population.

It’s tough. And when you look at the differences in care and the disparities, it’s not only about knowing the difference between the standard of care and what actually happened, it’s also a lot about convincing people and changing minds and helping them access and make good choices.

Will the less-structured, more timely way that new research and clinical findings were disseminated during the pandemic influence the distribution of clinical information in the future?

Yes, absolutely. The pandemic highlighted the fact that reliable information is more important than ever. In the early days, you saw that the volume and velocity of information coming out had increased dramatically. Lots of headlines and a lot of observations. There was this urgent need for scientific or rigorous medical knowledge. You also saw public health entities trying to make decisions with the best available information they had at the time.

It was this nexus of, I want some good information, but I don’t know if it’s out there. Then a flood of information with unclear significance. That’s when it’s important to trust your process. Go back, look at the source, look at the study design, try to figure out if it’s rigorous. Once you feel like you have distilled a few things that work, the other challenge is getting it into practice. How do people apply it? How do you implement it into their workflow? The pandemic really highlighted that need. It’s a good and a bad thing.

In the early days of the pandemic, a lot of health systems sent some of their staff home. They became productive, worked on some change management type stuff where they said, hey, I’m home, I might not be able to go in at the moment, but I can work on updating the system, or I can figure out a protocol. In several health systems, we saw that people drove change at a much better and faster rate than ever before. That gives me a lot of hope, because if folks have the right information and are empowered to make a change in their practice patterns, they will.

Implementing standardized order sets was a contentious topic a few years ago. Now that the implementation dust has settled, what is the status and future of order sets?

The order set market has evolved dramatically, and Zynx has evolved to match it. We have been partnering with clients to serve their needs. The classic market, when EHRs were being deployed, was to populate the EHR with a lot of point-of-care CDS, your traditional order set, both a tool and content inside the EHR system. But now that people mostly have EHRs deployed, you see a shift to optimizing and updating the information you have. That means a greater need for collaboration software that drives your clinical teams to work together, to examine the changes they think they should put into place, and to make decisions and track an audit trail.

Zynx provides tools to help do that. We even have a platform where we can interrogate the configuration of an EHR and compare it to our content library to suggest spots where there might be gaps in care or vice versa, like some extra orders that you don’t really need that might be considered waste. Maybe they shouldn’t be done when you’re an inpatient, they should be done when you’re in clinic or in follow-up afterwards.
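The gap analysis described here amounts to comparing two sets of orders in both directions. A minimal sketch, with hypothetical order names and data shapes that are not the actual Zynx platform API:

```python
# Hypothetical sketch: compare the orders configured in an EHR order set
# against a reference content library, reporting gaps and possible waste.

def compare_order_set(ehr_orders: set[str], library_orders: set[str]):
    """Return (missing evidence-based orders, extra orders not in the library)."""
    gaps = library_orders - ehr_orders    # evidence suggests these; EHR lacks them
    extras = ehr_orders - library_orders  # configured but unsupported; possible waste
    return gaps, extras

if __name__ == "__main__":
    ehr = {"cbc", "chest_xray", "daily_ecg"}
    library = {"cbc", "chest_xray", "vte_prophylaxis"}
    gaps, extras = compare_order_set(ehr, library)
    print("Possible gaps in care:", gaps)       # {'vte_prophylaxis'}
    print("Possible waste to review:", extras)  # {'daily_ecg'}
```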

The new frontier for us is looking at clinical practice patterns, the actual ways that clinicians are taking care of patients. Our content team has written business logic rules to interpret that order stream and identify opportunities where clinical practice patterns may not match the standard of care or the evidence-based interventional suggestions. Those are things that we want to highlight as a way to drive clinicians to change their behavior and get better results.
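A minimal sketch of the kind of rule logic described here, matching an order stream against expected evidence-based interventions. The conditions, interventions, and rule structure are hypothetical examples, not Zynx’s actual rule content:

```python
# Hypothetical sketch: flag missing evidence-based interventions for a
# condition by checking the observed order stream against a rule set.

def check_orders(condition: str, orders: list[str]) -> list[str]:
    """Return expected interventions that do not appear in the order stream."""
    # Assumed rule set: condition -> interventions the evidence expects.
    expected = {
        "heart_failure": ["ace_inhibitor", "beta_blocker", "daily_weight"],
        "pneumonia": ["blood_culture", "antibiotic_within_4h"],
    }
    return [o for o in expected.get(condition, []) if o not in orders]

if __name__ == "__main__":
    flags = check_orders("heart_failure", ["beta_blocker", "daily_weight"])
    print("Opportunities to review:", flags)  # ['ace_inhibitor']
```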

What is the value of slicing and dicing the universe of aggregated data to allow physicians to do a “patients like this one” crowdsourcing-type review?

I would say that there is some utility to that, although I don’t know if that would be my go-to source of rigorous information to begin with. 

When I look at that type of guidance, I map it out in a way where I first want to look for any sources from well-known publications, from experts, from sources that I believe are free from bias with good, rigorous study designs and see if they have done their best to control and observe an impact related to an intervention. That is your traditional, solid, core, evidence-based recommendation. The reality is that there’s not an evidence-based recommendation for everything a clinician might do, and then you need to look for other ways to take care of patients and decrease variability. You might look for some expert opinion, and short of that, you might start to look at practice patterns that are aggregated.

The danger of going to practice patterns right away and crowdsourcing an intervention is that you are going to propagate common practice. Common practice is presumably OK, assuming it was a good thing to begin with. But it also means that people are going to be entrenched where they are. If there was a breakthrough or new discovery, that won’t be common practice. That’s why I wouldn’t say you go to common practice first. You would go to whatever the latest and most rigorous evidence suggests and try to change clinical practice to match. Short of that, go to the experts. And if you’re completely lost, then I would consider looking at what other people have done and what we know about this path in terms of helping people out.
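The fallback order he describes, rigorous evidence first, then experts, then aggregated practice patterns, can be made concrete with a small sketch. The source tiers and lookup functions below are hypothetical placeholders:

```python
# Hypothetical sketch of a source hierarchy: walk from the most rigorous
# source down to aggregated common practice, stopping at the first hit.
from typing import Callable, Optional

def recommend(topic: str,
              sources: list[tuple[str, Callable[[str], Optional[str]]]]):
    """Walk the hierarchy and return (source_tier, recommendation)."""
    for tier, lookup in sources:
        rec = lookup(topic)
        if rec is not None:
            return tier, rec
    return "none", "no recommendation found; review before acting"

if __name__ == "__main__":
    hierarchy = [
        ("evidence", lambda t: None),                       # no rigorous study yet
        ("expert", lambda t: "expert consensus protocol"),  # next-best source
        ("common_practice", lambda t: "aggregated pattern"),
    ]
    print(recommend("novel_condition", hierarchy))  # ('expert', ...)
```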

How should an expert’s gut feeling about what seems to work be incorporated into more rigorous, evidence-based recommendations?

My hierarchy would start with trying to find evidence-based recommendations that are based on the best studies. Short of that, I would go to experts, because they presumably specialize in the area and probably have comprehensive knowledge of the disease process or treatment protocol involved. Then I would put the common practice piece below that, because experts are outnumbered by generalists. My worry is that an expert who has studied this, who knows the cutting-edge work, may have the better way to do it, but it won’t show up if you use an algorithm that just sources common practice. If I didn’t have anything else to go on, I would probably look pretty hard, before just treating someone willy-nilly, to get a good recommendation.

It makes me think of “first, do no harm.” I’d rather make sure that the things I’m suggesting are sensible rather than just suggesting random things, which then might start to fall into the category of waste. It’s a hierarchy that I think most clinicians settle into when they practice. You saw it play out with the pandemic. We saw some early treatments look like they might be promising. I might even argue that they became common practice for a period of time. Then people studied them and realized, wait, this is no better than placebo. This is not leading to a better outcome. Those practices largely died out.

Artificial intelligence seems to be focused more on diagnosis rather than treatment, probably because the diagnosis endpoint is better defined. Do you see a role for AI in clinical decision support?

It’s really early days for artificial intelligence. I’m a huge fan of artificial intelligence, but I want there to be a lot of rigor in it. I worry a little bit about the hype around the shiny new object and the fact that it might sway people to try things before you really know how well they work.

When I look at AI in healthcare, one of the reasons we see it in the diagnostic area is that AI for imaging, in particular, is quite good. That’s built on a lot of imaging research that came from other industries, and when you apply it to healthcare, we get good results. There are thousands of studies that have been reviewed by humans and labeled appropriately, so when you train an AI system on that type of information, you can get and characterize the way it performs rather well.

When you look into other areas, especially around treatment and perhaps other diseases, it’s harder to know, because you want a large body of information to validate against. This is one of the topics that we track very closely at Zynx and across Hearst Health, because we want to really understand how well an AI algorithm might perform and how you can judge that. Do you judge it by knowing the makeup or composition of the AI algorithm, the layers of the neural network, or do you judge it by the input data that you gave it? When you look at the input data, do you want a diverse population of folks with a lot of differences, or do you want something more uniform?

All these things are still not quite answered. We don’t have a great standard to prove that an AI algorithm is rigorous and that it needs to work on a population that looks like this. I think we’re going to get there soon. We see that emerging in other areas. When you test new drugs, you want to test them on a specific population. They may vary by age or by comorbidity. We need to be doing that type of rigorous testing on AI algorithms. It’s early days, so we are getting a lot of tools implemented. But I’m hopeful that we’ll come up with a good process and then have really good, reliable tools to use.
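The drug-trial analogy suggests stratified evaluation: measure a model’s performance separately across population subgroups rather than only in aggregate. A minimal sketch, with a hypothetical model and strata:

```python
# Hypothetical sketch: compute a model's accuracy per population stratum
# (e.g., age bands) so that weak subgroups surface instead of hiding in
# an aggregate number.
from collections import defaultdict

def stratified_accuracy(records, predict):
    """Return accuracy per stratum; low-scoring strata flag reliability gaps."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for stratum, features, label in records:
        totals[stratum] += 1
        if predict(features) == label:
            hits[stratum] += 1
    return {s: hits[s] / totals[s] for s in totals}

if __name__ == "__main__":
    data = [
        ("age_18_40", {"x": 1}, 1),
        ("age_18_40", {"x": 0}, 0),
        ("age_65_plus", {"x": 1}, 0),  # the model may do worse here
    ]
    model = lambda f: f["x"]  # trivial stand-in "model"
    print(stratified_accuracy(data, model))
```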

What is the status of electronically creating and sharing a patient’s care plan, and the challenge of defining who of potentially several types of caregivers is quarterbacking the patient’s overall care?

We are proud that we were recognized by KLAS as being Best in KLAS this year for order sets and care plans. That’s a great honor, and we were rated very highly across all the categories that KLAS surveyed our clients for. We have over 1,200 clients and it’s growing. These health systems use the order sets and care plans to help their clinicians work more efficiently.

When you look at how it works at the point of care with care plans specifically, we help guide the interdisciplinary team on the assessments and the goals that they should set for each patient based on the disease condition and the severity of illness. Then we help them perform the right interventions, the tasks to drive that patient to heal and to do better.

Our future and our innovation work has been around translating a lot of those care plan items to patients themselves. We think that patients could be engaged in their care, and to some degree, do some self-care. That should be aligned with the care plan from the care team. Some of these interventions seem pretty straightforward, like make sure you show up for an appointment, make sure you assess a certain thing, know the goal that your care team has set for you so that you can follow up on that.

We think that by increasing the engagement and the participation of patients themselves, people get to better outcomes and are able to receive care in different venues, not necessarily only in an acute-care hospital setting. I’m excited about that. That’s a new area for us, where we tie the two together. We are looking forward to building that and seeing where that can lead us.

Do you have any final thoughts?

Practicing medicine is pretty tough today. There are a lot of competing interests between quality and volume and reducing readmissions and shortening length of stay. The challenge for clinicians is they are expected to draw upon more data and synthesize more things than they ever have, so there’s a need for tools.

I see a future where clinical decision support will continue to advance and help professionals elevate their practice. Ultimately this is going to make patients healthier, and we’re all going to benefit from it. I wish it were as easy as replacing my car’s brake pads. I mean, that would be great. But healthcare is complex, and a lot of different things factor into getting a good outcome. But I’m very hopeful.


