HIStalk Interviews Robert Clark, MD, Chief of Pediatric Critical Care Medicine, Children’s Hospital of Pittsburgh of UPMC
Robert Clark, MD is chief of pediatric critical care medicine at Children’s Hospital of Pittsburgh of UPMC. He is a co-author of the newly published article in Pediatric Critical Care Medicine titled “Evaluation of Electronic Medical Record Vital Sign Data Versus a Commercially Available Acuity Score in Predicting Need for Critical Intervention at a Tertiary Children’s Hospital.”
Tell me about yourself and the hospital.
I’m the chief of the division of pediatric critical care medicine at Children’s Hospital of Pittsburgh. I’ve been in that role since 2009. I’ve actually been at Children’s since 1992, when I started as a fellow. As part of the responsibilities of pediatric critical care medicine, we oversee the rapid response team, or emergency response team, for the hospital, meaning we respond to patients in cardiac arrest or patients with critical conditions.
What are the most significant information technologies that contribute to pediatric critical care there?
There is a ton of IT in terms of the EMR: computerized order entry, recordkeeping, and things like that.
The reason we gravitated to an electronic surveillance system is that we rely heavily on information technology to keep tabs on what are very complex patients with a lot of data. In the pediatric intensive care unit here, we’re taking care of the sickest patients in western Pennsylvania. There’s a lot of information. We can have hundreds of orders on a single patient in a day and 100 lab values per patient a day. If you add in vital signs data and things like that, there are megabytes of information that need to be filtered and processed. If we tried to do that with just our trainees and nurses and physicians, we would be in a sea of data without direction. We utilize IT quite a bit.
Your recent journal article concluded that PeraHealth’s Rothman Index surveillance system gave fewer false positives than other types of monitoring. Is it a tough balance to get enough data sensitivity to tell you something you didn’t already know versus issuing false alarms?
It is a challenging balance. The key, really, is that we don’t want to take away the human element of things. A lot of the times when a kid is really sick, it doesn’t take the Rothman Index or a fancy artificial intelligence-based system to figure that out. You can take any competent nurse or competent physician or healthcare worker and you can just look at a kid and know that they’re very, very sick.
The issue comes about, from my perspective, when you have children you can’t just look at and say something’s going on, or when something happens unexpectedly. Those are the ones where I think the surveillance technology is really, really important.
We moved from an old hospital to a new hospital in 2009 or so. Now there’s a 200-bed hospital and a 36-bed intensive care unit. The intensive care unit essentially has a footprint that’s half a city block. It’s really hard to keep track of what’s going on with a patient on one side of the block and a patient on the other side simultaneously.
Based on everyone’s level of experience and training, you know which patients you need to keep an eye on, the real sick ones, and they’re right smack dab in the middle of your radar. But what we’re trying to go after with the surveillance system is keeping track of everyone else while we’re focusing in on some of the kids that are really, really sick. The last thing you want to do with a system like that is to overwhelm people with false alerts. You don’t want to be flying over to Bed 1 from Bed 36 when Bed 1 is actually just fine.
It’s essentially a complementary system that doesn’t take away the human trigger. The human trigger is very, very sensitive in picking up these things, but you can’t be everywhere all at once. The addition of the Rothman Index, the electronic trigger system, really complements it by keeping track of and synthesizing data on everyone in the hospital, in addition to the ones that we’re already focused on.
The kids that have already got our attention, we don’t need a surveillance system for that. We need a surveillance system for the ones that are out on the periphery, not on our radar, not expected to have any sort of event that requires any interventions. That’s where the complementary system is valuable.
There are shortcomings related to the sensitivity of the Rothman Index right now, but I think they are offset considerably by the fact that kids with the lowest Rothman Index you could look at and say, whoa, something’s not right here. The two in combination will work really, really well.
But that said, I know the folks at PeraHealth are in agreement with that. We’d rather boost the sensitivity to increase the performance of the system. That is where we’re still working with the folks at PeraHealth to course correct as we put this in place and find the real sweet spot, being able to detect instances where we may need to intervene without generating false alarms. I don’t think we’re there yet, but we certainly plan on working on it.
Could similar triggers be used to monitor populations, where data analysis might turn up non-inpatients whose data points indicate a potential need for intervention?
I am personally just focusing mainly on the hospital aspect of things right now, but I’ve had conversations with the folks at PeraHealth. Essentially what we want to do is put child health in a cloud.
Right now, if there’s a child that has an issue on the floor, my iPhone goes off with a little message that someone needs to take a peek at this child. This is really the first step in the surveillance system, focusing on patients that we knew needed some type of intervention.
But you can imagine expanding this out to have child health in a cloud. Imagine someone in a clinic in Johnstown, PA that is integrated with the EMR through the western Pennsylvania children’s health system, and a patient there has a certain combination of factors in their record. Somebody in the system gets a little ping: so-and-so, check out this medical record, or maybe give the folks a call, or whatever it is. I think that’s really the future of all this.
It’s kind of a needle-in-the-haystack thing. We wanted to focus in on the patients where we knew there would be a signal, so we essentially started there, with kids that have a cardiac arrest or a critical intervention. But again, you can expand that out. You can even imagine this child-health-in-the-cloud kind of thing where not only do I get an alert when this happens, but the pediatrician, the primary care provider for the child, also gets an alert. Then in the future, if the parents can’t be in the hospital with the child, maybe they get an alert, too.
I think it’s really potentially powerful. Not being really deep in the IT world, maybe this is already going on, I don’t know. But the first step for western Pennsylvania is that we start with the sickest kids in the hospital in the healthcare cloud and then try to expand that to the whole hospital, which is what we’re doing now, at least with the surveillance.
We’re planning to finesse this into multiple other areas. Not just for emergency response, but to say when a child’s ready to go home, when a child shouldn’t go home, notifying pediatricians in the community about the status of their children that are admitted here. Eventually you can envision being able to notify families of important things going on.
You co-authored a controversial 2005 journal article that concluded that the implementation of Cerner increased pediatric mortality at the hospital. What has changed in the past 10 years?
I think the biggest lesson that we learned from that is that the IT people need to talk with the folks in the trenches, honestly. There was this real strong desire to roll this out hospital wide, which was OK for 90 percent of the hospital. It would be absolutely fine and wonderful. You can do little course corrections and you can finesse things on the fly. But we saw a certain population where we really thought that they should do a sequential rollout in the less-acute areas before moving into the intensive care unit.
Had we been in from the beginning, I think we wouldn’t have had these issues. There’s a learning curve with everything and there is a learning curve with implementing computerized physician order entry. We learned a lot about how not to do it. We just thought it was important that we report it because we wouldn’t want other hospitals to go through the same mistakes.
When this new surveillance system came about initially, it was mostly IT and the CMIO working on it. But they contacted potential end users like myself. We said, well, wait a minute, we don’t even know if we will use this. That’s why we wrote the most recent article, honestly. Before we roll this out, before we start buying into it, I want to see how it performs.
After talking with the folks at PeraHealth about the Rothman Index and with our IT people, I sat down with our fellows. We said, let’s do a retrospective study, look at every kid that’s had a cardiac arrest or a critical intervention, and let’s look at the performance of the PRI and see if it’s really something that’s impactful. Lo and behold, it was. It wasn’t perfect. But this was just asking, how did it perform over the last two years when no one was paying any attention to it? It performed pretty well. It performed better than just using the existing set of electronic data that we collect.
It was, again, the launching pad. We’re planning a prospective study where we’ll roll out the system to see if we can get children to where they need to be in the hospital at the right time. This was the lesson learned in 2005: get the end users involved, get the people that actually are in the trenches to participate in the development of these sorts of things. This is a good example of why it works really well.
I am skeptical by nature about PRI this or PRI that. So we did the study and it was very objective. There was no bias. We didn’t get a penny from PeraHealth or Children’s Hospital, either. We just took the data, analyzed it, and put our own statisticians on it independently. Like I said, it looks like the performance is decent enough, and the best thing about it is that I think we can make it even better with a few course corrections here and there.