
HIStalk Interviews Peter Pronovost MD PhD, Johns Hopkins University

February 11, 2008 Interviews

Peter Pronovost

I was hopping mad when I read that an obscure HHS group had put an end to Peter Pronovost’s US projects involving using simple checklists like “Wash your hands, wear a mask” to remind physicians to help prevent hospital infections, especially since those projects continued in other countries and absolutely saved lives when used. The project’s data collection, even though it did not involve identifiable patient information, was claimed by the Office for Human Research Protections to violate patient consent requirements (notwithstanding the fact that the project was funded by AHRQ, the government’s research and quality agency). A fabulous article in The New Yorker is worth a careful read before proceeding here. Peter is the medical director of the Center for Innovation in Quality Patient Care and a professor in the Department of Anesthesiology/Critical Care Medicine at Johns Hopkins University’s School of Medicine. Thanks to Peter for explaining the project to HIStalk’s readers. This is some of the most exciting work I’ve heard of in the elusive task of getting proven research into practice quickly and inexpensively.

Let’s start out with some background about you and your work.

I’m an intensive care physician and anesthesiologist. I did a PhD in clinical research and, because I had free tuition, I did a joint degree in health policy and management, really focusing on quality of care. My emphasis has been on bringing more robust clinical research tools to quality improvement. In other words, the belief that if you’re going to make inferences that care is better, they have to be accurate and truthful, and you have to do that in a very practical way.

I’m trying to find the sweet spot between what’s scientifically rigorous and what’s practical. That’s sometimes no easy feat. We’ve been looking at very practical ways, or applied research ways, to improve quality of care. The way we do this is that Hopkins is our learning lab. We package programs that we think can improve quality of care. We implement and measure them at Hopkins. If they work, we make them scalable and share them with the broader healthcare community — in this case, with the State of Michigan.

We packaged a program to reduce catheter-related bloodstream infections. The results were just phenomenal. We nearly eliminated these infections — saved the state over $200 million a year, a tremendous number of lives. So I think the model of doing rigorous quality is key.

One of the things that we’re struck with is that biomedical research in this country needs to be broadened. It’s a bit too myopic in that we view science as understanding disease biology or finding effective therapies, but then whether we use those therapies, or how to deliver those therapies safely and effectively, is “the art of medicine”. We’re not really looking at that. What we’ve been doing is to say, “Let’s apply the same rigor of science to the delivery of care so, at the end of the day, we can say whether care is better or not.”

Obviously, a lot of folks will want to talk about your “list method.” What was your reaction when you heard that the HHS Office for Human Research Protections decided that it was unethical and said that the program had to stop?

Shocked. I had submitted it to our IRB, who reviewed it and said, “This is quality improvement, not human subjects research,” because we’re not collecting any patient-identifiable information. When they came back to say, “No, you should have had this,” it was quite chilling. I don’t know if you saw their latest statement, where they seemed to say, “You can go ahead and do Michigan now, but if you do any of the quality improvement work and you collect data, that’s research.” The implications of that for any kind of management effort are just profound.

Every hospital does some sort of ongoing quality studies, chart reviews, audits …

If you read their statement, it would seem that all of those qualify as research.

Nobody’s ever heard of that office. Is their ruling final or can HHS come in and say, “You’ve overstepped your limits”?

This hasn’t been played out yet, so I think they’re still sorting out what’s going to happen.

Wasn’t it true that your original work was funded by AHRQ?

Correct.

So you’ve got one government agency paying you to do the work and the other one that says it’s got to be stopped.

Exactly right. Go figure. And you have the Secretary of Health and Human Services, who publicly said that he is for value-based healthcare purchasing, efforts to improve quality and reduce cost – exactly what this program did. This program is like the poster child for what he’s advocating for.

It makes you wonder whether the government’s role is really protecting people. If you asked one of those patients, I’m pretty sure they would say, “Yes, please use the list.”

Exactly. It’s Mom and apple pie. So, who knows. I think the field erupted with concern over OHRP. There are so many e-mails to Secretary Leavitt or Congressmen saying, “This is absurd. What are we going to do about this?”

Let’s hope that reason will win. Tell me how you came upon this seemingly simple idea of consolidating information into a list.

I’m a practicing doc. Most evidence summaries in medical care are like these long 100-200 page guidelines that are exquisitely detailed and summarize the evidence, but they present it in what’s called a series of conditional probabilities, or if-then statements, like, “If a fever, yes; if white count, OK.”

The problem is nobody uses them. I read a book by Gary Klein called Sources of Power, where he looked at how people in ICUs and firefighters and fighter pilots think under pressure. What he says is that no one thinks in conditional probabilities. They stick their head in the data stream and they see patterns. I reflected on that and I said, no wonder we never use these things. It’s not how our brains work. Our brains can only have one conditional probability at a time.

I was studying the aviation world and safety and how they made their progress with checklists, and said, that’s it, we need a checklist. OK, let’s take this 200-page guideline and summarize it. Given the data from our telephone numbers, the most things we can remember are five, plus or minus two. That’s why our telephone numbers are seven digits.

I said, OK, let’s take these guidelines and pull out the five, plus or minus two, strongest interventions for reducing infections that have the lowest barrier to use, and word them as behaviors. Behaviors are easier to fix than vague statements. We pilot tested it at Johns Hopkins. The results were quite dramatic, we packaged it into the program, and the rest is history.

I’m sure there’s more to it than, “Here’s a piece of paper with some stuff on it”. How do you operationalize the list and can you replicate that into other types of interventions?

Absolutely. Summarizing a list is one thing. Getting people to use it is a whole other matter. That requires a behavior change. We worked on giving people strategies to say, “OK, now that you have this evidence, how could you make sure every patient gets this evidence in your hospital?”

We gave them strategies, like standardize what you do. Create independent checks for things that are important, and when things go wrong, learn. So we said, “There are about eight different pieces of equipment that you need to comply with these CDC guidelines — caps, gowns, masks, gloves. Go store all the equipment in one place. Eight steps down to one.” And people really loved that.

We then said, as an independent check, docs, when you’re putting in these catheters, nurses are going to check to make sure you do it. So, nurses, we want you to assist docs and make sure that they do all these things. When we first said it, the nurses said, “Hey, my job isn’t to police the doctors, and if I do, I’m gonna get my head bit off.” And docs said, “You can’t have nurses second-guessing me in public. It looks like I don’t know something.” To which I said, “Welcome to the human race. You don’t know things.”

I pulled all the teams together and said, “Is it acceptable that we can harm patients here in this country?” And everyone said, “No.” So I said, “How can you see someone not washing their hands and keep quiet? We can’t afford to do that. In the meantime, you can’t get your head bit off, so docs, be very clear. The nurses are going to second-guess you. If you don’t listen to what they say, nurses page me any time day or night, they’re going to be supported. There’s really no way around this. We have to make sure patients get the evidence.”

When it was presented that way, the conflicts melted away, because the issues became not ones of power and politics (who’s right, I’m a doc and you’re a nurse) but ones about the patient.

Is it hard to assemble an inarguable body of concise items to create the list initially?

Let me tell you what our vision is. It does take some effort. It takes probably about a year and roughly $300,000 to produce a program. What that means is going from a concept, like “I want to eliminate MRSA,” to summarizing the evidence; developing practical ways to measure that in the real world that are valid and sound; developing the performance measures; getting a database in place; doing what I call the technical work.

We view it very much like a form of pipeline. We have a process to say, “Let’s go from idea to program. We pilot test it at Hopkins, and then we launch it to the broader community.” It’s a very scripted process now. We’ve become more efficient at doing it, and we absolutely need to be, but we have a very clear program of how to translate evidence into practice. The concerning thing is that there’s no darned funding for this. NIH doesn’t fund this kind of work. AHRQ’s budget is so anemic that it can’t really do anything. So we end up with all these therapies that we know will work, but patients get them about half the time in this country.

So does the work only have to be done once, and then you can basically pick it up and drop it in anywhere?

Generally, it’s so inefficient and so ineffective for every hospital to do their own programs — to do what I call the technical work. Now, these programs require both technical work and what we call adaptive work, or culture change. The culture change is all local. So we summarize the evidence in the checklist and then we go into a hospital and say, “OK, given your own culture and resources, how do you make sure every patient gets this?” And they modify it a little bit, but the technical pieces (the evidence supporting the checklist, the way to measure if it works or not, the data collection) are all standardized, as they should be. Those are the science pieces that the central group develops. But once you develop them, there’s virtually no marginal cost to put it in a thousand or ten thousand hospitals.

Other than grant funding, wouldn’t there be other sources of funding, either private, or a hospital that gets so much benefit that they’ll pay for it and share it?

Certainly there’s some philanthropy, now that people have become interested in this with the New Yorker article, but unfortunately there hasn’t been much federal funding in it. I believe insurers ought to be funding this because they get a windfall from it. There’s no doubt they reap substantial benefits.

This is a non-profit effort that you’re leading, right?

I’m an academic doc at Johns Hopkins. Exactly right.

Nobody’s making money off this? Basically, you’re looking for somebody to cover enough of the costs so you can roll this out, in essence, for free?

Exactly right. I’m an academic doc, so any grant I get just offsets my salary. No one’s making money off of this.

Surely you’ve gotten a ton of publicity?

There certainly have been a lot of people who say, “Hey, I’m interested in this.” We’re certainly working on a number of angles. There needs to be more than a vision. There needs to be a strategy that says, OK, let’s take pediatrics, let’s take emergency medicine, let’s take OB, let’s take surgery. Let’s make sure we develop a model that translates evidence into practice. We just have to find some financial support to make it happen.

I guess the cynic in me always says that healthcare’s pretty distinctly profit-seeking in most areas. If there’s no money to be made in better treatment …

I’ve had people who want to make money off of this hounding me. I’m getting called by everyone who’s saying, “You’re onto a goldmine here. You saved the state $200 million. It costs $500,000. That’s a great ROI. Let’s go make money on it.” I personally think that some of these things … this is a not-for-profit tool. Since the initial thing was funded with public dollars, it ought to be a public good that we put out broadly.

Most of my readers are information technology people. I know you’ve done other work besides just “the list.”

We did this kind of naively. I think there’s huge information technology potential. One is automating the checklist into the work process. We had a very hard time monitoring compliance because it was paper-based; people lose the forms. There’s enormous opportunity. I’m not an IT guru. That partnership, I think, we need to make stronger. We need to partner with IT people, because this could be an automated checklist on a handheld, or in a variety of formats, used at the point of care.
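As a rough illustration of the point-of-care automation described above, here is a minimal, entirely hypothetical sketch (not any vendor’s product) of a checklist object that could track compliance as a line is placed. The five items loosely follow the published central-line checklist elements, but the exact wording is illustrative.

```python
# Hypothetical point-of-care checklist sketch; item wording is illustrative.
from dataclasses import dataclass, field

CENTRAL_LINE_ITEMS = [
    "Wash hands",
    "Clean skin with chlorhexidine",
    "Use full-barrier precautions (cap, gown, mask, gloves, drape)",
    "Avoid the femoral site if possible",
    "Remove unnecessary catheters",
]

@dataclass
class ChecklistSession:
    items: list
    done: set = field(default_factory=set)

    def check(self, item):
        """Mark one step as verified."""
        if item not in self.items:
            raise ValueError(f"unknown item: {item}")
        self.done.add(item)

    def missing(self):
        """Items not yet verified -- the nurse's cue to speak up."""
        return [i for i in self.items if i not in self.done]

    def compliant(self):
        return not self.missing()

session = ChecklistSession(CENTRAL_LINE_ITEMS)
session.check("Wash hands")
print(session.compliant())  # False until every item is verified
```

Because compliance is a method call rather than a paper form, the same object could log each session to a database, solving the lost-forms problem he mentions.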

The other information technology thing that’s striking is, when we go into these large hospitals and ask what their rates of infection are, virtually none of them have the data stored in a queryable database. It’s pathetic. One of the things that we did in this Michigan project was build Web-based data entry. They put in each month the number of infections and the number of catheter days so we can calculate the rates. We made it scalable so you could click and see what the rate was in ICU 1, in all of your ICUs, in your hospital, or your health system, or the whole state.

So we created some architecture to underlie this. It was really simple. And hospitals loved it because, for the first time, they had the data in a real-time, scalable database. It just shows how rudimentary our clinical information systems for quality data are in hospitals. Even at a hospital like the University of Michigan, they’re not stored. We haven’t invested in a database infrastructure to do these things in a scalable way.
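The rate arithmetic behind that kind of Web-based tool is simple. Here is a minimal sketch (with made-up numbers, not Michigan data) of how monthly submissions of infections and catheter days could roll up into infections per 1,000 catheter-days at the unit, hospital, and state level:

```python
# Sketch of the rollup described above; all figures are invented examples.
from collections import defaultdict

def infection_rate(infections, catheter_days):
    """Infections per 1,000 catheter-days, the conventional CLABSI unit."""
    if catheter_days == 0:
        return 0.0
    return infections / catheter_days * 1000

# Hypothetical monthly submissions: (hospital, icu, infections, catheter_days).
reports = [
    ("Hospital A", "ICU 1", 2, 400),
    ("Hospital A", "ICU 2", 1, 600),
    ("Hospital B", "ICU 1", 0, 500),
]

# Roll up at any level -- unit, hospital, statewide -- by summing the
# numerators and denominators first, then dividing.
totals = defaultdict(lambda: [0, 0])
for hospital, icu, inf, days in reports:
    for key in ((hospital, icu), (hospital,), ("statewide",)):
        totals[key][0] += inf
        totals[key][1] += days

for key, (inf, days) in sorted(totals.items()):
    print(" / ".join(key), round(infection_rate(inf, days), 2))
```

The key design point, which matches the scalable click-through he describes, is that rates are never averaged across units; raw counts are summed at each level and the division happens last.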

I’m just speculating, but let’s say a big systems vendor came to you and said, “We’ll underwrite five of your programs in return for the ability to distribute them, either exclusively or not.” Do you ever see that happening, where a vendor would maybe fund some of your work?

I have. A couple of the big health IT vendors have come. I think that would be great support. You can see that these things are easily built into an information system. It’s crazy not to. Instead of having all these pieces of paper around, you click on “Central Line” and here’s the central line checklist. If I’m doing palliative care, here’s the palliative care checklist. So, absolutely, I think there’s great potential for that.

The data management sounds simple, but there are very few hospitals, if any, frankly … I can tell you about large systems that have won awards for reducing infections. When I say, “So what are your infection rates?” they say, “I don’t know,” or, “It’s stored on this piece of paper or in an Excel file.” We haven’t invested in data management for quality reporting and we desperately need to.

There are two key success factors for this project. One is that it was evidence-based, so the interventions rest on sound evidence. But two, we had valid measures, so docs believed the data. This wasn’t marketing, like so many quality improvement projects where it’s “Come look how great I am,” but the emperor has no clothes: the data has no credibility because there’s no quality control, the quality is poor, and the inferences about whether care got better are probably incorrect. Docs believed this because they could say, “Yes, it’s a standard definition. Here’s the data. You can look at how much missing data you have. Here’s the data quality.”

In many senses, we created a monster in Michigan, because now there’s a hunger in these hospitals for a pipeline, but we don’t have the infrastructure to deliver it. The docs are saying they love this approach: “Peter, you’ve transformed the state.” The hospital CEOs love it. You have their docs and nurses engaged in quality. The results are good. They’re all excited. So what’s next? Could we do the same model for VRE or MRSA, for palliative care and sepsis, for emergency medicine and pediatrics? We certainly could, but we don’t have financial support. We have the model to create this pipeline. We’re working on it. We just launched, funded by the MHA, a safe surgery project that uses the same model. We’re going to be looking at safety in surgery with some checklists and things like that.

How many of these do you think there could be? Are there enough solid facts?

Hundreds. Think about it. Stroke care, headache care, acute MI care, arrhythmia care, asthma care. Our brain can’t remember all these things, so the medical community responded by making these 200-page evidence summaries, but nobody thinks that way, so they’re not used in practice. The simple checklist approach conforms with how we think. I don’t want to trivialize it, because the reality is that summarizing 200 pages of evidence into five checklist items that are worded as behaviors, practical yet scientifically sound, takes some trial and error.

Finding that sweet spot is a big part of our key to success. It’s what our shop does well: all of our people are clinicians, but trained in research methods. We know the biases, the evidence, and the clinical realities, and we try to hone in on that sweet spot. Inevitably we get something wrong, and that’s why we pilot test and revise. So what you serve up is ultimately very practical, very scientifically sound, and usable in a variety of types of hospitals.

The biggest problem in medicine is probably getting stuff out of journals to the bedside. Even if this was short term, it seems there’s a lot of opportunity to use this as a vehicle to push out recent findings.

Exactly right. We could translate evidence into practice quickly. The investment, from what you see, is trivial. You can use it throughout the whole world. We have formed a partnership with the World Health Organization to help put these things out more broadly.

The implication is that if the list works, the doctors were doing it wrong up until they had that tool. So basically, are they acknowledging that they’re just overwhelmed and can’t do as good a job unless they have some reminders?

I think what we say is, sure, they were part of this. What we’ve done with this is create a system. So yes, they’re human. Their brains don’t remember everything, just like mine or yours don’t. So what you’re alluding to, and what I saw, is that a pre-condition for using a checklist is the humbleness to say, “I’m not perfect.”

Healthcare wasn’t there five years ago and perhaps some physicians still aren’t there now. What we’ve shown is, when you accept that, like in anything in your life, when you acknowledge a shortcoming, it’s very liberating. You say, “I could use this aid.” And we changed the system to make it easier.

The chlorhexidine that I told you about reduces infection risk by half. But most of the central line kits didn’t have that soap. The doctors and nurses didn’t know how to change the purchasing to get it. So I sent a memo to the CEOs of the hospitals in Michigan and said, “There is a soap called chlorhexidine that cuts infections by half. It costs pennies. Please make sure it’s in all of your central line kits. I’m going to e-mail you back in a month to make sure you did it.”

I have no authority over them, but what I found was that, when we did focus groups with them, they all knew safety was a problem. They were all committed to doing things to improve it, but they didn’t know what to do, and most of them were too scared to say so, because you don’t get to be a CEO without having answers, right? I said, “OK, I’ll make it easy for you. I’ll send you a task every month. A really concrete task, and you go do it.” One of the tasks was putting the soap in. Lo and behold, a month later, the whole state had the soap in.

You’re an anesthesiologist by specialty. I would still argue today that the most dramatic quality improvement that’s ever been done, in any area of medicine, was when anesthesiologists got together and said, “Look, the risk of general anesthesia in surgery is absurd. We’ve got to make it better.” How did that come about, and are the same sorts of roadblocks that the anesthesiologists figured out how to get around going to have to be overcome again with the rest of medicine?

What allowed that discussion was that humbleness to say, “We make mistakes. We’re not perfect.” A big part of our work was getting docs to reclassify harm. Most people put harm in what I call “the inevitable bucket.” Things happen because you’re sick or you’re old or you’ve had a big operation or you’re really young. That “bad things happen” kind of colloquialism. What we did is to say, “No, I think a lot of that is in the preventable bucket. Let’s reclassify it.”

When we did these infections, docs said, “We’re at the national average and these are the people infected and there’s nothing we can do about it.” I said, “I don’t know if we can do something about it, but what I do know is that we’re not using these five central evidence-based things in all patients. Let’s put a system in place where every patient gets them and let’s see where these rates go. I may be wrong and they may stay exactly the same, but my hunch is most are preventable. So can we agree that this evidence is strong and that we’re going to create a system where patients always get this evidence, because we owe it to them?” Of course, docs agreed to that and the results were breathtaking. It really opened them up to say, “Wow. Maybe most of these are preventable.”

You also mentioned the airline industry, where early pilots were free spirits who eventually saw the benefit of conforming to accepted rules. Does the same psychological approach that it took to get pilots to give up what they perceived to be their independence need to be applied to equally headstrong physicians?

Exactly right. That’s the tension that we have. How much evidence do I need to give up my autonomy? We’re still uncertain about that. As an industry, healthcare is grossly understandardized compared to aviation, where pilots have to use checklists or they won’t be flying. Healthcare is still very much like the Wild West, or like Chuck Yeager in The Right Stuff, where we have this cowboy mentality, and we’re just beginning to accept that standardization is a key principle in making care safe. We need to do that. I think we have, especially among the younger generation of physicians, broad acceptance that they need to standardize. Where the field of quality has to mature is in asking, “How much evidence do I need before I take away your autonomy or, at least, put some restraints on it?”

I think you did an article, study, or consulting work involving computerized physician order entry, and there were some skyrocketing error rates that occurred after implementation. What was your conclusion from that, since I’ve got a lot of technology readers?

What we saw is that after the implementation of POE, errors went up dramatically. Though I think that publication surprised healthcare workers, it really shouldn’t have. We learned from aviation and other industries that any time you change a system, you may defend against some errors, but you will inevitably introduce new ones. This always happens. You’re going to create new risks.

I think healthcare approached POE perhaps naively, in that we simply sought to replicate the paper world in doing work electronically. Even the forms are alike. We want to make it look the same way. What that does is introduce new errors that weren’t there. So you’ve substituted handwriting errors for what I call choosing-one-from-many errors. Most physician order entry systems have drop-down lists because we have ten different doses of morphine. We haven’t standardized those yet. It’s a huge issue. We need to.

So predictably, some people are going to click the wrong box. It’s guaranteed. It’s part of human nature. It’s cognitively predictable that they will click the wrong box. Or we’ll have other types of errors, so you’re substituting new types of errors. We probably hadn’t reflected enough on how to defend against those. We were so focused on learning the technology and replicating what the paper workflow looks like that we didn’t simulate and say, “I’m going to introduce this whole bunch of hazards; how am I going to defend against them?”

And much of the decision support that really would’ve benefited from these technologies wasn’t part of the initial systems. It was developed later. That’s not to say I don’t believe in technology. I think POE is a great tool, and it needs to be done, but we have to do it wisely, with eyes wide open: any time I put something in, I’m going to introduce new errors, so let’s try to proactively identify them so we can defend against them.

The second significant mistake is that we underinvest in training and support for these systems. Learning a system takes a lot of ongoing training, support, and risk reduction. So, as I introduce the system and see a new hazard in real time, how am I going to fix it and defend against it?

One of the absurdities that I see with POE now is the amazing amount of waste and ineffectiveness of having every hospital home-grow its own decision support tools for these systems. So Hopkins, the main hospital, is spending thousands upon thousands of person-hours designing its own order sets and decision support tools. If you add that up across the six thousand hospitals in the US that are doing this, the collective cost is outrageous. It would almost be like each air traffic control center developing its own technology and systems and not working together.

So somehow, I think, the industry needs to begin to say, we have to work smarter. It’s inefficient and ineffective for everyone to be doing their own thing with these tools, because good decision support takes a lot of work. It’s just like a curriculum or a good safety program. We’re going to break the bank if every hospital has to invest in hundreds of teams of people developing their own. But perhaps our inability to do that is emblematic of the cowboy mentality: we can’t get the docs in one institution to agree, let alone talk among hospitals. It says how understandardized we are. You don’t want every airline or every pilot developing their own checklist and saying, “No, my checklist is ABCD. Yours is this.” There’s an industry standard.

My audience is mostly executives and informatics people. Is there any message you’d like to leave them with as far as informatics and technology in healthcare and error prevention?

Sure. I think the most important message is that no one group can do this alone. There needs to be greater partnership between clinicians, information technology people, and methodologists or safety experts or measurement people, so that we can put together programs that help clinicians use evidence in interventions and evaluate the extent to which they actually improved care. That’s going to require the collaboration of all three of those groups.




Currently there are 6 comments on this article:

  1. Great interview! Insightful. Humbling. Invigorating, yet discouraging. People that don’t read this blog are missing out on some real information.

  2. RE: A Scott Holmes…if you can’t find Peter Pronovost’s email on the web, or take a stab such as pprovenost @hopkins.edu, then it is a good thing you are a financial management CEO…

  3. here you go…2 minutes on the web (while working)

    Personal Data
    The Johns Hopkins University School of Medicine
    Department of Anesthesiology and Critical Care Medicine
    600 N. Wolfe Street, Meyer 295, Baltimore, Maryland 21287-7294
    Phone: (410) 502-3231 Fax: (410) 502-3235 e-mail: ppronovo@jhmi.edu

  4. RE: Dr. Pronovost’s comment on Home grown Decison Support systems:

    DiagnosisONE to date has the greatest latitude and longitude of any clinical decision support, reflecting over 300 physician years and content worthy to challenge TheraDoc. There are not many comprehensive CDS products out there that could attain Stage 7 of EHR interoperability with the lite CDS found in many HIS system vendors, which do not come close to 20,000 – 30,000 rules-based expert syntax.

    CEO Dr. Mansoor Khan was kind enough to stop off in the Middle East for one day from Boston to Pakistan to present a two hour presentation of DiagnosisONE. Dr. Khan is off to meet with the Prime Minister of Pakistan to finalize preparation for his pak-PHIN – Pakistan Public Health Network under CDC contract. See the link: http://www.dailytimes.com.pk/default.asp?page=2008%5C02%5C10%5Cstory_10-2-2008_pg7_27

    N.B. – I am in awe of the interview with Dr. Pronovost!






