Time Capsule: Actual vs. IT-Measured Quality: Giving Data the Benefit of the Doubt When Money’s On the Line
I wrote weekly editorials for a boutique industry newsletter for several years, anxious for both audience and income. I learned a lot about coming up with ideas for the weekly grind, trying to be simultaneously opinionated and entertaining in a few hundred words, and not sleeping much because I was working all the time. They’re fun to read as a look back at what was important then (and often still important now).
I wrote this piece in October 2007.
Actual vs. IT-Measured Quality: Giving Data the Benefit of the Doubt When Money’s On the Line
By Mr. HIStalk
It’s inevitable that hospitals and providers will someday get paid more or less based on how they perform on quality measures. Smart people will create a list of clinical actions that supposedly measure quality, or at least serve as a proxy for it. Follow those standards and you’ll get a bonus (or, for you fellow pessimists, avoid a penalty).
Coming up with standards is hard. Medicine keeps reminding us that it’s an art and not a science. Patient outcomes don’t always bow down obediently to even a well-designed medical cookbook (if they did, all doctors would already be treating patients the same). And if you start paying hospitals to give aspirin for heart attacks, you’d better make sure it adds value.
Still, at least for common chronic diseases, the standards are starting to become clearer and more defensible. Widespread use will prove or disprove their value. They can always be changed to reflect new knowledge.
Once the standards are in place, what’s left sounds easy: just sift through reams of electronic information to see how well providers have followed them. Then, write those checks. However, an editorial in the current issue of JAMA reminds us that data standards are poorly defined.
I don’t think providers will cheat, but I think they will err on the side of getting paid when the information is murky. For example, heart attack patients who smoke are supposed to get advice on stopping. Somewhere in the digital soup lives a data bit. It gets turned on when a nurse checks off a “smoking cessation education offered” item.
But what does that check mean? (Or, as the geeks say, what's the metadata?) Does the nurse check the box only when she's done a bang-up job of patient education, including having the patient demonstrate their understanding? Or does a "smoking cessation" item pop up from an order set, which creates a task, which creates a "click here to make this item go away" entry on the flowsheet?
Reminder: you get paid for checking the box, not doing a wonderful job.
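To make the metadata problem concrete, here's a toy sketch (the field name is my own invention, not any vendor's schema) showing that the stored bit is identical no matter which workflow produced it:

```python
# Two very different workflows, one identical data bit. The field name is
# hypothetical; the point is that the stored value can't distinguish
# thorough patient education from a dismissed checklist task.
thorough_teaching = {"smoking_cessation_offered": True}  # nurse taught; patient demonstrated understanding
dismissed_task    = {"smoking_cessation_offered": True}  # "click here to make this item go away"

# The payer's extract sees no difference between the two records.
print(thorough_teaching == dismissed_task)  # True
```

The quality of the care never makes it into the database; only the click does.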
Hospitals are supposed to give clot-buster therapy to new heart attack patients within 30 minutes of their coming through the door. That means you need a super-accurate recording of the time they came in, plus the actual time the drug started coursing through their veins (not when the order was entered or when a nurse pulled the med from the Pyxis machine).
Reminder: conveniently retrievable data isn’t necessarily the same as clinically relevant data, even though it fits the loose definition of what’s being sought. It’s easier to rationalize that what you have is good enough than to go after something new.
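Here's a sketch of how much the choice of timestamp matters, using invented times for a single hypothetical patient. Each candidate timestamp is conveniently retrievable from some system, but only one reflects what actually happened at the bedside:

```python
from datetime import datetime

# Hypothetical timestamps for one heart attack patient. All times are
# made up purely to show how the "door-to-drug" interval swings
# depending on which event you treat as drug administration.
arrival       = datetime(2007, 10, 1, 14, 2)   # patient through the ED door
order_entered = datetime(2007, 10, 1, 14, 20)  # physician entered the order
med_dispensed = datetime(2007, 10, 1, 14, 28)  # nurse pulled it from the cabinet
med_infusing  = datetime(2007, 10, 1, 14, 41)  # drug actually in the veins

for label, t in [("order entered", order_entered),
                 ("med dispensed", med_dispensed),
                 ("med infusing", med_infusing)]:
    minutes = (t - arrival).total_seconds() / 60
    print(f"{label}: {minutes:.0f} min -> {'PASS' if minutes <= 30 else 'FAIL'}")
```

The same patient passes the 30-minute measure on two of the three timestamps and fails on the one that's clinically true.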
Payers might want doctors to encourage patients to get flu shots. Do they pay for actually giving the shot, or just for recommending it? Is it for all patients, or just those who happened to have an appointment at the time of year the flu shot inventory is available?
Reminder: physician payments may be based on a denominator of all patients under their care, not just those who have had an office visit.
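The denominator point is easy to show with a toy calculation (all numbers invented): the same counseling work looks either excellent or dismal depending on which population you divide by.

```python
# Hypothetical physician panel. The numbers are made up purely to show
# how the denominator choice swings the measured rate.
panel_size     = 2000  # all patients under the physician's care
seen_in_season = 600   # patients with a visit while vaccine was in stock
counseled      = 450   # patients documented as counseled about the shot

rate_visit_based = counseled / seen_in_season  # divide by who showed up
rate_panel_based = counseled / panel_size      # divide by the whole panel

print(f"visit-based denominator: {rate_visit_based:.1%}")  # 75.0%
print(f"panel-based denominator: {rate_panel_based:.1%}")  # 22.5%
```

Same numerator, same care delivered; the bonus check depends entirely on which denominator the payer writes into the contract.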
I’ve looked at a lot of hospital data, particularly that involving medications and treatments, and I wouldn’t trust it in many cases. There’s a lot of variability behind what looks deceptively black and white to a programmer.
We IT people like the idea of pay-for-performance because we are logical and data-driven. It also provides the comforting illusion that providers who follow checklists will keep us from dying. Where we may get uncomfortable, however, is when we realize that our information systems will be taken as gospel by the check-writers. Deep down, I don’t think we really believe that our information is quite ready for that level of scrutiny.
Now’s the time to review your data and metadata. Most quality measures involve just a few data points: when something happened, what drugs were given or what tests were performed, and what was done when the patient was discharged. If you can comfortably produce that data without crossing your fingers behind your back as to its reliability, then you are ready for data-driven quality measurement.