Time Capsule: Doctors Mostly Ignore Primitive Clinical Decision Support: Help Them Do Right Instead of Warning Them They Might Be Wrong
I wrote weekly editorials for a boutique industry newsletter for several years, anxious for both audience and income. I learned a lot about coming up with ideas for the weekly grind, trying to be simultaneously opinionated and entertaining in a few hundred words, and not sleeping much because I was working all the time. They’re fun to read as a look back at what was important then (and often still important now).
I wrote this piece in June 2008.
Doctors Mostly Ignore Primitive Clinical Decision Support: Help Them Do Right Instead of Warning Them They Might Be Wrong
By Mr. HIStalk
Anyone who has worked with hospital clinical systems knows that so-called “clinical decision support” for physicians has been a bust in many (most?) cases. You turn on all those high-falutin’ warnings that were the primary reason you bought CPOE. The doctors scream bloody murder at the interruptions. You dial it back, they still gripe. C’mon, Doc, we bought this to help you practice good medicine.
Finally, you learn an expensive lesson. From the doctor’s perspective, the optimal decision support setting for your CPOE system is to shut off everything except (a) dose range alerts at the “you’re about to kill this patient” level, and (b) allergy warnings, which they will still ignore 95 percent of the time. (The docs would have told you that upfront, but clinical systems arousal means not asking questions whose answers reflect imperfect reality).
Your shiny new system might (if you’re lucky) save a patient once or twice a year who would have been in big trouble pre-CPOE. Otherwise, the average patient isn’t getting much benefit. Check your stats – an ignored warning is a useless one.
Today’s clinical decision support is mostly old-school, mainframe stuff, simple lookups and algorithms like “do these two drugs interact” and “this test is not recommended, please use this one instead.” It was designed around the paradigm of catching something the physician might have missed and popping up a pesky error message, no different from crude screen edits that catch keystroke errors.
In other words, the computer just tells doctors when they might be wrong instead of helping them be right in the first place. May I repeat? The computer just tells doctors when they might be wrong instead of helping them be right in the first place.
The more useful paradigm might have been, “Let me give you some carefully and intelligently mined information that might help you diagnose or treat, not scold you afterward.”
Today’s hospital systems contain a heck of a lot more useful information than they did back in the 1980s when allergy alerts were hot stuff. The challenge now is separating the good stuff from the noise. What subtle trends are occurring with this patient? What correlations exist that humans might miss? What information from the patient’s longitudinal record could help make the right decision now? Can anything be gleaned from the hospital’s vast database of past therapies and outcomes that would improve this particular patient’s care?
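As a rough illustration of what “subtle trends humans might miss” could mean in practice, here is a minimal sketch: a lab series where every individual value looks normal, but the slope across draws tells a different story. The lab, the threshold, and the function names are all invented for this example, not taken from any real system.

```python
# Hypothetical sketch: flag a subtly worsening lab trend that a busy
# clinician might miss. Names and thresholds are illustrative only.

def trend_slope(values):
    """Least-squares slope of a lab series over equally spaced draws."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def flag_rising_creatinine(series, slope_threshold=0.05):
    """Suggest a second look when creatinine drifts steadily upward,
    even if every individual value is still inside the reference range."""
    return len(series) >= 3 and trend_slope(series) > slope_threshold

# Each value alone looks unremarkable; together they trend upward.
daily_creatinine = [0.9, 1.0, 1.1, 1.2, 1.3]
print(flag_rising_creatinine(daily_creatinine))  # True
```

The point isn’t the arithmetic, which is trivial; it’s that the system volunteers the pattern instead of waiting for an order it can object to.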
If you’re a vendor looking to gain an edge in next generation’s sales wars, look no further. This is stuff that doctors would actually use. This is information that would decrease clinical variation and help apply cutting edge knowledge to individual cases. Information saves money and lives, so hospitals would pay for that result.
This is not simple programming work or a subscription to some expensive third-party drug database. In fact, making something like this work with yesterday’s architectures would be a big pain. It would require a lot of information about a particular patient, including some not readily available today (like an easy way to specify, “What do you think’s wrong with this guy, doc?”).
That level of guidance would also require customization capability, since doctors aren’t interchangeable. Urologists don’t focus on the same information as cardiologists. Doctor A might pay a lot of attention to respiratory data points, while Doctor B might be a blood sugar man. Let each flag suggestions as “helpful” or “not helpful” so the system can learn what to offer next time (yes, computers can learn).
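The helpful/not-helpful feedback loop described above could be as simple as tracking each doctor’s acceptance rate per suggestion category and ranking future suggestions accordingly. This sketch invents all of its names and structure for illustration; a real product would need far more nuance.

```python
# Illustrative sketch of a per-physician feedback loop: each suggestion
# category accumulates "helpful" / "not helpful" votes, and future
# suggestions are ranked by that doctor's own acceptance rate.
from collections import defaultdict

class SuggestionRanker:
    def __init__(self):
        # (doctor, category) -> [helpful_count, shown_count]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, doctor, category, helpful):
        s = self.stats[(doctor, category)]
        s[1] += 1
        if helpful:
            s[0] += 1

    def acceptance(self, doctor, category):
        helpful, shown = self.stats[(doctor, category)]
        # Unseen categories start neutral so new content still surfaces.
        return helpful / shown if shown else 0.5

    def rank(self, doctor, categories):
        return sorted(categories,
                      key=lambda c: self.acceptance(doctor, c),
                      reverse=True)

ranker = SuggestionRanker()
for _ in range(4):
    ranker.record("dr_a", "respiratory", helpful=True)
ranker.record("dr_a", "glucose", helpful=False)
print(ranker.rank("dr_a", ["glucose", "respiratory", "renal"]))
# ['respiratory', 'renal', 'glucose']
```

Doctor A’s respiratory suggestions rise to the top, the never-shown renal category stays in the middle, and the rejected glucose alerts sink, which is exactly the “learn what to offer next time” behavior the column asks for.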
Once vendors figure all that out, the next logical step would be to tie literature to practice. Journal knowledge shouldn’t be hidden away in libraries and online subscriptions. Specialty content vendors are already evaluating and grading that new information and packaging it into decision support applications for immediate, routine patient use. That’s pretty cool and would be even cooler if undertaken under the open source banner to encourage broad participation and to minimize financial barriers to adoption.
The industry’s efforts to create electronic clinical decision support have largely misfired, as evidenced by its half-hearted use and unimpressive impact on outcomes. Surely some smart folks out there can come up with a better plan that will help make all these systems worth their cost.