Submit your article of up to 500 words in length, subject to editing for clarity and brevity (please note: I run only original articles that have not appeared on any Web site or in any publication and I can’t use anything that looks like a commercial pitch). I’ll use a phony name for you unless you tell me otherwise. Thanks for sharing!
The views and opinions expressed are those of the authors personally and are not necessarily representative of their current or former employers.
Don’t Exclude Existing CDS Tools from Conversations on Eliminating Diagnostic Error
By Peter Bonis, MD
Diagnostic error is a pervasive and potentially deadly problem. The New York Times article, “For Second Opinion, Consult a Computer?” underscored the significant potential health information technology holds for reducing harm related to an incorrect diagnosis. Several tools have already been developed and ongoing advances in computational science may ultimately produce approaches that surpass the best of human cognitive skills.
Significant challenges remain in achieving such a vision. At present, commercially available tools that can assist in generating a differential diagnosis have not proven highly effective at reducing the burden of diagnostic error in clinical practice. Existing technology has a number of limitations, as does the way it can be integrated into the clinical workflow. In fact, many of these systems received a barely passing grade in “A Follow-Up Report Card on Computer-Assisted Diagnosis—the C+ Grade,” published in the Journal of General Internal Medicine in December 2011.
Furthermore, helping clinicians achieve a comprehensive differential diagnosis (and ultimately a correct diagnosis) represents only a subset of the opportunity health information technology offers to reduce cognitive errors. Multiple studies have demonstrated that two out of every three clinical encounters generate a question and that, if those questions were answered, the answers would change five to eight care management decisions each day. Unfortunately, only 40 percent of questions are routinely answered, and not always with the best contemporary medical knowledge. Existing clinical decision support (CDS) tools not only assist clinicians in generating a differential diagnosis, but also address the broader need for cognitive support in diagnosis- and management-related decisions.
CDS allows clinicians to answer approximately 90 percent of their questions. Dozens of studies have demonstrated a link between CDS and clinically substantial changes in diagnosis, management, and acquisition of medical knowledge. CDS has also been directly linked to improved health outcomes, including hospital length of stay and mortality. It has a proven impact on the quality, safety, and efficiency of care, providing actionable, detailed, evidence-based answers to clinical questions at the point of care.
Proper care cannot be achieved without a correct diagnosis. Better tools and workflow changes will continue to evolve, reducing the potentially tragic outcomes associated with diagnostic error. However, the dialogue surrounding what is still evolving – differential diagnosis software – should not overshadow the larger canvas of what is already here – CDS at the point of care.
Peter Bonis, MD is chief medical officer of UpToDate, part of Wolters Kluwer Health.
The Seven Deadly Sins of EMR Success
By Frank Poggio
After some 40-plus years in the healthcare IT world, and after reading Vince Ciotti’s extensive history of HIT published in HIStalk during the past year, I asked myself, “What have we learned? What does it tell us? Or is it all just the ramblings of old war horses that can’t stop running down the history trail?”
From my years in the trenches coupled with Vince’s extensive anthology, I’ve distilled it down to two simple rules:
- HIT/EMR buyers just love the fair-haired boy or new glamour model.
- Like all glamour models, they have a runway life of about a decade.
Just look at the history, decade by decade (my apologies to Vince for being so brief).
These vendors were or are the dominant top-tier vendors in each decade. Not necessarily in terms of the largest number of installs, but when a major vendor selection was made during that decade, it usually went their way.
Then, after about a decade, they start to stumble. Not collapse, but stumble, and it’s downhill from there. In some cases a long plateau comes first, but soon enough they hit the down slope. Some hit it faster and harder than others, such as HBOC. Others have a very long and slow downhill run, like Siemens (SMS).
Glamour models don’t blossom overnight. It took SMS maybe 10 years to hit its stride and HBOC at least 20 when you include the life cycles of the companies it acquired. Cerner and our new darling Epic started in the 1980s. Not surprisingly, it takes at least 10-15 years to blossom.
Of course there were and are many second- and third-place vendors such as McAuto, Saint, Baxter, and the various mini system vendors. And there were ones that stayed away from the top tier of the market and focused on smaller facilities, like Meditech and CPSI.
Now why is it that the top-tier glamour model always seems to fatten, then fade? Why couldn’t IBM, SMS, Technicon, and McKesson hang on to the brass ring for more than a decade?
My theory is their demise is in the DNA of HIT/EMR. Nothing lasts forever, least of all top-tier HIT companies. Along with their chosen industry, they are destined to sow the seven seeds of their own destruction. Those are:
1. Constantly changing regulations
Healthcare regulations are innumerable. It all started with Medicare and its complex billing and reporting in 1967. Then came TEFRA, price controls, DRGs, CHINs, RHIOs, JCAHO, FDA, CLIA, HIPAA, FLSA, and on and on. Today it’s MU, ARRA, P4P, ACO, HIE, ACA, EBM, Outcomes, and more to come. And that’s not to mention the many state and local regulations, starting with Medicaid.
All these mean more software modifications and updates. Every update will generate at least a dozen bugs that will come back to bite you when you are least prepared.
2. Moore’s Law
The law has been great for hardware, maybe not so great for software developers. Just about the time our glamour model has everything together, out comes a new style (technology).
Remember mainframes, minis, micros, dumb terminals, lunch box computers, notebooks, client-server, peer-to-peer, thin clients, fat clients, chubby clients, Internet, Web-based, PDAs, and so it goes? That’s just the hardware. Now add to that a tsunami of software languages and tools. IBM promoted at least 20 languages and core development tools during its healthcare reign. Oracle and Microsoft are not far behind.
3. More installs equals more costly support
As the successful company grows, its geographical footprint grows, and meanwhile it extends its application portfolio. All this success makes for more complex and costly support. Things are bound to go wrong, and the market will hear about it. It starts with a small pimple, then some wrinkles, and then grows into lesions.
The only way to slow or stop the pox is to invest significantly more in support, fix code problems before they fester, increase quality control, or maybe do a full rewrite. That can take tens of millions of dollars and many years, as witnessed by Siemens (Soarian) and McKesson (Paragon). And none of it generates revenue (see Seed #6).
4. Medicine – science or art or both?
Information technology to automate the science piece can be complex, yet it’s more straightforward than applying IT to the art component. Then add to that the ever-changing nature of medicine. The majority of today’s protocols, procedures, and medications did not exist 10 or 15 years ago. Medicine is a moving target and the information it generates is orders of magnitude beyond 1980. Changing medicine also means more enhancements, more support, and more fixes.
5. Pursuit of the perfect design becomes no design
Some firms get mesmerized by the latest tools, then get caught up in the perfect design syndrome. While they are immersed in designing the perfect evening gown, the glamour model is sent down the runway half naked. Technological perfection becomes the enemy of good. Then, after missing too many delivery dates, their backs are against the wall and they fall into the next trap: “Code now, ask questions later.” At that point, the downside has arrived.
6. Need for capital, or who’s in charge here?
You need capital to keep your systems up to speed and address all the mammoth medical, regulatory, operational, and technological changes. There are only two ways to get it.
From profits (via installs; see Seed #3). That gets more difficult as you grow and deal with size and industry changes.
From investors, either private or public. If you prefer private investors, there may not be enough sources. The public stock route has its own unique problems. To keep feeding this monster, you’ll need more and more investments. But after your outside investors are on board, it’s not uncommon for them to have a change of vision, plan, or agenda. It’s a marriage, and like some marriages, you don’t know your real partner until the honeymoon is long over.
7. Pride before the fall
As the glamour model nears the end of the runway, her eyes are blinded by the light and her head is in the clouds (no pun intended). So much so she loses her footing and falls off the stage. In the HIT world, this is usually described as “marketing got way ahead of development.” As an old friend once told me, “When you start eating your own marketing BS, death can’t be far away.”
Any one of the preceding seeds can be assigned to any of our past leading models – in most cases, more than one. Any single seed can be the beginning of the end, and some are more deadly than others. Usually it’s a combination of several that causes our glamour model to fall off the runway.
At this point you may ask, “Who will be the glamour model of 2020?” Stay tuned for the next chapter. You may be surprised.
(Vince’s full HIS-tory series covering over 50 HIT vendors is at http://HISPros.com.)
Frank Poggio is president of The Kelzon Group.
One More Time, With Meaning
By Jonathan Bush
The federal government’s Meaningful Use (MU) incentive program has been getting plenty of ink lately – and not the good kind. I enjoyed reading Reed Abelson’s article in The New York Times a few weeks back, “Medicare Is Faulted on Shift to Electronic Records,” which outed the program’s “vulnerability” to fraud and abuse. It cited the OIG’s report blasting the government for failing to properly police payouts to doctors and hospitals. It got me thinking again about this program – one that’s had doctors lining up to buy EMRs like it’s Black Friday at Best Buy.
First, let me say that I honestly believe the government’s Keynesian efforts through the HITECH Act to stimulate adoption of the EMR have been noble. I don’t blame them. There was nothing going on. Even if they were just paying doctors to collect data and never send it anywhere (like paying farmers to pour milk out on the side of the road) they’d still have accomplished the desired effect of getting things rolling. I get it.
But as currently conceived, MU is moving providers backwards, investing big money to make caregivers less able to move information across the health system. Billions are being spent by health systems to put doctors on pre-Internet software that doesn’t actually lay the groundwork for sustainable information exchange. As Abelson suggests, the OIG is right to be alarmed. But not just because of the risk of fraud. They should be alarmed because even when obeying the rules, caregivers don’t need to actually connect and send data. They just have to “attest” to having the capacity to do it… someday … hypothetically.
Why is CMS asking for “attestation” rather than actual data? Because they don’t have the sophistication to receive the data. When our service teams attest on behalf of our clients, they have to manually enter data into a CMS website because CMS doesn’t have the technology to receive an electronic download of data from our cloud-based network. The fact that the government can’t implement the very technology that it is demanding of healthcare providers is … awkward.
So what needs to happen? Let’s pay for the fruits of MU rather than for the “attestation” of it. If MU stays as toothless as it is now, then yes, the only way to avoid fraud is to send out thousands of OIG inspectors. But a far cheaper and cleaner way to solve this problem is to pay only for flows of useful data. If they can’t give you the data, they can’t get paid. If the government can’t receive the data, then they shouldn’t be asking for it in the first place. This will quickly stem the flow of wasted dollars into closed pre-Internet systems that will never realize important goals for health information exchange.
It’s time to graduate from well-meaning Keynesian approaches – where the committee decides the test and whoever passes the test can have the money – to a true market-based approach. Receivers who need patient information can define what they need and pay anyone who sends it to them electronically a nominal fee for the favor of clean, relevant, and meaningful data, efficiently delivered. Just as it works in banking and every industry other than healthcare. The fees can then come right out of administrative savings, not out of taxpayers’ pockets. The result will be a dynamic, sustainable market for the exchange of clinical data, which will help drive down costs and improve outcomes. Now that would be meaningful.
Jonathan Bush is CEO, president, and chairman of athenahealth.
The Department of Duh
By Robert D. Lafsky, MD
We have an elderly couple living at my house now. Oh, right, that’s me and my wife, come to think of it. But because we’re old, we still read the daily paper. And we sometimes amuse each other by writing red pen comments in the paper for the other one to see. (This is kinda like Twitter for you younger readers out there.)
Anyhow, one of my favorite comments is written above something that’s particularly obvious or overdue: the heading “Department of Duh.”
My wife is a civilian, though, so I can’t do that with medical journals. But the elite New England Journal of Medicine sure gave me an opportunity in the December 27 issue with a “perspective” article called “Higher-Complexity ED Billing Codes—Sicker Patients, More Intensive Practice, or Improper Payments?”
Now don’t get me wrong, this is a serious academic piece, based on the recent OIG report on reimbursement categories. It has its own statistical analysis of a representative sample of Medicare ED visits, confirming more use of higher CPT codes in recent years. And it goes through a lot of potential causes, including sicker patients and “an increasingly interventionist ED practice style.” (I can confirm that one – it seems any symptom in the Major League strike zone in my ER here gets an abdominal CT.)
But further on the author talks about the influence of electronic records and the effect of “clickable check-boxes that easily satisfy coding-complexity criteria.” And later, “The EHR may also facilitate improper behavior, such as clicking multiple items in the ‘review of systems’ that patients were not directly asked about.”
As one of my favorite colleagues would often respond, “Gosh, d’ya think?”
We don’t need to reargue all the related points here, nor do we have the space. But what’s really fascinating to me as a regular reader of NEJM and Annals of Internal Medicine is how little they’ve dealt with a process that has been fundamentally changing the practice of medicine at the ground level over the last half decade or so.
NEJM presents the most up-to-date scientific information, but very little about how the applecart of diagnostic thinking is being overturned by the EMR process. Especially in their renowned “Case Records of the Massachusetts General Hospital,” which present a mystery case to the senior expert in the exact same traditional format they used when I started reading them in the 1970s. (OK, they did start using tables for labs sometime in the late 1980s, I think).
The real issue here is the passivity that elite medical thinkers have shown toward the radical transformation of medical records and consequent changes of medical thought processes that have been taking place. There’s a lot more to say about this, but I’d sure like to see that visiting expert professor try to unravel a difficult case using nothing but the printed output from a typical EMR.
File that under Department of Duh.
Robert D. Lafsky, MD is a gastroenterologist and internist in Lansdowne, VA.