
Curbside Consult with Dr. Jayne 11/18/19

November 18, 2019 Dr. Jayne

A former colleague of mine reached out recently, frustrated by a physician in his organization who is demanding that clinical decision support features in some applications be turned off. He was asking for tips to help counter the argument.

It turns out that the physician in question believes that if the application presents you with guidelines that you ignore, you are liable. Fortunately, it’s a pretty easy counterargument. If a guideline exists and you ignore it, regardless of whether it’s in your application, you are liable. In many cases, if a guideline exists and you don’t know about it but a physician would be reasonably expected to know about it, you are liable.

The whole point of clinical decision support is to bring those guidelines, which you may or may not be familiar with or already incorporate into your practice, to the point of care so you can react to them. Of course, this assumes that the clinical decision support in question is accurate and appropriate.

Since crossing into the realm of clinical informatics more than a decade ago, my clinical activities have been limited. This is partly by choice (realizing that I can’t do justice to the traditional primary care paradigm when practicing on a very limited schedule) and partly due to workforce economics. Unless you’re a physician administrator at an academic institution or your CMIO situation includes a specific carve-out for clinical care, it’s unlikely that someone wants to hire you to see patients one day a week.

Since the scope of my practice is relatively limited, one might think it would be easier to keep up with the knowledge base, but it's still very challenging. I remember a couple of years ago when one widely used antibiotic fell out of favor for a particular condition. It was a good six months before one of my go-to journals reviewed the primary article and another three months before I actually read it, meaning that I was prescribing a less-than-effective medication for a good nine months before I knew any better. What if clinical decision support at the point of care had alerted me that the antibiotic I selected was no longer recommended for the diagnosis I had entered?

Conventional wisdom is that medical knowledge doubles approximately every eight years. Physicians graduate from medical school and are then trained in residency by physicians who might have been in practice anywhere from one to 60 years. One would expect great variability in those teaching physicians' knowledge bases as well, which is another argument in favor of clinical decision support.

There are a number of pros and cons around whether clinical decision support should be regulated and how regulation might shift liability. Others voice concerns about whether it will lead to so-called cookbook medicine or encourage mental laziness among physicians. Regardless of the strength of the decision support or whether it's regulated, physicians still have a duty to determine whether the recommended course of care makes sense and whether there are any concerns about the recommendations.

Physicians need to understand where the recommendations found in clinical decision support systems originate. Are they from well-known guideline producers, such as the US Preventive Services Task Force, the Centers for Disease Control and Prevention, the American Cancer Society, or the American College of Obstetricians and Gynecologists? Are they simply guidelines that have been automated and surfaced through straightforward checks against diagnosis codes, SNOMED codes, LOINC codes, and medication codes, or do they use artificial intelligence or machine learning?
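
To make that distinction concrete, the simplest flavor of decision support is little more than a lookup: a rule fires when a coded diagnosis and a coded medication land on a maintained list. Here's a minimal sketch of that style of check; the code values and the deprecated-combination list are invented for illustration and aren't drawn from any real guideline source.

```python
# Minimal sketch of rule-based CDS: a lookup against coded data.
# The codes and the deprecated-combination list are hypothetical,
# meant only to show the style of check, not any real guideline.

DEPRECATED_COMBINATIONS = {
    # (diagnosis code, medication code) pairs flagged by a guideline source
    ("DX:EXAMPLE-SINUSITIS", "MED:EXAMPLE-ANTIBIOTIC"),
}

def check_order(diagnosis_code: str, medication_code: str) -> list[str]:
    """Return any alerts for this diagnosis/medication pair."""
    alerts = []
    if (diagnosis_code, medication_code) in DEPRECATED_COMBINATIONS:
        alerts.append(
            f"{medication_code} is no longer recommended for {diagnosis_code}; "
            "review current guidance before prescribing."
        )
    return alerts

print(check_order("DX:EXAMPLE-SINUSITIS", "MED:EXAMPLE-ANTIBIOTIC"))
```

Machine learning-driven decision support doesn't reduce to a list you can read, which is exactly why knowing where a recommendation comes from matters.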

The RAND Corporation blogged about this issue way back in 2012, and the thinking around it hasn't changed significantly. Straightforward clinical decision support, such as drug-drug interaction checking, is great, but alerts have to be tuned to the right level so that the most critical cases are highlighted for the physician while alert fatigue is avoided. Users who click through alerts without reading or digesting them will continue to be at risk for increased liability in the case of a poor outcome.
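
One common tuning lever is a severity threshold: interaction alerts below a configured level are suppressed or routed to a passive list so that only the most critical ones interrupt the prescriber. The sketch below assumes a simple numeric severity scale and a single threshold; it's meant to show the shape of that kind of filtering, not any particular vendor's implementation.

```python
# Sketch of severity-based routing for drug-drug interaction alerts.
# The severity scale and threshold value are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class InteractionAlert:
    drug_a: str
    drug_b: str
    severity: int  # assumed scale: 1 = minor ... 4 = contraindicated
    message: str

# Only alerts at or above this level interrupt the prescriber; lower-severity
# alerts go to a passive list for pharmacy or later review.
INTERRUPTIVE_THRESHOLD = 3

def route_alerts(alerts: list[InteractionAlert]):
    """Split alerts into interruptive (shown now) and passive (reviewed later)."""
    interruptive = [a for a in alerts if a.severity >= INTERRUPTIVE_THRESHOLD]
    passive = [a for a in alerts if a.severity < INTERRUPTIVE_THRESHOLD]
    return interruptive, passive
```

Set the threshold too low and you recreate alert fatigue; set it too high and real risks never surface, which is why this ends up being a governance decision rather than a purely technical one.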

Oregon Health & Science University's Clinical Informatics Wiki covers this issue as well. It notes that, "As long as 25 years ago it was realized that availability of computerized medical databases would likely erode the local or community standard of care."

Changes to the community standard of care might not be a bad thing. Many of us believe patients should be treated the same whether they live in urban or rural areas and regardless of differences in income or demographics. However, there have been pockets of the country where physicians were held to a different standard for a variety of reasons.

Take the PSA test for prostate cancer risk. At a time when the US Preventive Services Task Force was specifically recommending against testing (in part because of the number of false positives leading to unnecessary biopsies and other downstream consequences), my community performed the test across the board because a leading urology researcher at a local academic institution drove expert opinion that it should be done. If you didn't order a PSA and a patient turned out to have cancer, you were in for a bumpy ride.

OHSU correctly notes that state laws have lagged behind current technology and that the scope of the legal medical record varies from state to state. I've worked in organizations that swear the final signed chart note in the EHR is the legal record, and others that said, "Everything in the database is the legal record." I've worked with attorneys going down SQL rabbit holes trying to figure out what a physician knew and when, based on various timestamps, user IDs, and other metadata.
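
Those rabbit holes usually amount to reconstructing a timeline from an audit table: who viewed or signed what, and when. Something like the sketch below, which assumes a hypothetical audit_log table with user_id, patient_id, action, and event_time columns; real EHR audit schemas vary widely by vendor.

```python
# Hypothetical reconstruction of "who saw what, when" from audit metadata.
# The audit_log schema here is invented; real EHR audit tables are vendor-specific.

import sqlite3

def user_timeline(conn: sqlite3.Connection, user_id: str, patient_id: str):
    """Return one user's audited actions on a patient's chart, oldest first."""
    return conn.execute(
        """
        SELECT event_time, action, document_id
        FROM audit_log
        WHERE user_id = ? AND patient_id = ?
        ORDER BY event_time
        """,
        (user_id, patient_id),
    ).fetchall()
```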

The wiki authors also note the need to better understand how clinical decision support systems influence clinician judgment and how their use might impact those who are "not adept at system-user interfaces." They also note the relative lack of case law in the area, but go on to say that "Physicians are likely to be held responsible for the appropriate use and application of clinical decision support systems and should have a working knowledge of the purpose, design, and decision rules of the specific decision support systems they use."

For some EHRs and related systems, this is easier than for others. I've seen systems where you can quickly drill down to the specific recommendations and understand why a flag was thrown. I've also seen systems where alerts don't seem to make sense and searches of well-known physician resources fail to shed light on the subject (nor do simple Google searches, so a double dead end). The bottom line remains, however, that regardless of the volume of information out there, physicians are expected to know the answers and do the right thing for their patients.

How does your organization address liability for clinical decisions, whether human-created or prompted by technology? Leave a comment or email me.


Email Dr. Jayne.




Currently there are 3 comments on this article:

  1. A decade or so ago, I worked in a health system in which the Risk Management Dept weighed in on the decisions around alert settings (based on a similar physician complaint). The EHR system at the time had the flexibility to set alerts (like drug-drug interactions or drug dosing) by discipline, so the brilliant recommendation was to set the threshold low for the docs (no alerts triggered) but high for the pharmacists (they triggered all the time). The idea was that the pharmacists would call the doc if it was really important! Needless to say, alert fatigue and pharmacist burnout became high and physicians became irritated at the poorly timed interruptions. I can only hope that they revisited that decision as they swapped out their EHR.

  2. A really good area to raise for discussion. From my own experience, the informatics team needs to focus on the five CDS rights (https://sites.google.com/site/cdsforpiimperativespublic/cds), but also on the "lifecycle" of an alert intervention to ensure that the intervention remains "current" and clinically relevant. In my experience, this is often lacking because it is a significant organizational commitment to do effectively. It requires clinical ownership of the CDS intervention, so it necessitates having clinical subject matter experts and/or a medical literature review process engaged in ongoing maintenance.
    As to the question on liability, I would certainly defer to legal guidance but in my opinion it is the duty of the implementor to ensure that the CDS intervention remains clinically “appropriate” and end users should help on this by actively engaging with the informatics team to surface any questions or concerns about the clinical relevance of a given CDS intervention.
    In the end as physicians it is our duty to use all available relevant clinical information to make the best clinical decisions for our patients.

  3. Clinical Decision Support has got to be health IT on hard mode. All of the normal IT challenges + a high stakes environment with the most demanding users. People who work on it are commendable. I’m also glad that I don’t.
