A former colleague of mine reached out recently, frustrated by a physician in his organization who is demanding that clinical decision support features in some applications be turned off. He was asking for tips to help counter the argument.
It turns out that the physician in question believes that if the application presents you with guidelines that you ignore, you are liable. Fortunately, it’s a pretty easy counterargument. If a guideline exists and you ignore it, regardless of whether it’s in your application, you are liable. In many cases, if a guideline exists and you don’t know about it but a physician would be reasonably expected to know about it, you are liable.
The whole point of clinical decision support is to bring those guidelines — which you may or may not be familiar with or incorporating into your practice — to the point of care so you can react to them. Of course, this assumes that the clinical decision support in question is accurate and appropriate.
Since crossing into the realm of clinical informatics more than a decade ago, my clinical activities have been limited. This is partly by choice (realizing that I can’t do justice to the traditional primary care paradigm when practicing on a very limited schedule) and partly due to workforce economics. Unless you’re a physician administrator at an academic institution or your CMIO situation includes a specific carve-out for clinical care, it’s unlikely that someone wants to hire you to see patients one day a week.
Since the scope of my practice is relatively limited, one might think it would be easier to keep up with the knowledge base, but it’s still very challenging. I remember a couple of years ago when one widely-used antibiotic fell out of favor for a particular condition. It was a good six months before one of my go-to journals reviewed the primary article and another three months before I actually read it, meaning that I was prescribing a less-than-effective medication for a good nine months before I knew any better. What if clinical decision support at the point of care had alerted me that the antibiotic I selected was no longer recommended for the diagnosis I had entered?
Conventional wisdom is that medical knowledge doubles approximately every eight years. Physicians graduate from medical school and are then trained in residency by physicians who might have been in practice anywhere between one and 60 years. One would expect great variability in those teaching physicians’ knowledge bases as well, which is another plus for clinical decision support.
There are arguments on both sides of whether clinical decision support should be regulated and how regulation might shift liability. Others voice concerns about whether this will lead to so-called cookbook medicine or encourage mental laziness among physicians. Regardless of the strength of decision support or whether it’s regulated, physicians still have a duty to determine whether the recommended course of care makes sense or if there are any concerns about the recommendations.
Physicians need to understand where the recommendations found in clinical decision support systems originate. Are they from well-known guideline producers, such as the US Preventive Services Task Force, the Centers for Disease Control and Prevention, the American Cancer Society, or the American College of Obstetricians and Gynecologists? Are they just automated and exposed guidelines that are doing simple checks against diagnosis codes, SNOMED codes, LOINC codes, and medication codes, or are they using artificial intelligence or machine learning?
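The simpler end of that spectrum can be sketched in a few lines. The rule table, codes, and alert text below are hypothetical placeholders for illustration, not real guideline content; actual systems would key off standardized vocabularies such as SNOMED CT for diagnoses and RxNorm for medications.

```python
# Minimal sketch of rule-based clinical decision support: a lookup of
# (diagnosis code, medication code) pairs against a table of alerts.
# All codes and messages here are hypothetical placeholders.

RULES = {
    ("DX-001", "MED-123"): "MED-123 is no longer first-line therapy for DX-001.",
}

def check_order(diagnosis_code: str, medication_code: str) -> list[str]:
    """Return any alerts triggered by this diagnosis/medication pair."""
    alerts = []
    message = RULES.get((diagnosis_code, medication_code))
    if message:
        alerts.append(message)
    return alerts

# A flagged pairing triggers an alert; an unlisted pairing does not.
print(check_order("DX-001", "MED-123"))
print(check_order("DX-002", "MED-123"))
```

The contrast with AI- or machine-learning-driven systems is exactly this transparency: with a rule table, a physician can in principle trace an alert back to the specific guideline that produced it.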
The RAND Corporation blogged about this issue way back in 2012, and the thoughts around it haven’t changed significantly. Straightforward clinical decision support, such as drug-drug interaction checking, is great, but alerts have to be tuned to the right severity level to highlight the most critical cases while preventing alert fatigue. Users who click through alerts without reading or digesting them will continue to be at risk for increased liability in the case of a poor outcome.
Oregon Health & Science University’s Clinical Informatics Wiki covers this issue as well. It notes that, “As long as 25 years ago it was realized that availability of computerized medical databases would likely erode the local or community standard of care.”
Changes to the community standard of care might not be a bad thing. Many of us believe patients should be treated the same whether they live in urban or rural areas and regardless of differences in income or demographics. However, there have been pockets of the country where physicians were held to a different standard for a variety of reasons.
Take the PSA test for prostate cancer risk. At a time when the US Preventive Services Task Force was specifically recommending against testing (in part because of the number of false positive tests leading to unnecessary biopsies and other downstream consequences) my community performed them across the board because a leading urology researcher at a local academic institution drove expert opinion that they should be done. If you didn’t do a PSA and a patient turned out to have cancer, you were in for a bumpy ride.
OHSU notes correctly that state laws have lagged behind current technology and that the scope of the legal medical record varies from state to state. I’ve worked in organizations that swore the final signed chart note in the EHR is the legal record, and others that said, “everything in the database is the legal record.” I’ve worked with attorneys going down SQL rabbit holes trying to figure out what a physician knew and when, based on various timestamps, user IDs, and other metadata.
The wiki authors also note the need to better understand how clinical decision support systems influence clinician judgment and how their use might impact those who are “not adept at system-user interfaces.” They also note the relative lack of case law in the area, but go on to say that, “Physicians are likely to be held responsible for the appropriate use and application of clinical decision support systems and should have a working knowledge of the purpose, design, and decision rules of the specific decision support systems they use.”
For some EHRs and related systems, this is easier than for others. I’ve seen systems where you can quickly drill down to the specific recommendations and understand why a flag was thrown. I’ve also seen systems where alerts don’t seem to make sense and searches of well-known physician resources fail to shed light on the subject (nor do simple Google searches, so a double dead end). The bottom line remains, however, that regardless of the volume of information out there, physicians are expected to know the answers and do the right thing for their patients.
How does your organization address liability for clinical decisions, whether human-created or prompted by technology? Leave a comment or email me.
Email Dr. Jayne.