
Curbside Consult with Dr. Jayne 2/26/24

February 26, 2024 Dr. Jayne 5 Comments

In the US, our love of technology often overtakes our trust in people’s knowledge and expertise. I encountered this on a regular basis in the urgent care setting, where patients demanded testing for conditions that were well suited to the use of clinical decision support (CDS) rules. In other countries, clinical decision support rules are accepted – and even expected – as a way of helping patients avoid unnecessary testing and healthcare costs. Some of the most useful and well-validated CDS rules are those around the probability of strep throat, ankle fractures, and pediatric head injuries. However, testing has become a proxy for caring, and if physicians don’t order tests for patients with applicable conditions, those physicians are likely to wind up on the receiving end of low patient satisfaction scores or even hostile online reviews.

I had been thinking about this when I stumbled across a recent article in the Journal of the American Medical Informatics Association that looked at whether explainable artificial intelligence (XAI) could be used to optimize CDS. The authors looked at alerts generated in the EHR at Vanderbilt University Medical Center from January 2019 to December 2020. The goal was to develop machine learning models that could be applied to predict user behavior when those alerts surfaced. AI was used to generate both global and local explanations, and the authors compared those explanations to historical data for alert management. When suggestions were aligned with clinically correct responses, they were marked as helpful. Ultimately, they found that 9% of the alerts could have been eliminated.
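
For those who want to picture the mechanics, the general recipe might look something like the sketch below. This is my own illustration, not the authors’ code; the feature names and data file are made up. The idea is to train a model that predicts whether a user will accept an alert, then use SHAP values to produce the kinds of global and local explanations the paper describes.

    import pandas as pd
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    # Hypothetical extract of alert firings: one row per firing, with context
    # features plus whether the user accepted the alert (1) or ignored it (0).
    firings = pd.read_csv("alert_firings.csv")
    features = ["patient_age", "on_hospice", "years_since_last_test", "encounter_type_code"]

    model = GradientBoostingClassifier().fit(firings[features], firings["accepted"])

    # Global explanation: which features drive acceptance across all firings.
    explainer = shap.TreeExplainer(model)
    global_shap = explainer.shap_values(firings[features])

    # Local explanation: why the model predicted acceptance (or not) for one firing.
    local_shap = explainer.shap_values(firings[features].iloc[[0]])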

In this case, the results of using XAI to generate suggestions to improve alert criteria were twofold. The process could be used to identify improvements that might be missed or that might take too long to find in a manual review. The study also showed that using AI could improve quality by identifying situations where CDS was not accepted due to issues with workflow, training, and staffing. In digging deeper into the paper, the authors make some very important points. First, despite the focus of federal requirements on CDS, the alerts that are live in the field have low acceptance rates (in the neighborhood of 10%), which causes so-called “alert fatigue” and makes users more likely to ignore alerts even when they’re of higher importance. Alerts are also often found in the wrong place on the care continuum – they cite the examples of a weight-loss alert firing during a resuscitation event and a cholesterol screening alert on a hospice patient.

They note that alerts are often built on limited facts – such as screening patients of a certain age who haven’t had a given test in a certain amount of time. While helpful in some situations, these alerts need to incorporate additional facts in order to be truly useful; for example, excluding hospice patients from cholesterol screening reminders. I’d personally note that expanding the criteria that underlie alerts would not only make them more useful, but would also avoid hurtful alerts – for example, sending boilerplate mammogram reminders to patients who have had mastectomies and the like. I’ve written about this before, having personally received reminders that were not only unhelpful but led to additional work on my part to ensure that my scheduled screenings had not been lost somewhere in the registration system. There’s also the element of emotional distress when patients receive unhelpful (and possibly hurtful) care reminders. Can you imagine how the family of a hospice patient feels when they receive a cholesterol screening message? They feel like their care team has no idea what is going on and that its members aren’t communicating with each other.
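
To make the “limited facts” point concrete, here’s a tiny illustration of the difference between a bare-bones rule and one with expanded criteria. The field names are hypothetical, not from any particular EHR:

    # Limited-facts rule: age-based cholesterol screening reminder.
    def cholesterol_alert_v1(patient):
        return patient["age"] >= 40 and patient["years_since_lipid_panel"] > 5

    # Expanded rule: the same reminder, suppressed where it is unhelpful or
    # hurtful (hospice enrollment, comfort-care-only status).
    def cholesterol_alert_v2(patient):
        if patient["on_hospice"] or patient["comfort_care_only"]:
            return False
        return cholesterol_alert_v1(patient)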

The authors also summarized previous research about how users respond to alerts, which can differ based on training, experience, role, the complexity of the work being done, and the presence of repetitive alerts. Bringing AI into play to help process the vast trove of EHR data around alerts and user behavior should theoretically be helpful, if it can successfully create recommendations for which alerts should be targeted. The authors prescreened alerts by excluding those that fired fewer than 100 times, as well as those that were accepted fewer than 10 times during the study period. They then categorized the remaining alerts by whether or not they were accepted, and went further to examine features of the alerts that were not accepted – including patient age, diagnoses, lab results, and more – before beginning the XAI magic.
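
A rough sketch of what that prescreening step could look like, assuming a simple firing log (the column names are mine, not the paper’s):

    import pandas as pd

    # Hypothetical firing log: one row per alert firing, with alert_id and accepted (0/1).
    log = pd.read_csv("alert_log.csv")

    counts = log.groupby("alert_id")["accepted"].agg(fired="count", accepted="sum")

    # Keep alerts that fired at least 100 times and were accepted at least 10 times.
    eligible = counts[(counts["fired"] >= 100) & (counts["accepted"] >= 10)].index
    study_set = log[log["alert_id"].isin(eligible)]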

Once suggestions were generated, they were evaluated against change logs that showed whether the alerts in question had been modified during the study period. The authors also interviewed stakeholders to understand whether proposed alert changes were helpful. They found that 76 of the suggestions matched (at least to some degree) changes that had already been made to the system, which is great for showing that the suggestions were valid. The stakeholder process yielded an additional 20 helpful suggestions. Together, those 96 suggestions were tied to 18 alerts; doing the math revealed that 9% of the alerts could have been eliminated by incorporating the suggestions. For those interested in the specific alerts and suggestions made, they’re included in a table within the article.

In the Discussion section of the article, the authors address whether their work can be applied at other institutions. From a clinical standpoint, they address conditions and findings that are seen across the board. However, if an organization hasn’t yet built an alert around a given condition, there might not be anything to try to refine. They do note that the institution where the study was performed has a robust alert review process that has been in place for a number of years – a factor that might actually lead to underestimating the effectiveness of the XAI approach. For institutions that aren’t looking closely at alerts, there might be many more alerts found that could be eliminated. The institution also has strong governance of its CDS technology, which isn’t the case everywhere. The authors also note that due to the nature of the study, its impact on patient outcomes and user behavior isn’t defined.

As with most studies, the authors conclude that more research is needed. In particular, the findings need to be explored at a number of organizations or in a multi-center setup. It would also be helpful for those responsible for maintaining CDS to have a user-friendly way to visualize the suggestions coming out of the model as they’re rendered. It will be interesting to see whether the EHR vendors that already have alert management tools will embrace the idea of incorporating AI to make those tools better, or whether they’ll choose to leverage AI in other, more predictable ways.

Is your organization looking closely at alerts, and trying to minimize fatigue? Have users noticed a difference in their daily work? Leave a comment or email me.


Currently there are 5 comments on this article:

  1. We don’t routinely review our alerts (unless they break) and we constantly make things worse for ourselves.

    Any time there is a root cause analysis, someone decides we need an alert to prevent a similar situation in the future, even if the alert is unlikely to be helpful. But the Joint Commission and CMS deities have to be assuaged, and new alerts seem to be an acceptable sacrifice on the altar of corrective actions.

    Paradoxically, the worst set of recent alerts came from the Leapfrog Group, which wants mandatory alerts whenever a pregnant patient is taking a medication. From what we’ve been able to tell, this has made care much worse: the alerts show up mainly to the obstetricians who already know the patient is pregnant, or show up repeatedly to other clinicians who have made an intentional decision about a medication knowing that the patient is pregnant. The other problem is that the alerts frighten people into inappropriately stopping an indicated medication. For example, in a patient with a significant history of depression, stopping an antidepressant can result in recurrent symptoms. However, the risks of continued treatment are typically much lower than the risks of recurrent depression, which can compromise the health of the pregnant individual and the fetus. The pregnancy-related alerts do a poor job of delineating these factors, and the net effect is one of clinician annoyance, greater alert fatigue, and, in some instances, even poorer care.

    Having a greater ability to customize alert firing and alert text based on AI in these situations would be a real benefit!

  2. Regarding alert fatigue, I can assure you this is not just a phenomenon in Medicine. All of Computing is rife with it.

    I am aware of several very large computer systems with elaborate alerting capabilities. They generate vast numbers of alerts that are routinely ignored because so few of them are actionable and the sheer volume is overwhelming.

    Alerting system logic routinely looks like this: If Free_Disk_Space < 20% Then Trigger_Alert. Very, very simple logic.

    The problem is that as volume sizes get larger, the threshold you actually want to alert on goes down dynamically (20%, 18%, 16%, 14%, …). It gets worse: actual disk utilization growth patterns vary dramatically. Some volumes are extraordinarily stable in their growth, while others swing a lot.
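
    The shape of what I was reaching for was roughly this – just a sketch with made-up numbers, not anything I got working – alerting on projected days until the volume fills, based on recent growth, rather than on a fixed free-space percentage:

        def should_alert(total_gb, used_gb, growth_gb_per_day, warn_days=14):
            # Project how long until the volume fills at the recent growth rate.
            # growth_gb_per_day would come from historical utilization samples,
            # which is exactly the supervisory data my tooling didn't expose.
            free_gb = total_gb - used_gb
            if growth_gb_per_day <= 0:
                return False  # flat or shrinking usage: nothing to warn about
            return free_gb / growth_gb_per_day < warn_days

        # Example: 10 TB volume with 600 GB free, growing 50 GB/day -> ~12 days -> alert.
        print(should_alert(total_gb=10_000, used_gb=9_400, growth_gb_per_day=50))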

    I actually attempted to create a viable algorithm to capture at least some of this logic. It required a mathematical formulation I could not work out, and nothing I was working with provided the supervisory information regarding historical growth patterns. So my initiative failed.

    I have literally never seen alerting logic that was sophisticated at the source. The best you ever see is a meta-layer that reprocesses alerts, perhaps attempting to filter out some of the non-actionable stuff or adding additional logic to make the alerts more useful.

    Alert fatigue appears to be everywhere.

  3. Re: Demanding unnecessary tests

    I can actually see why people would do this. The general public isn’t going to know what conditions are a good match for CDS rules. They may not even know this is possible. And the clinicians are not so good at explaining situations like these.

    My experience was a near-continuous rotation of specialists in and out of the care environment. When they were available, we often were not, and vice versa. Meetings with the family were a rarity, quite honestly. And families have a lot on their minds.

    Tests appear to offer a straightforward yes-or-no answer, even if they don’t actually achieve that in practice (due to uncertainty in the test itself).

    Situation when a test is performed. Family member to clinician: A test for X was done. What did it show?

    Situation when a CDS rule is used. Clinician to family member: Statistics show that in this situation, 97.3% of patients have X. Family member to clinician: OK, but you didn’t actually check my parent then?

    • I’ll second this, and add that the more stories that come out about women in general – and especially Black women and overweight women – being misdiagnosed and underdiagnosed, and subsequently undertreated or simply untreated, for conditions that would otherwise have been manageable had they been properly assessed when they first reported symptoms, the less interest and sympathy I have for hearing doctors talk about “unnecessary” tests. Maybe, in aggregate, the likelihood of me having diagnosis X is low, but if you *miss* it, the penalty to me — not to you, doctor — is *very very high*.

      • There’s another issue that needs to be discussed. Dr. Jayne mentioned that patients and families ought to trust the Clinicians recommending CDS rules. Frankly, this is problematic.

        It is widely accepted in Medicine that second opinions are a Good Thing. Is this not predicated, not on a lack of trust exactly, but on a recognition of grey areas in the information available about the patient, combined with grey areas in the appropriate treatment strategies?

        Is it not also widely accepted in Medicine that patients and families ought to speak up, to question, to probe, and even to challenge? This gets to the whole issue of patient rights and of becoming active participants in care-giving.

        Speaking as someone who has been there, one of the hardest things was knowing when, where, and how to question. We made multiple mistakes, and regretted some of those mistakes.

        Tests, quite frankly, seemed like one of the few areas where we could make clear-cut decisions in our loved one’s best interests. It’s taking an active role. It’s evidence-based. It contributes to the care team without undercutting people.

        In order for my family to accept CDS rules-based treatment, we’d need to understand how those CDS rules were superior to testing. And that patient education component was entirely absent from the clinical environment.
