Curbside Consult with Dr. Jayne 12/7/20
My adventures as “just another physician” continued this weekend as our urgent care suffered a crippling EHR downtime.
My location had all of its staffed rooms full, several patients checking in, and a waiting room queue of nearly 20 patients when the EHR began to sputter. At first, only certain parts of the system weren’t working properly, but they were some of the most critical – the assessment and plan, and medication orders. This, of course, created havoc in the discharge process. Because the EHR was merely sputtering, we were hopeful that it was a momentary glitch, so we kept trying to execute our workflows.
Eventually the EHR started spitting back truly unwelcome error messages, such as “server disconnect,” which progressed to the hated “no servers available.” The dysfunction then spread to the practice management side of the house, with check-in and check-out grinding to a halt. Staff members who had kept their systems up rather than trying to reboot at least had access to the tracking board to see which patients were physically in the exam rooms. For those of us who had tried to “turn it off and back on again,” the system was dead in the water and we were unable to access Citrix. (My staff often wonders how or why I even know anything about Citrix, and I must say I owe it all to one engineer who decided to take a young clinical informaticist under his wing.)
As expected, the IT emergency phone line was jammed, leading staff to call other locations to see if the outage was just our problem or everyone’s. We were all in the same unfortunate position, but when asked about instituting downtime procedures, the IT team told us to hold because they were already contacting the vendor. This wasted time and frustrated patients, since we were trying to discharge those already seen so that we would have open exam rooms for the crowd milling at the check-in desk in a non-distanced fashion.
I asked for a paper prescription pad to expedite discharges, but there was some confusion about where it lived and whether it was in the regular narcotics cabinet, the back stock narcotics cabinet, or the administrative office. One clinical tech started phoning prescriptions to the pharmacies and documenting them on Post-it notes while we waited for our site leadership to get their act together.
We were 15 minutes into this veritable goat rodeo with no update from our leadership when I directed the team to go ahead and pull out the downtime binders so we could start moving patients forward again rather than spinning our wheels over what we should be doing next. It took nearly 10 minutes to pull the binders, and then staff had to read the instructions to try to figure out what to do. There was some disagreement from our site leader about whether we should start the process, which added yet another delay.
Fortunately, one of my clinical techs took the initiative to run from room to room and collect names and dates of birth for each patient, which we wrote on Post-it notes attached to two old-school clipboards propped up at the physician workstation. The list of physically present patients didn’t fully match the list of patients on the remaining tracking board screens, so we decided to make the clipboards the source of truth. Everyone updated the Post-its with as many facts as they could remember about the patients, and we queried our laboratory devices for duplicate results on anyone who had recently had testing performed.
That provided enough facts to cobble together the information needed to discharge several patients, although we still had some confusion at the check-out desk as far as collecting payments. I was just happy to have exam rooms in which to install the remaining patients who hadn’t gone back out to their cars to wait, as they had been treated to a bit of a show as staff ran around trying to figure out what to do.
Nearly 30 minutes into the event, which felt like an eternity, we still didn’t have an update from leadership. Having come from a big health system where we lived and died by the strength of our downtime plan, I found that surreal. All the other IT systems were up, so there was no reason they couldn’t have been sending email or text updates to each site or to the physicians, since they already had groups set up for bulk notifications.
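For what it’s worth, blasting a status update to distribution groups that already exist is about as low-tech as IT communication gets. As a rough sketch only (the addresses, SMTP host, and function name below are placeholders I made up, not anything from the actual environment), a few lines of Python would cover it:

import smtplib
from email.message import EmailMessage

# All hosts and addresses below are hypothetical placeholders.
SMTP_HOST = "mail.example-practice.com"
DISTRIBUTION_LISTS = [
    "all-sites@example-practice.com",
    "all-physicians@example-practice.com",
]

def send_outage_update(status_text):
    # Build a short status email addressed to the existing bulk-notification groups.
    msg = EmailMessage()
    msg["Subject"] = "EHR outage update"
    msg["From"] = "it-operations@example-practice.com"
    msg["To"] = ", ".join(DISTRIBUTION_LISTS)
    msg.set_content(status_text)
    # Hand the message to the organization's mail server for delivery.
    with smtplib.SMTP(SMTP_HOST) as server:
        server.send_message(msg)

send_outage_update(
    "EHR and practice management are down at all locations. "
    "Begin downtime procedures; next update in 15 minutes."
)

Even a canned message like that, sent every 15 minutes, would have saved each site from guessing what to do next.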
I continued to see patients, Post-it by Post-it, until the clipboards began to clear. Eventually, the system came back up, but not in its entirety. Restoration came in the reverse order of it going down, with medications, assessment, and plan lagging behind. The only way we knew the system was improving was by constant trial and error as opposed to an “all clear” notice from the practice.
Our downtime policy requires manual entry of all data back into the system, rather than entering only critical or longitudinal data and scanning the paper downtime forms to cover the rest, so the staff immediately became even more stressed, wondering how they would catch up with a continuing flow of patients coming in the door. All told, it took us almost two hours to fully recover and get everything caught back up.
I don’t know whether this was a vendor failure, a hosting failure, an infrastructure failure, or something else, but it’s clear that if there was a failover system for downtime, it didn’t work correctly. It’s also clear that we don’t practice our downtime protocols enough, or spend enough time on them during training. Of the eight staff working at my site, only two of us have ever been through a downtime, and the others were generally unfamiliar with what needed to happen. Since I don’t play any role in the organization other than as a physician, I’m going to keep my thoughts to myself, but I will make sure my IT clients are better prepared than what I just worked through.
Experiences like these should be rare, and although they cannot be prevented entirely, they can certainly be mitigated far better than ours was. It’s a good reminder of how critical it is to maintain good IT practices, even in a pandemic. The patient experience was certainly less than optimal during the episode, and I hope there wasn’t any compromise in care.
When is the last time your organization practiced its downtime routine? Has anyone tested their backups lately? Leave a comment or email me.
Email Dr. Jayne.
“as far as collecting payments” – How many other service industries would send out bills for this kind of performance?
Recommending Burn-In, a recent book by P.W. Singer, to see how bad it can get.
The last time we had downtime was due to an unplanned neighborhood power outage. Luckily, at a very minimum, we print out our schedules daily so that our providers can continue to see patients.