I was out with a friend on Saturday, at least until he had to leave to go to a planned downtime event at work. He mentioned that in all the years he had been with his company, it was rare for a downtime or disaster-recovery prep event to go as planned.
Maybe his industry has less tolerance than we do in healthcare, but it got me thinking about the impact of downtime in the patient care environment. The Journal of the American Medical Informatics Association published an article on this recently: “Clinical impact of intraoperative electronic health record downtime on surgical patients.”
Many of us just read the abstracts, and a quick pass yielded some interesting information. Researchers looked at the impact of EHR downtime events lasting more than an hour over a six-year study window. Specifically, they looked at adult patients undergoing surgeries more than 60 minutes in length during an inpatient stay lasting longer than 24 hours. Since it’s hard to do certain kinds of controlled studies on events like this, they matched more than 4,000 patients exposed to one of 176 downtime episodes with 4,000 patients who weren’t similarly exposed.
Looking at the math superficially, this means that the facility was averaging more than 29 downtime episodes a year, each lasting more than an hour. That’s pretty striking – approximately one every 12 days. I’ve never worked in a facility that had that kind of downtime and I can’t imagine the anxiety that clinicians might feel in that situation.
The authors found that although patients exposed to a downtime event had slightly longer operating room times and postoperative lengths of stay than unexposed patients, 30-day mortality rates weren’t any different. In short, there wasn’t an appreciable link between the length of the downtime event and significant adverse events.
I wondered whether the sheer volume of downtime episodes might have been protective in this facility and decided to dig deeper than the abstract to find out more about the study site. The devil is in the details in this scenario, especially since the data was gathered at the Mayo Clinic. The identified downtimes could have occurred in any of the seven applications considered core clinical systems in support of the operating room. These included the anesthesia information management system, PACS, CPOE, clinical documentation, an integrated clinical viewer, the surgical information recording system, and the surgical coordination system.
Researchers categorized the length of each downtime as well as its impact – whether limited functionality remained available or whether it was a complete outage. Scheduled downtime events were excluded, as were those less than 60 minutes long. When matching exposed and unexposed patients, the team looked at day of the week as well as time of day to control for any variation in staffing, facilities, and EHR load. The patients were also paired according to surgical specialty, emergent / non-emergent status, and physical status.
The typical downtime was on a weekday between 7 a.m. and noon and was not a complete outage. The most commonly impacted systems were the integrated clinical information viewer, PACS, and CPOE. Surgical subspecialties most commonly impacted included general surgery, orthopedic surgery, and cardiac surgery. The median age of patients was 61 years, with a range of 49 to 71.
Although 30-day mortality wasn’t impacted by downtimes, intraoperative duration was about 10% longer for procedures where there were outages or interruptions. Longer operative times have been linked to greater risks of complications and can also lead to higher costs for the facility. In my experience, this impacts physician morale as well, with surgeons who feel their schedules have been delayed becoming irritated and at times agitated. The operating suite is one of the parts of the hospital where the adage about time being money is truly applicable. The researchers also noted a 4% increase in length of stay, which has its own cost implications. Both increases underscore the need to have strong plans in place to help staff contend with unplanned downtime.
The authors further conclude that there is a need for future studies comparing scheduled vs. unscheduled downtime and parsing it down to specific systems to determine impacts at a more granular level, as well as for data from different facilities and healthcare settings. They also identified a limitation in the matching, namely that procedures weren’t matched year by year. Since there are constant changes in surgical technique and significant changes in some procedures, the year could have been a confounder. They further noted that, “In this context, it is not possible to generalize the results of this study at our institution to the potential impact of resilience and specific contingency planning to other hospitals.”
I don’t see other facilities lining up to bare their downtime data. Additionally, investigators at other institutions may not have the robust longitudinal downtime data that these authors had access to, and they may not have the full cooperation of information technology staffers. I still see hospitals where the culture of fear is alive and well and where efforts to study incidents in order to improve processes may be met with suspicion. There are also those whose downtime processes are so disorganized that they wouldn’t be suitable candidates for study.
I got a surprise Saturday evening when my friend reappeared unexpectedly from his downtime event. His comment about his company’s events not going as planned was prophetic because they actually canceled the downtime before it even started. It was good for a chuckle, although the theoretical risk of downtime events in the patient care environment is no laughing matter.
I’d be interested to hear what readers think about this EHR downtime study and whether they believe their institutions would be willing to undertake that type of analysis of their own data.
Got downtime? Leave a comment or email me.