
Curbside Consult with Dr. Jayne 5/13/24

May 13, 2024 Dr. Jayne 17 Comments

I have friends who work for Ascension. The reports coming out of their facilities, which are offline due to a ransomware attack, are quite serious. It sounds like the organization hasn’t done adequate downtime preparation, let alone preparation for a multi-day incident that has taken nearly all of its systems out of commission.

Mr. H reported in the Monday Morning Update that a patient left one facility because it had been two days and he still hadn’t been seen by a physician. This is completely unconscionable. I hope regulators step in immediately.

Hospital leaders in any organization who are not aware of the current ransomware and cyberattack landscape should be removed from their positions of authority immediately. Hospitals need to be drilling for this eventuality on a regular basis: not annually, but monthly.

I’ve written about this before, but one of the most serious near misses of my career, which to me will always be a full miss, occurred during an EHR downtime, when the environment in my facility could only be described as chaos. No one knew where the downtime forms were. Staff were reluctant to engage downtime procedures due to a misplaced fear of “having to fill out a bunch of paperwork,” which is required when a downtime is formally called.

I was working in what was essentially a freestanding emergency department at the time, although it was licensed as an urgent care. Because of that licensing, we could easily have stopped taking new patients while we got things sorted. However, the fear of repercussions from management was too great, and staff continued to bring patients into the exam rooms, leaving the clinical teams scrambling.

Once I found out what was going on and that we were still taking new patients, I called the downtime and demanded we stop bringing in new patients. There’s no need to worry about diversion, EMTALA violations, or turning people away when you’re not licensed as an emergency facility. I’ve been around long enough and practiced in enough challenging environments, including in a tent and out in the field with no support, to know that sometimes you just have to take charge.

Shame on these facilities for putting patients at risk through lack of planning, lack of leadership, and a focus on the bottom line instead of on the patients in front of them. I hope they’re providing counseling for the clinical team members who are experiencing profound moral injury as they are expected to just keep doing their jobs in an untenable situation. One person who reached out to me described it as a “battlefield” situation.

For those of you who are in administrative positions, I urge you to walk to the front lines in various clinical departments in your facility and start asking questions about downtime. It’s not enough to simply trust the reports that are coming out of planning committees and safety assessment committees. My free consulting advice: you need to put your proverbial boots on the ground to find out whether people know what to do or not. It’s not enough to perform phishing tests and to look at the reports that show that people are becoming less likely to click on sketchy links or to visit dodgy websites. People have gotten really good at watching those cybersecurity videos and picking the correct answer on a bland, five-question test.

What you need to know, though, is whether, when push comes to shove and someone has taken control of your infrastructure, your employees know how to see patients. Just as when you’re drilling for an inspection by The Joint Commission and you expect everyone to be able to explain the PASS acronym for using a fire extinguisher (Pull, Aim, Squeeze, and Sweep, for those of you who might not be in the know), you need to ensure that everyone knows how to successfully execute a downtime.

Back to PASS, though. Knowing the acronym isn’t enough. Does your team even know where the fire extinguishers are? If a random person came up to them during a time when there was no inspection, could they say where to find them? A downtime is no different. All staff should be able to articulate the conditions that require a downtime to be called, how to initiate one, the various roles of the team during a downtime, how to find the “downtime box” or whatever supplies they need to use, what the downtime communication plan is, and how to manage critical patient care tasks in the near term while the full downtime procedure is put into place.

Every single healthcare facility needs to know how it will handle a multi-week downtime. News flash: no one is immune to this, and anyone who thinks otherwise needs to seriously reevaluate their leadership readiness. Our facilities need staffing plans to help workers cope with prolonged downtime, including adequate double-checks and safety procedures to account for the loss of systems we’ve all grown to rely on, such as bar code medication administration (BCMA), allergy and interaction checking, and electronic time-out checklists.

At this point, and especially after the Change Healthcare debacle, no one has any excuse for keeping their heads in the sand and thinking, “It couldn’t happen to me.”

[Aurora photo]

This weekend marked the opportunity to cross something off my bucket list without having to leave my home state. The raging G5 geomagnetic storms were the first to hit Earth since October 2003, a time when I was knee-deep in building my practice and didn’t know a solar cycle from the citric acid cycle. Being involved in amateur radio for the last several years has taught me quite a bit about the former, and every day that I move farther from medical school, I forget more details about metabolic cycles than I care to admit.

As a “science person,” I’m happy to see this month’s expanded Northern Lights phenomenon capture the attention of so many people. I personally learned that the aurora comes in all different colors besides the most-often featured green. I hope there are children being inspired by it and considering future careers that involve exploring our universe and all the fantastic phenomena around us. Kudos to my favorite college student for capturing this amazing pic.

Were you able to see the aurora, or do you still have a trip to the northern latitudes on your bucket list? Leave a comment or email me.

Email Dr. Jayne.




Currently there are 17 comments on this article:

  1. We should be concerned that the forthcoming MINIMUM cybersecurity standards will lead to many organizations being even less prepared than has been observed in the Ascension hospitals.

  2. Seems like you are angry, and I believe you should be; I certainly am. There is no excuse in this day and age for this kind of event. It should be as rare as performing surgery on the wrong limb.

    I also believe that we should start calling out the harm that has occurred because of these events and treat them as criminal acts. If someone is harmed because of these attacks, that harm should be laid at the feet of the cybercriminals and, where they are state sponsored, their government. Where someone dies, that should be treated as homicide. Where corporate negligence is apparent, not only should the cybercriminals be held to account, but so should the CEO and CTO.

    They cannot say that they did not know.

    Who is to blame here? The cybercriminals, certainly, and the governments that support them. But how about the vendors who don’t maintain their products, don’t pen test, or put off fixing medium and high vulnerabilities in favor of “new functions”? How about the hospital systems that don’t maintain a threat profile for all of their products and don’t know which vulnerabilities will be resolved, and when?

    The 2018 ransomware attack on Allscripts’ Professional EHR should have sent shockwaves through the industry. It didn’t.

    https://www.hhs.gov/sites/default/files/ransomware-healthcare.pdf

  3. Personally, I think that any healthcare organization that has experienced a cyberattack should have the root causes publicized after a thorough and rapid independent RCA, whether the fault lies with the vendors, the hospital organizations, or individuals.

    Knowing that your failures will be publicized, quickly and by third parties, will either make you hunker down or open your kimono and improve. Knowing why another organization failed will be a wake-up call to your organization that you have the same vulnerabilities, although I can’t imagine how an organization wouldn’t know at this point.

  4. I agree. My experience has been that failure conditions vary quite a lot. This causes tremendous variations in how the organizations respond.

    It has also been my observation that simply being well read on the disaster plan does not adequately prepare you for the disaster conditions. It’s a bit of a shock to realize that when Paragraph 17 on Page 135 says “begin the Call-Outs for staff,” you have to locate the offline Call-Out list, you don’t know who updates it or when it was last updated, and the office IP phones might not be working anyway!

  5. I’m on board with the outrage re: leaders and institutions that are not prepared for ransomware, though I have to also point out that we lived on paper before Meaningless Use forced us off paper not that long ago, so…life needs to find a way. But, possibly more to the point, I’m surprised that, as a ham, you didn’t also point out the incredible bullet we dodged this weekend with those G5-level storms. This was never going to be the “killshot” that takes down the world’s power grid writ large, but it very easily could have been much more destructive to local power companies or discrete systems (think hosted EMRs). If your emergency management team wasn’t sending updates on this storm all weekend, make sure you save some of that lack-of-prep outrage for those people as well. We have not yet reached solar maximum in this solar cycle, and the ionosphere continues to be affected by the polar excursion. Even G5 storms shouldn’t produce aurora visible in Florida. A Carrington-level event is overdue. The Sun has put us on notice, friends. Shame on us all if we don’t react accordingly.

  6. One thing that is very clear from the Change Healthcare failure, and now Ascension, is the lack of any usable backup. The real solution is of course to stop the hack, but that seems to be a long way off.

    I posted this on the main HIStalk blog, but given the enormity of this issue, I repost it here:

    Seems to me you could have a backup copy of a segment of the patient EHR (say, all current inpatients plus anyone seen as an outpatient in the last month) stored locally and updated each night, plus, on the local server, a subset of the code needed to do just basic functions, like orders and results.

    The question is: why hasn’t any of the big 5 vendors already done this, and why haven’t CIOs demanded it?
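
    To make the idea concrete, here is a minimal sketch of what that nightly “local segment” job could look like. Everything in it is an assumption for illustration (the encounter model, the file layout, the 30-day lookback); it is not any vendor’s actual downtime feature.

```python
# Hypothetical sketch only: select current inpatients plus anyone seen as an
# outpatient in the last month, then write their charts to a dated local
# snapshot file. The Encounter model and file layout are invented for
# illustration, not taken from any real EHR.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timedelta
from pathlib import Path

@dataclass
class Encounter:
    patient_id: str
    encounter_type: str   # "inpatient" or "outpatient"
    start: datetime
    discharged: bool

def select_snapshot_cohort(encounters, as_of, lookback_days=30):
    """Current inpatients plus outpatients seen within the lookback window."""
    cutoff = as_of - timedelta(days=lookback_days)
    cohort = set()
    for e in encounters:
        if e.encounter_type == "inpatient" and not e.discharged:
            cohort.add(e.patient_id)
        elif e.encounter_type == "outpatient" and e.start >= cutoff:
            cohort.add(e.patient_id)
    return cohort

def write_snapshot(encounters, out_dir, as_of=None):
    """One nightly run: dump the selected charts to a local JSON file."""
    as_of = as_of or datetime.now()
    cohort = select_snapshot_cohort(encounters, as_of)
    payload = [asdict(e) for e in encounters if e.patient_id in cohort]
    out_path = Path(out_dir) / f"snapshot_{as_of:%Y%m%d}.json"
    out_path.write_text(json.dumps(payload, default=str, indent=2))
    return out_path
```

    As the replies below point out, the export itself is the easy part; the hard part is everything wrapped around it.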

    • What makes you think that none of “the big 5 vendors” have done this?

      • Since clients of the 2 biggest players have been hit, I guess they don’t have it. And if they have done it, why are they keeping it a secret? Sounds like a heck of a marketing sales tool.

    • It’s a non-trivial job to do this. I’ve seen multiple major HISs over the years, and only one has done anything like this.

      When you phrase it as a “subset of code,” it sounds easy. However, the code isn’t organized that way, the HIS functions aren’t organized that way, and the users’ business processes aren’t organized that way. It has to be an actual designed subsystem, and thus it looks a bit like an independent module.

      Think of it this way. Imagine you have an Admitting function, a button perhaps. It says, “Click Me to Do X”. It’s an important function and it gets used a lot, but in the Downtime system, the data for that button doesn’t exist (there will be an important reason for this, just accept the premise). The button is still there and so the users click it, because they’ve been trained to do so. Or what if the button has been removed to account for the missing data; now the users are confounded because an important part of their job and responsibility vanished with the removal of the button.

      A Big 5 Vendor, whose name rhymes with “Aspic,” has created a downtime system. Their system is called Business Continuity.

    • Frank, you have the right idea with a locally hosted solution. There are eForms systems where you can view a recent copy of a patient record and enter information into the eForms instead of onto paper; the eForms can then upload discrete data when the EHR is back online. In addition to the obvious patient treatment issues that an EHR being down causes, there is a tremendous amount of labor needed to enter information from paper forms back into the EHR when it’s back online.

      If you had an air-gapped, locally hosted, clean copy of the EHR (no existing patient data), new patients could be seen instead of being diverted.

      Unfortunately, the resources to run and support local servers at many health systems were probably eliminated with the move to cloud computing.

      Will anyone conduct a follow-up study on the ROI of moving to cloud-hosted systems, factoring in the cost of a cyberattack?
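
      As a rough illustration of that capture-and-replay idea (not any specific eForms product), downtime entries could be queued locally as discrete data and pushed back to the EHR once it returns. The post_to_ehr callable below is a hypothetical stand-in for whatever interface the real system exposes.

```python
# Hypothetical sketch: queue structured downtime entries locally (SQLite), then
# replay them as discrete data when the EHR is reachable again. "post_to_ehr"
# is an assumed callable, not a real vendor API.
import json
import sqlite3
from datetime import datetime

def init_queue(path="downtime_entries.db"):
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS entries (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        captured_at TEXT, patient_id TEXT, form TEXT,
        payload TEXT, uploaded INTEGER DEFAULT 0)""")
    return conn

def capture_entry(conn, patient_id, form, data):
    """Store one eForm submission locally while the EHR is down."""
    conn.execute(
        "INSERT INTO entries (captured_at, patient_id, form, payload) VALUES (?, ?, ?, ?)",
        (datetime.now().isoformat(), patient_id, form, json.dumps(data)))
    conn.commit()

def replay_entries(conn, post_to_ehr):
    """Once the EHR is back online, push each queued entry and mark it uploaded."""
    rows = conn.execute(
        "SELECT id, patient_id, form, payload FROM entries WHERE uploaded = 0").fetchall()
    for entry_id, patient_id, form, payload in rows:
        post_to_ehr(patient_id, form, json.loads(payload))  # retry/error handling omitted
        conn.execute("UPDATE entries SET uploaded = 1 WHERE id = ?", (entry_id,))
        conn.commit()
```

      Whatever is still flagged as not uploaded is also a running measure of the rekeying labor described above.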

    • In my experience, it’s extremely hard to get health systems to care about this stuff. It requires a level of organizational competence that they simply don’t have.

      Think about the implementation of having a parallel system that can handle network outages. End users only interact with the system for a few hours a year. There’s no way to effectively train on the system, there’s no way to get end users to care, you have to check that it works in each location regularly, and you have to fight for the budget to handle something that the business thinks shouldn’t happen if you did your job right in the first place.

      Absent being forced by the government, I don’t see US management culture doing anything about the problem.

    • To make such a system, you are hit with important design decisions. Right away!

      Just off the top of my head:

      1). Is the Downtime (DT) system a Read-Only (R-O) copy of the main system, or does it permit data entry?

      2). The Downtime system needs an elaborate synchronization system. If it’s R-O that can be one-way, but if it accepts data entry, the synching must be 2-way. And 2-way synchs open up the possibilities of data conflicts;

      3). The DT system really needs to be arms-length from the host. The more distance you can achieve, the more events the DT system can protect against. This complicates the architecture somewhat, but not enough to avoid the matter;

      4). You are going to have to decide, does this DT system look exactly like, or very close to, the main system? A host mimic has the advantage that it minimizes training needs. However you are going to want users to be Very Clear when they are on the DT system. There are conceptual advantages to deliberately making the UI different;

      5). You need some sort of control system, a switch. Users should either be on the host system or the DT system, but having users on both is usually going to be bad. You don’t want the problems this will entail. OK, so what is the switch? And is it Advisory only, or can the switch actually mandate systems availability?

      6). There’s another matter. Any kind of event that can take out your main HIS? It’s going to severely tax your IT department. The less IT involvement it takes to switch to the DT system, the better.

      7). It cuts the other way too. When the DT system isn’t needed, frankly, it’s not going to be a priority. We need to be realistic about this. Thus, the DT system needs to be as automatic as possible. The more setup and maintenance it needs, the more likely it is to NOT be available and ready when it is needed. You want that DT system to be idiot-proof and drop-dead simple to administer. This is a very high design bar to achieve.

      8). There is another matter. Under emergency conditions that have activated the DT system? You’d like a way to extend this system and fix any administrative neglect that may have happened. It may not be realistic, at all! But you want it just the same. The demand for it will have gone through the roof and staff will be realizing that parts of it aren’t working, for whatever reasons.

      See how this isn’t “just a subset”?
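
      Points 1, 2, and 5 can be made concrete with a small sketch. Under the simplest choices (a read-only copy, one-way nightly sync, and an explicit switch that refuses to activate on stale data), the control logic might look roughly like this; the paths and thresholds are assumptions, not a real product.

```python
# Hypothetical sketch of the "switch" for a read-only downtime copy that is
# synced one-way from the host. It refuses to activate if the latest snapshot
# is too old to trust. Paths and thresholds are illustrative assumptions.
from datetime import datetime, timedelta
from pathlib import Path

SNAPSHOT_DIR = Path("/var/lib/dt_snapshot")   # assumed location of nightly copies
MAX_SNAPSHOT_AGE = timedelta(hours=36)        # tolerate one missed nightly sync

def latest_snapshot():
    """Newest nightly snapshot file, or None if none exist."""
    if not SNAPSHOT_DIR.is_dir():
        return None
    files = sorted(SNAPSHOT_DIR.glob("snapshot_*.json"))
    return files[-1] if files else None

def activate_downtime(now=None):
    """The switch: point users at the read-only copy, or refuse loudly."""
    now = now or datetime.now()
    snap = latest_snapshot()
    if snap is None:
        raise RuntimeError("No downtime snapshot found; fall back to paper.")
    age = now - datetime.fromtimestamp(snap.stat().st_mtime)
    if age > MAX_SNAPSHOT_AGE:
        raise RuntimeError(f"Latest snapshot is {age} old; too stale to trust.")
    # In a real deployment this would flip a flag that client applications poll,
    # not just return a path. Because the sync is one-way, nothing entered on
    # the downtime side flows back to the host automatically.
    return snap
```

      Keeping the copy read-only sidesteps the two-way conflict problem in point 2, at the cost of pushing all downtime data entry onto something like the eForms queue sketched above.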

      • I’m sure it’s not easy; it would take a fair amount of thought as to which functions to include and how. But if it costs $10 million or more to recover, and a vendor could build this for maybe $5 million (one time), it sounds like a “slam dunk,” not to mention the marketing and sales value.

        • Hey, as a Tech guy, I buy into it. Completely!

          But I’ve been around too, and I see the trade-offs. There are opportunity costs to building an elaborate, highly functional downtime system.

          Look, many years ago I pursued an enhancement request for an HIS. It related to how dates were handled and it would have required significant resources to implement. But there would have been a payoff at the end. I could never find anyone who disagreed with my idea, in principle.

          It turned out, my idea wasn’t new. At all. In fact it was old and had been around for many years.

          Every year, it would be on the list of enhancement requests. Every year, it would go up against the list of other enhancement requests. Those other requests were mostly of a Clinical nature and they always got the go-ahead when ranked against my piddly date issue.

          The date issue never went away, but it never got resolved either. Users’ functional/Clinical issues always got priority when put up against a Tech issue.

          This seems a relevant comparison. It’s pretty easy to cast any downtime as a Tech issue. Just give it to Tech to handle; they are the experts, after all. And if there are repercussions to the downtime? Tech will take the blame, not the lack of an elaborate Downtime System.

          And if people have regrets about this, after a lengthy and painful downtime? That will pass. Most people will want to move on as quickly as possible. “Just avoid the downtimes” will be the lesson for most (which, conveniently, is a Tech responsibility).

  7. Hacking is rampant. Virtually any business that has computers now needs to be, or employ, a cybersecurity expert. And every employee needs to be trained to be overly suspicious, while having their computer privileges severely limited, or the business is at risk.

    I don’t think the answer is to blame the organizations. Is there more that they could have done? Probably. But at what cost? What effort? If you think the answer to cybersecurity is to be smarter and faster than every hacker, you are doomed to fail. And that’s what we’ve been doing: failing, every day, every week, enriching the hackers and drawing more of them in.

    Instead, I would prefer the civilized world declare war against these hackers. Target the internet services they run on. Destroy any cryptocurrency that allows ransom payments. Punish companies that knowingly assist them. Impose life imprisonment when they are apprehended. Flip the script so that no country or community finds it worthwhile to harbor them.

    I probably sound like a crazy person, and in this context, maybe I am. But when a hospital or UHC has to pay a $22M ransom because some dumbass used a poor password, I think we can agree that our current tactics aren’t working.

    When a bank gets robbed, we don’t blame the bank for using a vault that was manufactured in 2022, instead of upgrading to this year’s model.

    • This comment is SO CORRECT.

      My assessment? IT security has been leveraged to the hilt. What can be done has been done (meaning, within cost, political, and socioeconomic constraints). Sure, the odd situation at this or that company is a scandal, but even if we fix that? It’s not going to move the needle in the big picture.

      These are Defensive tactics, and they are maxed out.

      What is severely underutilized are the offensive tactics. These are police actions, as a rule: find the criminals, charge them, and jail them (or fine them, whatever).

      The intractable problem is, the criminals are using the international powerplay game against us. It’s Realpolitik.

      Russian hackers get shelter within Russia, so long as they don’t attack Russia. Chinese hackers get shelter within China, so long as they don’t attack China. Everyone knows the game. And the host countries? They occasionally make a big show of shutting down a single hacker organization to maintain a veneer of credibility on the international stage. The hackers themselves know what they must do, and not do, to maintain their protected status.

      But overall? I assess that the offensive tactics against hackers are weak and ineffective. It’s like Interpol has been asleep at the switch since the mid-’90s.
