Curbside Consult with Dr. Jayne 12/27/21
I’ve been working on a project with a new client for the last couple of months, and it’s been interesting, to say the least. It’s been a great example of why you need to make sure that you have the right people at the table while you’re proposing the project, while you’re designing the technology, and especially while you’re executing it.
I was brought in initially to be a purely clinical resource, working with an existing clinical team to develop some evidence-based content that would feed an organization’s rules engine. It seemed straightforward until I really got into it.
The existing clinical folks were purely clinical and didn’t understand what happens when you try to put clinical information into a rules engine, or how to think about documenting what needs to happen. They had an extensive inpatient background, but minimal experience in the outpatient world, which is where the new content was to be used. They were also heavily academic, without a good understanding of how a busy ambulatory practice runs or how “just in time” the content needed to be to keep users from being overwhelmed. They were willing to learn, though, and once we brought them up to speed with a mini clinical informatics course, they were able to hit the ground running.
We were given some parameters around how we needed to document the content specifications, but since the front end of the rules engine hadn’t been built yet, we were somewhat at the mercy of the designers to understand how users would interact with our content. Wireframes were available, but they don’t always tell the full story of how a workflow is going to feel or function. As we started presenting our finalized content, it became clear that we had built it to a finer level of specificity than the rules engine could support. We had to do some rework to eliminate some of the granularity while maintaining the intent of the clinical rules. Although the result was workable, I wasn’t entirely happy with it, but I understand that there often has to be some give and take as software is built.
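To make the granularity problem concrete, here is a minimal sketch of the kind of rework involved. Everything in it is invented for illustration (the field names, thresholds, and advisory text are hypothetical, not the client’s actual rules engine): a rule specified to branch on specific values has to be collapsed into a single coarse condition the engine can evaluate, with the nuance pushed into the displayed text.

```python
# Hypothetical sketch of the granularity mismatch; field names and
# thresholds are invented for illustration, not the client's actual system.

# What the clinical team specified: branch on specific values.
def statin_reminder_fine_grained(patient):
    ldl = patient.get("ldl_mg_dl")
    if ldl is None:
        return "order lipid panel"
    if ldl >= 190:
        return "recommend high-intensity statin"
    if ldl >= 160:
        return "recommend moderate-intensity statin"
    return None

# What the engine could support: a single coarse condition, with the
# clinical nuance moved into the advisory text shown to the user.
def statin_reminder_coarse(patient):
    ldl = patient.get("ldl_mg_dl")
    if ldl is not None and ldl >= 160:
        return "elevated LDL: review statin therapy per protocol"
    return None
```

The coarse version preserves the intent of the rule (flag the patient for statin review) while discarding the branching the engine couldn’t express.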
After several months of development, it was time to do some user acceptance testing, and we had our first look at the user interface in action. The physicians really liked the look and feel of how the rules engine connected to the workflow. It was almost seamless, until you got to the point where it was actually running. There were some definite performance lags as the application tried to serve up the content we had built. At this point, the development team asked how it was supposed to work and whether the users were supposed to be documenting by exception or documenting certain elements of the workflow. Since the physicians had been specifically told to build the content to support documentation by exception, this was surprising.
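For readers less familiar with the distinction, here is a minimal sketch of the two documentation models the teams apparently had in mind. The element names and data structures are invented for illustration and are not the product’s API: with documentation by exception, normal findings are assumed and only deviations are recorded, so the engine evaluates far fewer items per encounter than when every element is documented and evaluated.

```python
# Hypothetical illustration of documentation by exception vs. documenting
# every element; names and defaults are invented, not the actual product.

DEFAULTS = {"heart_sounds": "normal", "lung_sounds": "clear", "gait": "steady"}

def document_by_exception(findings):
    # Only deviations from the defaults are recorded, so the rules engine
    # evaluates a handful of exceptions per encounter.
    return {k: v for k, v in findings.items() if DEFAULTS.get(k) != v}

def document_every_element(findings):
    # Every element is recorded and evaluated, even the normal ones; if
    # each evaluation costs a rules-engine round trip, the lag adds up.
    return dict(DEFAULTS, **findings)

findings = {"lung_sounds": "wheezing"}
print(document_by_exception(findings))   # {'lung_sounds': 'wheezing'}
print(document_every_element(findings))  # all three elements, two defaulted
```

If the content was built for the first model but the engine was evaluating it under the second, it’s easy to see how the extra evaluations could show up as the lag we observed.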
The mismatch between how we expected the content to perform and how the developers thought it would perform turned out to be the cause of some of the performance issues. This could have been avoided by having more cross-functional discussions earlier in the project, where everyone reviewed the specification documents together and could ask questions directly. I understand the motivation for not bringing everyone together initially: there were concerns about coordinating schedules, expensive resources were to be used only when needed, and there was an understanding that the project managers were coordinating everything. Ultimately, though, it led to rework, so I’m not sure how much that decision actually supported efficiency or cost savings.
Working together, we found a few things that needed to be adjusted in the content, and the developers began working on changes to the rules engine. To the team’s credit, they did a quick turnaround, and at our next testing session, the workflows were performing much better. We were all very excited to get it in front of users from one of the organization’s practices and held a very small testing session where everything passed with flying colors. The next step was to release it to a single practice that had agreed to serve as our beta client.
The team had planned a combined testing/training session to accomplish a couple of goals: first, making sure the rules engine performed under stress, and second, making sure that the training materials and training strategy met the users’ expectations. We identified a couple of places where the materials could be tweaked for clarity, and the performance lags we had seen in the initial testing environment seemed to be gone. Everything was ready for the move to production a couple of weeks later, but unfortunately, the biggest challenge was yet to come.
The organization likes to roll out new features on a Wednesday since it’s typically calmer for an outpatient practice than a Monday. It also allows users to get used to the content for a couple of days and then have the weekend to recover if the new feature creates a stressful level of change in the workflows. They had asked the physician content creators to be available in case there were questions about the clinical aspects of the rules, so we had all blocked our schedules and were ready for the big day. It’s always exciting to see something become reality after it’s been largely theoretical for so many months.
Unfortunately, on Monday, the execution phase of the project started falling apart. Apparently, the operational leaders the project manager had been talking to hadn’t mentioned that two of the practice’s busiest physicians had planned to take the week off to attend a conference and wouldn’t be present for the go-live. The team was happy to support whoever was available to go live, but we knew that there would likely be budgetary concerns about having the entire support team, including the physician content team, available for a secondary go-live with the remaining members of the practice. We couldn’t just push the go-live back a week because of concerns that the physicians would be busier than normal when they returned from the conference.
Pushing two weeks into the future would put us in the middle of Thanksgiving week when key staff would be out of office, and the next week would be a post-holiday week with a potential volume surge due to having been closed. Following that, the schedule was peppered with absences due to pre-holiday vacations, followed by the Christmas holiday and more planned vacations. That failure of operational communication has now pushed the go-live from early November into late January, which isn’t what anyone expected, and in the meantime, the lead developer announced that he had accepted a position elsewhere.
It remains to be seen how the rollout will go if we ever get to it. Failure to have the right people at the table cost us initially in the development process and then again on the operational side. Looking at the root causes of the communications failures, I’m not sure the project ever had the right level of executive sponsorship to keep it on track or to ensure people were giving it the focus it deserved. As we all know, there’s no test like production, so everyone is eager to get things moving so the feature can be rolled out to the rest of the organization. I’ve already started another engagement with a different client, but I still want to see this one through, so hopefully the January 2022 date will hold.
How does your organization handle shifting timelines? Leave a comment or email me.
Could they not have bought an off-the-shelf, cloud-based system? It seems to me that 80% of the problem is the “we are building this ourselves even though we are a health system, not a software company” attitude.
Your post did an excellent job of outlining the challenges in software design, build, and rollout, all of which encompass an incredible number of variables. Add the gravity of physicians and the inertia of healthcare to the mix and the calculus becomes very challenging. It’s amazing you got this into one article; I suspect you could have written a tome. Please keep us posted!
I worked for a very large integrated health system when they were rolling out a well-known EHR product from Wisconsin across 30+ hospitals. The biggest thing this org did to ensure alignment across the health plan, providers, and IT was to create a new business unit made up of key people from those areas. People had new bosses and knew very clearly that their continued employment depended on success. This greatly improved communication and coordination. The org managed to stick to its rollout schedule once it started, in part by making it clear to everyone involved, directly or peripherally, how important it was to stay on schedule. I was on the edges of this program, but learned key talking points like “it will cost us $1M if a hospital has to miss its deployment window.” They explained that this was the case because all providers and other users of the EHR would have to be trained twice: once for their original window and again months later when they finally got to go live. The whole organization applied the focus and supplied the resources to hit the dates all the way to the end. The kind of communication gaps you describe are far too common. Project managers can’t be expected to be omniscient and to understand the impact of all the assumptions being made by various people.