I spent most of this weekend doing something that I really enjoy. Most physicians dread it, IT people tolerate it, and vendors may or may not love it (depending on whether they’re getting paid for it). I won’t keep you guessing on the riddle of what I was doing – I was running a client’s upgrade project. As a CMIO, I picked up a lot of skills I never dreamed I’d have. It’s fulfilling to use them to help clients who are struggling or who want to take on something bigger than they’re used to handling.
This wasn’t just any upgrade – the client wanted to install new hardware, upgrade the operating system, and upgrade the EHR/billing system all in the same weekend. There are varying opinions on whether that combination of tasks should be done at the same time. Does it make it too hard to troubleshoot? Does it create too much stress for the team? Is it just a bad idea?
Ideally most of us would prefer not to have to do all of these things at once, but unfortunately this client’s parent organization backed them into a corner from a timeline perspective, so we had to make it work. They realized the need to have some project management support so they could focus on other things they needed to complete prior to the upgrade. I was happy to agree, although somewhat nervous about the whole idea.
The client is one that I’ve been working with for some time. Even before I left my full-time CMIO gig, I had done some side consulting work and was therefore familiar with the team’s abilities and work ethic. I knew they had a strong leader – one of those “the buck stops here” types – who wasn’t afraid to roll up her sleeves and get dirty if needed. They also had a proven track record for solid communication and problem-solving. Upgrades of this magnitude aren’t without issues, and I strongly suspected that with those factors in place, we would be successful.
The other asset of this team is its culture. They’ve embraced the idea that it’s OK to ask questions even if it seems to be challenging the status quo or questioning someone’s expertise. All the members seem genuinely motivated to deliver a quality product whether that product is software, connectivity, training, or support. They also have relatively thick skins and don’t take things personally, which is my favorite part of working with them. Sometimes the role of the consultant is to turn over every rock and make sure there isn’t anything hiding under it, even if it makes people uncomfortable. I appreciate being able to do my job without any hurt feelings or drama.
This team also has a strong record of aggressive project management, detailed planning, and constant refinement. They’ve done many individual upgrades over the last half-decade and have continually modified their plans to make sure that every detail has been attended to and that they have planned for a variety of contingencies. When they decided they wanted to try this plan, they already had proven methodologies for doing each of the component parts and it was fairly easy to figure out how we could fit them together.
When they first presented their plan, I was impressed. They had data on each of the last several upgrades they had done, including the elapsed time for various steps and a log of what didn’t go as planned as well as the modifications they identified for the future. They also had worked with the various vendors involved to identify potential timelines and to determine whether the combined project was even possible. A review of their documentation showed that the planning was sound, so the next step was to perform a tabletop exercise and walk through all the moving parts to identify any other potential gotchas.
This was several months ago, but I still remember how they walked through it all, talking through each step and verbalizing the handoffs. Several team members also added specific comments on their steps, such as, “… and now I’m going to stop process A, because we know that if we just pause it we’ll have a problem. Process A is now stopped. Clear for the handoff to the DBA.” It was overkill compared to anything I’d ever seen, but it let me know that they knew their stuff and were ready to tackle something larger. It did feel a little bit like being in mission control for a spaceship launch, however.
Over the last several months, they’ve performed each upgrade separately in a test environment, except for the hardware piece. Although they experienced some performance issues, they were within the expected realm considering that their test servers were several years older than their production servers. They started training end users several weeks ago and ensured that users demonstrated mastery not only of the content, but also of the support process, troubleshooting steps, and downtime procedures that would be needed if something didn’t go as expected.
Our final test came about a month ago when they received their new hardware and did a complete dry run. There were a couple of glitches, but nothing that couldn’t be addressed. All training was complete the week before last and they’ve been in a code freeze, so all that was left was to review the downtime plan and train a couple of stragglers.
Most of my work with this client has been remote (I do so love working in my fuzzy slippers), but I wanted to be on site for the go-live. They’re in a city that has a lot to offer and I headed to town on Thursday to spend time with friends as well as to make sure I was in position if anything unexpected happened. When we took the system down on Friday evening and the clock started ticking, I admit it was a little bit of an adrenaline rush. I wasn’t prepared for what was next though – this is the most anxious I’ve ever been on an upgrade project. It wasn’t necessarily because I was worried, but because it was so quiet. I’m used to getting phone calls here and there with questions about sticky situations and I wasn’t hearing anything from this client.
When we reached our first pre-scheduled checkpoint call, everything was under control and they were even a little bit ahead on the timeline. I’m not used to working with a team that is this capable and organized and found myself having to come up with strategies to just mentally let it roll. The friends I was staying with have a pool, so I spent the better part of the weekend contemplating the mysteries of various kinds of rafts and floats while waiting for my next checkpoint call.
Everything finished early Sunday morning and we were able to get some end users on the system for quick testing before we released it to the urgent care locations that were just getting ready to open. I have to admit, with all the pool time this is the most relaxed I’ve ever been going into post-live support. The urgent cares represent only 10 percent of the typical user load, however, so Monday morning might be a different story.
We’re as ready as we can be – issue tracking processes are in place, people know where they need to be, the communication plan has been reviewed, and I’ve ordered enough food into the command center to feed an army. Every practice site will get a personal visit from someone on the team at some point during the day, whether there are issues or not. And every visit will be accompanied by a snack basket. Maybe it’s because of my roots (don’t ever go visiting without a covered dish, seriously), but I believe in letting the users know that we care how they are doing and in bringing a little something to brighten up the day.
It’s now Sunday evening and the daily close process is running. The nightly backup will kick off soon and I hope everyone is settling in for a good night’s sleep. I’ll let you know later in the week how it goes. Despite its magnitude, this has been a lot of fun.
What’s your favorite kind of IT fun? Email me.