Applying Lean Startup Principles to Optimization
By Tyler Smith
If you haven’t had the chance to read Eric Ries’ 2011 bestseller The Lean Startup, I highly recommend adding it to your reading list. Typically, I am not a big fan of business literature, but I found the book particularly stimulating, largely because its concepts can be readily applied to that currently hot phase of EMR projects – optimization.
After all, entrepreneurism, Ries insists, is not limited to dorm rooms and Silicon Valley garages. Instead, Ries contends that the processes inherent to entrepreneurism can and should take place in large, established institutions – say, large healthcare organizations – via the efforts of “intrapreneurs.” Ries goes on to outline the principles of the lean startup, and his fourth principle – Build-Measure-Learn – provides an excellent framework for the optimization phase of EMR systems projects.
The build-measure-learn feedback loop, according to Ries, is one of the key activities that entrepreneurs and “intrapreneurs” alike must perform. In the build-measure-learn feedback loop, minimum viable products (MVPs) are built by entrepreneurs to test certain product and market hypotheses. These MVPs are launched quickly in order to enable entrepreneurs to gather relevant data fast – prior to making large investments of time or money. Using the data generated by the MVP launch, entrepreneurs must then swiftly validate or refute their hypotheses. If the MVP data does not clearly point to success, then the entrepreneurs must use what they learn about their MVP to iterate by building another prototype based upon a modified or newly formed hypothesis and start the cycle all over again.
Here is an example of how I see the feedback loop being utilized during EMR system optimization:
- Hospital administrators have mandated that population management be the first major undertaking of the optimization team.
- As the first order of business for the population management initiative, the optimization team is tasked with implementing a health maintenance alert mechanism.
- While the alert mechanism could be instituted in a number of ways, the optimization team meets and decides that, since feedback has indicated providers prefer mobile alerts to desktop alerts, it will send providers a daily, HIPAA-compliant text message containing patient-specific health maintenance alerts.
- Using the small batch approach advocated by Ries, the optimization team implements the text messages for breast cancer screening and HIV screening only (their MVP) with the intention to expand the text message content to other conditions if the MVP is successful.
- After implementation, the optimization team follows up with the end users every few days to check on the initiative, only to learn that most providers aren’t really using the functionality.
- When the team queries staff, they learn that providers are not receiving the daily text message until after they have seen the first patient of the day, and that the messages are long and cumbersome.
- After reviewing the data, the team must decide whether the whole idea should be scrapped or whether a few tweaks will fix the MVP’s obvious issues.
- The team theorizes that the lack of effectiveness of their MVP is due to lengthy and poorly timed text alerts.
- Based upon their conclusion, the team makes the decision to send shorter messages at 5 a.m. each day.
- The team builds and launches this new MVP and thus the loop starts over.
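The decision logic running through the steps above can be sketched in a few lines of code. This is a hypothetical illustration only – the `AlertMvp` fields, the adoption-rate threshold, and the feedback numbers are all assumptions made for the sake of the example, not part of any real EMR system:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class AlertMvp:
    """One iteration of the text-alert MVP."""
    conditions: tuple        # screenings covered by the alerts
    send_hour: int           # hour of day the daily message goes out
    max_message_chars: int   # cap on message length

def measure(adoption_rate: float, threshold: float = 0.5) -> str:
    """Decide whether the MVP data 'clearly points to success'."""
    return "persevere" if adoption_rate >= threshold else "iterate"

def learn_and_rebuild(mvp: AlertMvp) -> AlertMvp:
    """Apply the team's two fixes: shorter messages, sent at 5 a.m."""
    return replace(mvp, send_hour=5, max_message_chars=160)

# First MVP: breast cancer and HIV screening alerts only.
mvp_v1 = AlertMvp(conditions=("breast cancer screening", "HIV screening"),
                  send_hour=9, max_message_chars=800)

# Follow-up shows most providers are not using the alerts.
decision = measure(adoption_rate=0.2)

# Learn from the feedback and start the loop over with a revised MVP.
mvp_v2 = learn_and_rebuild(mvp_v1) if decision == "iterate" else mvp_v1
```

The point of the sketch is that "measure" and "learn" are explicit steps with their own outputs, not an afterthought tacked on once the build ships.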
In many institutions where the build-measure-learn feedback loop is not used, teams check an optimization off as complete after the implementation step – the fourth bullet above. This premature ending of an initiative is not necessarily caused by a lack of understanding of the need for follow-up; more often it is due to the long list of optimizations still waiting to be executed. Teams in this position are tasked with implementing a large quantity of optimizations, or with checking off a few high-profile ones, but not with actual optimization as the end result.
Teams in this category fall prey to what Ries calls vanity metrics: data that companies use to bolster their perceived success but that does not measure progress toward the actual stated goal. Teams tasked with long laundry lists of items to check off are especially prone to this trap. If completing a laundry list of optimizations lets the team tout that it has accomplished x number of them, while end users feel the system has not really been optimized at all, then that x is a vanity metric. Teams must resist the allure of vanity metrics and ensure that a solid feedback loop is in place.
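The difference between a vanity metric and an actionable one can be made concrete. In this hypothetical sketch (the item names and feedback values are invented for illustration), the count of completed optimizations is the number a team might tout, while the adoption rate is the number that actually reflects whether optimization happened:

```python
# Hypothetical log of completed optimization items and end-user feedback.
completed_items = ["text alerts", "order set cleanup",
                   "note template", "report tweak"]
provider_feedback = [  # (item, is the end user actually using it?)
    ("text alerts", False),
    ("order set cleanup", False),
    ("note template", True),
    ("report tweak", False),
]

# Vanity metric: a raw count, easy to tout, says nothing about outcomes.
vanity_metric = len(completed_items)

# Actionable metric: the share of shipped optimizations actually in use.
adoption_rate = sum(used for _, used in provider_feedback) / len(provider_feedback)
```

Here the team could report four optimizations delivered, yet only a quarter of them are in use – exactly the gap between the checked-off list and what end users experience.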
Recently, Dr. Val wrote of EMR, “My initial enthusiasm has turned to exasperation and near despondency.” She noted that she is not sure that simply getting the bugs out will fix the issue. I cannot comment on Dr. Val’s situation specifically, but I can say that if the bugs are ever truly going to be worked out, it will take more than checking optimization items off a list. Real optimization will come about through a robust effort by optimizers to build, measure, and learn. That is why the time is so ripe to apply lean startup principles to optimization.
Tyler Smith is a consultant with TJPS Consulting.