
Readers Write: Mission Impossible: Transitioning to Value-Based Care with Health IT Solutions

October 7, 2015 Readers Write No Comments

Mission Impossible: Transitioning to Value-Based Care with Health IT Solutions
By Victor Lee, MD


Your mission, should you choose to accept it, is to partake in the nation’s efforts to transition our healthcare system from volume-based care and fee-for-service (FFS) reimbursement models to value-based care.

If you are in clinical practice or hospital administration, chances are that you have accepted this mission. Like Ethan Hunt, what choice did you really have?

Earlier this year, the US Department of Health & Human Services (HHS) announced specific goals for shifting Medicare reimbursements from volume to value. Under this plan, 90 percent of all traditional FFS Medicare payments would be tied to quality or value and 50 percent would be tied to alternative payment models by the end of 2018. What does all this mean?

For background information, see this fact sheet, which summarizes the payment taxonomy framework that HHS has adopted to categorize its payment reform programs. Briefly, Category 1 is traditional FFS with no link between payment and quality. Category 2 is FFS with a link to quality, which includes pay-for-performance programs such as Hospital Value-Based Purchasing, the Readmissions Reduction Program, and the Hospital-Acquired Condition Reduction Program.

Categories 3 and 4 include alternative payment models, where the difference between them is that category 3 programs are built on top of an FFS architecture (e.g., accountable care organizations, medical homes, bundled payments), while category 4 programs completely move away from FFS and exclusively involve population-based payments (e.g., eligible Pioneer accountable care organizations in years 3-5).

Now that we’ve characterized the impossible mission, let’s look at some tools you can use along your journey. There are no spy trinkets, laser beams, toxin antidotes, or heavy artillery involved. Rather, I am referring to newer, innovative solutions proven to maximize clinical and financial outcomes such as clinical decision support (CDS) and mobile care coordination.

The Office of the National Coordinator for Health Information Technology (ONC) defines CDS as “a process for enhancing health-related decisions and actions with pertinent, organized clinical knowledge and patient information to improve health and healthcare delivery.” A classic example of CDS is a pop-up alert that provides guidance to clinicians at the point of care. However, the Centers for Medicare & Medicaid Services asserts that there are many other common forms of CDS in addition to alerts, all of which may be used to satisfy the CDS objective within its EHR Incentive Programs. Which ones have you used on your mission?

Admittedly, many providers have already successfully implemented a variety of CDS interventions in their EHR systems or are somewhere along that journey, so the concept of implementing CDS for quality improvement is not new. However, many organizations struggle with keeping CDS updated over time as new information from clinical trials, guidelines, and performance measures emerges.

Fortunately, there are solutions to help with this part of the impossible mission, including third-party evidence surveillance or software applications that analyze CDS from EHR systems to identify potential deviations from evidence-based best practices.
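To make that idea concrete, here is a minimal sketch of the kind of comparison an evidence surveillance tool might perform. The topic, order names, and data shapes below are invented for illustration and are not any vendor's actual format.

    # Hypothetical illustration of evidence surveillance over CDS content:
    # compare an order set's interventions against a current guideline list
    # and flag deviations. All names and structures are invented for this sketch.

    current_guideline = {
        "heart_failure_discharge": {
            "recommended": {"ACE inhibitor or ARB", "Evidence-based beta blocker"},
            "deprecated": {"Routine digoxin"},
        }
    }

    order_set = {
        "name": "heart_failure_discharge",
        "orders": {"ACE inhibitor or ARB", "Routine digoxin"},
    }

    def review_order_set(order_set, guideline):
        """Return interventions that are missing from or deprecated in the order set."""
        topic = guideline[order_set["name"]]
        missing = topic["recommended"] - order_set["orders"]
        outdated = topic["deprecated"] & order_set["orders"]
        return missing, outdated

    missing, outdated = review_order_set(order_set, current_guideline)
    print("Missing recommended orders:", missing)
    print("Potentially outdated orders:", outdated)

Run periodically against a refreshed guideline list, even a simple comparison like this surfaces order sets that have quietly drifted from current evidence.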

Care coordination has also been part of a national dialogue, with the Agency for Healthcare Research and Quality (AHRQ) including care coordination as one of its six National Quality Strategy priorities. Care coordination is also explicitly required in certain regulations such as Meaningful Use (mentioned earlier) and the Medicare Shared Savings Program, with the latter specifically requiring the use of “enabling technologies” to support care coordination. So clearly the impossible mission is less likely to be completed in the absence of care coordination, but what solutions are available?

A classic example of a care coordination solution is HIPAA-compliant text messaging. However, newer care coordination solutions take this a step further and incorporate person-centered and evidence-based approaches to ensuring safe and timely transitions of care across providers and venues. Some solutions embrace mobile platforms to ensure accessibility at every point of a person’s care journey.

In summary, our nation’s path toward healthcare reform may appear to be daunting if not nearly impossible. However, the HHS prescription for payment reform and its taxonomy for measuring progress toward its goals includes programs that are dependent on lowering costs, promoting care coordination, and optimizing quality of care. Fortunately, advanced solutions are at your disposal today that transform the mission from one that is seemingly impossible to one that is probable if not inevitable.

This message will self-destruct after we have completed the transition to value-based care.

Victor Lee, MD is vice president of clinical informatics at Zynx Health of Los Angeles, CA.


Readers Write: HIEs Deliver the Promise of mHealth

September 28, 2015 Readers Write No Comments

HIEs Deliver the Promise of mHealth
By Stuart Hochron, MD, JD


The successful transition from fee-for-service to value-based care will require a high degree of coordination and the sharing of real-time health information among physicians and patients. This article describes how quality and cost incentives are encouraging payers and providers to leverage the information contained within health information exchanges (HIEs) to empower providers and patients.

Patient outcomes improve when timely personal health information (PHI) is shared with and among providers and their patients. Reducing preventable hospital readmissions is an example of the power of this information. As a result of recent successes in the acute care and post-discharge environment, payers and physicians responsible for the care of populations across multiple EHRs are seeking ways to (a) avoid treatment delays and improve care quality by sharing PHI among clinicians, and (b) engage and empower patients. Mobile communications that engage physicians and patients and deliver relevant clinical information can help healthcare organizations coordinate quality care, manage cost, and satisfy physicians and patients.

Until recently, providers seeking PHI from multiple EHRs had to access and navigate secure HIE websites using personal computers or mobile devices. Web access has generally been less than user-friendly. The fact that many HIE websites are not mobile enabled and rarely push data to mobile apps has further limited physician-HIE engagement. Today, however, mobile technology has given rise to an increasing number of user-friendly apps that integrate with HIEs and push the type of information that physicians and patients find most useful.

All sectors involved in value-based care can benefit from mobile delivery of HIE data. Government benefits when the transition to value-based care is facilitated. Payers benefit by more efficiently coordinating care, containing cost, and facilitating quality and member satisfaction. HIEs benefit by expanding their services. Physicians benefit from easy access to critical information and from financial incentives that derive from effective value-based care. Patients benefit from greater security that results from knowing when and why their PHI is being accessed and by whom.

The following case studies represent mobile HIE initiatives that add value in different ways.

Case 1 – Patient Status Notifications to ACO Physicians

An ACO managed by a hospital system is implementing an automated status notification system for primary care physicians. It gives ACO physicians the opportunity to participate at an early stage in the care of patients who present to emergency departments or who are hospitalized. In the absence of such information, treating physicians are deprived of the opportunity to discuss details of the patient’s medical history with the patient’s primary care physician. This lack of communication can lead to otherwise preventable hospital admissions, over-prescribing of diagnostic studies, iatrogenic complications, lower patient satisfaction, poorer outcomes, and increased cost.

Automated patient status notification takes advantage of hospital-HIE data connections, whereby PHI is uploaded in real time to the HIE when a patient presents to a regional emergency department or hospital. ACO-participating physicians are identified by the ACO and HIE using unique numeric codes.

When a patient is registered by an emergency department or is admitted to a hospital, the HIE identifies the patient as part of the ACO, reconciles the physician identifiers, and feeds pre-selected PHI to the patient’s ACO physician(s). This information includes the patient’s name, DOB, diagnoses, emergency facility location and contact information, and the time of ED registration and/or admission. The message is delivered from the HIE to physicians via an HL7 or SFTP (secure file transfer protocol) data feed that reaches the mobile vendor’s server through a VPN (virtual private network).
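To illustrate that flow, here is a deliberately simplified sketch of the notification step, assuming a pipe-delimited ADT message and an invented ACO roster. A real implementation would use a proper HL7 parsing library and site-specific field mappings; the sample message, identifiers, and field positions below are assumptions for illustration only.

    # Simplified, hypothetical sketch of the HIE-to-physician notification flow.
    # Real HL7 handling would use a parsing library and site-specific mappings;
    # the roster, field positions, and sample message are illustrative only.

    ACO_ROSTER = {
        "MRN12345": "ACO-MD-0042",  # patient identifier -> attributed ACO physician code (invented)
    }

    def parse_adt(message):
        """Pull a few fields from a pipe-delimited ADT message (naive illustration)."""
        segments = {line.split("|")[0]: line.split("|") for line in message.strip().splitlines()}
        pid, pv1, evn = segments["PID"], segments["PV1"], segments["EVN"]
        return {
            "patient_id": pid[3],    # PID-3 patient identifier
            "patient_name": pid[5],  # PID-5 patient name
            "facility": pv1[3],      # PV1-3 assigned location
            "event_time": evn[2],    # EVN-2 recorded date/time
        }

    def build_notification(adt):
        """If the patient belongs to the ACO, compose the payload pushed to the physician's mobile app."""
        physician = ACO_ROSTER.get(adt["patient_id"])
        if physician is None:
            return None  # not an ACO patient; no notification
        return {
            "to": physician,
            "text": "{} registered at {} at {}".format(adt["patient_name"], adt["facility"], adt["event_time"]),
        }

    sample = "MSH|^~\\&|ED|HOSP\nEVN|A04|201509281200\nPID|1||MRN12345||DOE^JANE\nPV1|1|E|ED^MAIN"
    print(build_notification(parse_adt(sample)))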

In this case, all physicians credentialed by the ACO’s hospital are required to participate in the hospital’s mobile communication platform. The time interval between ED registration or admission and ACO physician mobile notification is measured in seconds. Armed with this information, ACO physicians are able to share key patient information with treating physicians at remote facilities.

Case 2 – Fraud-Protecting Payers and Patients Using HIE Status Notifications

An HIE seeking to expand the scope of its services is developing a mobile patient app that will fraud-protect state Medicaid and its beneficiaries and engage patients. Fraud-protecting Medicaid beneficiaries has the potential to reduce state and federal government annual losses related to fraud. Engaging patients has the potential to improve outcomes, control cost, and improve patient satisfaction. The system will use the HIE’s mobile patient app to authenticate patients and notify patients in real time when a healthcare facility or provider adds, accesses, or requests access to PHI. Patient access to this information requires a paper-based application and considerable time and thus is rarely requested.

Medicaid patients will self-authenticate using the HIE’s secure mobile app. After downloading the app from either the Google Play store or the Apple App Store, patients will register by answering a few simple questions, including their name, date of birth, and state of residence. The mobile app will connect to Equifax, the HIE’s consumer credit reporting agency, which will ask patients up to five personal financial questions. Questions can relate to a patient’s cable television bill and other commonly purchased products and services, which broadens the potential applicability of the authentication process. Once authenticated, the patient’s app is activated and protected by a PIN. Patients can present their activated mobile HIE app when accessing Medicaid services at pharmacies, hospitals, and other facilities to document their identity.

Each Medicaid beneficiary has a unique Medicaid and HIE identifier. When a request by a provider for access to a patient’s health record is received by the HIE or when PHI is added, the HIE will store this information, identify the patient as a Medicaid beneficiary, reconcile the patient’s Medicaid and HIE identifiers, and feed pre-formatted notifications specific to each type of status change to the patient’s mobile app. Examples of patient notifications include:

  • Radiology results have been delivered to your physician’s office.
  • Laboratory results have been delivered to your physician’s office.
  • An admission-discharge-transfer summary from a hospital has been delivered through your HIE.
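A minimal sketch of how those status-change events might map to pre-formatted notifications appears below. The event codes, wording, and send hook are hypothetical, not the HIE’s actual interface.

    # Illustrative mapping of HIE status-change events to pre-formatted patient
    # notifications like those listed above. Event codes and wording are hypothetical.

    NOTIFICATION_TEMPLATES = {
        "RAD_RESULT_DELIVERED": "Radiology results have been delivered to your physician's office.",
        "LAB_RESULT_DELIVERED": "Laboratory results have been delivered to your physician's office.",
        "ADT_SUMMARY_DELIVERED": "An admission-discharge-transfer summary from a hospital has been delivered through your HIE.",
        "RECORD_ACCESS_REQUEST": "A provider has requested access to your health record.",
    }

    def notify_patient(medicaid_id, event_code, send):
        """Push the pre-formatted notification for a status change, if one is defined."""
        message = NOTIFICATION_TEMPLATES.get(event_code)
        if message is None:
            return False  # unknown event type; nothing is pushed to the app
        send(medicaid_id, message)  # 'send' stands in for the HIE's push-notification call
        return True

    # Example usage with a stand-in sender that just prints the message.
    notify_patient("MCD-000123", "LAB_RESULT_DELIVERED", lambda pid, msg: print(pid, "->", msg))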

The patient app will also provide patients with relevant health and insurance information and connect patients to network providers and services.

HIEs, like electronic health records in hospitals and physicians’ offices, are repositories of large amounts of PHI. The goal of realizing value from collecting and storing such data is directly related to how quickly and easily relevant PHI is shared with providers and patients. Relevant PHI that is delivered in real time to engaged physicians and patients has the greatest potential to improve outcomes, control cost, and increase physician and patient satisfaction.

Using HIEs to deliver PHI-related information instantly in user-friendly ways to physicians’ and patients’ mobile devices is delivering on mHealth’s promise of adding value through innovation.

Stuart Hochron, MD, JD is co-founder and chief medical officer of Practice Unite of Newark, NJ.


Readers Write: What Do National Patient Identifiers and Donald Trump Have in Common?

August 24, 2015 Readers Write 1 Comment

What Do National Patient Identifiers and Donald Trump Have in Common?
By Catherine Schulten

Over the past several years (decades?), the call for a national patient ID has moved beyond discontented grumblings by hospital CIOs to a hot button topic that has garnered national attention from the likes of CHIME, HIMSS, the US Congress, and practically everyone with an opinion who is involved in healthcare data exchange.

A HIStalk poll conducted 2/8/15 asked, “Should the federal government issue a national patient identifier?” The response was overwhelmingly yes: 79 percent said yes, while 21 percent said no.

Interestingly, a poll done by the Wall Street Journal asking, “Should patients have unique electronic identification numbers for their medical records?” revealed that 44 percent said yes while 56 percent said no.

Industry leaders who support the use of a national patient identifier point to the use of universal patient identifiers (UPIs) in the UK, Ireland, Canada, and elsewhere. They tout efficiencies gained, increased patient safety, the ability to easily pull together a longitudinal record across disparate systems, lower administrative costs, accelerated medical discovery, and the ability to preserve patient privacy. They also cite patient privacy advocates and the existing ban on any federal funding to study or promulgate a national patient identifier as the reasons why no forward momentum on this issue has occurred.

Those opposed to the national patient identifier typically cite two primary deterrents: patient privacy and the role of the federal government in establishing an agency that has the ultimate authority to create, distribute, and manage these identifiers.

But before we get into the pros and cons of each side in this debate, let’s first agree on a few items that seem to be overlooked when we talk about a national patient ID.

First of all, let’s quit calling it a national patient identifier. In practice, it is actually a national ID. From the moment we are born until the day we die, we all have the potential to be a patient. In the countries that have adopted this type of system, the ID is assigned by the government at birth. In some cases, the ID is used not only to identify an individual for healthcare purposes, but also when securing other government benefits.

Secondly, healthcare is a service that applies not only to US citizens born in this country, but also to others who may be here legally or not. Naturalized citizens, foreign visitors, individuals with work or student visas, and even illegal immigrants would need to use the ID. Otherwise, how does one know for sure whether the Jean-Luc Picard with an ID and the one without an ID are the same or different individuals? For this design to work, an ID process must be supported for non-US citizens as well.

Back to the question at hand: what do national patient identifiers and Donald Trump have in common?

Both are light on details and heavy on promises. We hear what we want to hear when told that a national patient identifier is the only option that solves for true data interoperability, that privacy advocates and their concerns stand in the way of this enlightened future, and that an ID, once introduced, will be used consistently and accurately.

We seem to forget that HIT systems, no matter how well they claim to be protected, are vulnerable to sophisticated security hacks and low-tech identity theft schemes. We forget that healthcare is a service that anyone can secure, even those who purposefully choose to remain anonymous or, in an emergent care situation, are simply unable to provide identity credentials.

But here’s another way that a national patient ID is like Donald Trump. We are fed up with the status quo. We struggle for a way to achieve the promise of unencumbered health information exchange. We’ve invested millions, more likely billions of dollars into the systems and exchanges that are supposed to support data liquidity and yet we still stumble over the seemingly simple matter of accurate patient identification and record matching. We are fed up and we aren’t going to take it any more! We demand action!

As a result, the promise of a national patient ID takes the spotlight and many cycles are spent touting this concept as the deliverance we need. If only the federal government would get its act together and those pesky privacy advocates would quit proclaiming doom and gloom.

However, the truth – as is typically the case – lies somewhere in between.

A national strategy and design for health information exchange that considers the unique challenges of patient identity and record matching is required. Patients must be able to manage their own credentials if they wish to promote or even prevent exchange. Ultimately, we need a design that doesn’t rely solely on a set of individual attributes to properly identify or match the patient (I refer to the oft-cited “Maria Garcia in Harris County, TX” study).
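To see why attribute-only matching falls short, consider this toy sketch of weighted demographic matching. The weights, threshold, and records are invented purely to illustrate the ambiguity; real master patient index algorithms are far more sophisticated, yet face the same underlying problem.

    # Toy, hypothetical illustration of attribute-based record matching and why it
    # breaks down when many people share the same demographics. Weights, threshold,
    # and records are invented for this sketch.

    from dataclasses import dataclass

    @dataclass
    class Record:
        first: str
        last: str
        dob: str
        zip_code: str

    WEIGHTS = {"first": 0.2, "last": 0.2, "dob": 0.4, "zip_code": 0.2}
    MATCH_THRESHOLD = 0.8

    def match_score(a, b):
        """Sum the weights of the attributes that agree exactly."""
        return sum(w for field, w in WEIGHTS.items() if getattr(a, field) == getattr(b, field))

    incoming = Record("Maria", "Garcia", "1985-03-02", "77002")
    candidates = [
        Record("Maria", "Garcia", "1985-03-02", "77002"),  # the right patient...
        Record("Maria", "Garcia", "1985-03-02", "77002"),  # ...or a different Maria Garcia with the same DOB and ZIP
    ]

    # Both candidates clear the threshold, so attributes alone cannot disambiguate them.
    for candidate in candidates:
        print(match_score(incoming, candidate) >= MATCH_THRESHOLD)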

We need visionaries at the table who understand the nuances and challenges and can chart a new path forward. We need to be looking at the role of existing forms of patient identification such as insurance cards, driver licenses, passports, smart cards, and biometrics to assist in the process. National identity standards and concepts such as OpenID and NIST’s Levels of Assurance are paramount to the design. Finally, peer-reviewed pilot studies that reveal the strengths and weaknesses of different approaches will help ensure the best ideas rise to the top.

Catherine Schulten is director of product management with LifeMedID of Citrus Heights, CA.


Readers Write: Connecting Mobile Health and Alarm Safety Strategies

August 24, 2015 Readers Write No Comments

Connecting Mobile Health and Alarm Safety Strategies: A Guide for Hospitals Managing Mobile Alarms and Alerts
By Mary Jahrsdoerfer, PhD, RN


As The Joint Commission’s National Patient Safety Goal on alarm safety inches closer to its January 1, 2016 compliance deadline, hospitals are discovering that long-term, meaningful reductions in alarm-related patient safety risks extend beyond medical device alarms. Although hospitals can satisfy TJC’s alarm safety deadline by presenting a solid strategy for reducing medical device alarms alone, there is an implicit understanding that managing patient monitors and ventilators is only part of a much larger problem related to clinical interruption fatigue.

In addition to medical devices, a comprehensive clinical communications strategy also includes managing the alerts (nurse call, EHR, labs), text messages, and mobile phones/devices that care team members use to facilitate collaboration around any of these patient events. A hospital should certainly follow guidelines that advise changing monitoring leads more often, implementing patient-specific monitoring thresholds, and configuring alarm delays, but these clinical interruptions only target a subset of the overall problem.

Clinical interruptions occur when a nurse performing a patient-related task continues to receive alarms and alerts that an integrated platform could have escalated to another available caregiver. The interruption may be an actionable or even a critical event, but it’s still an interruption if the recipient is unable to respond with the sense of urgency required. Nurses have described frightening scenarios in which they were administering life-saving treatment to one patient while an urgent alarm for another patient blared in the background. This situation could have been easily avoided with automatic escalation of that alarm to the next available nurse.
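Here is a minimal sketch of that escalation logic. The chain of roles and the availability check are invented for illustration; an actual alarm management platform would layer in acknowledgment timeouts, shift assignments, and medical device integration.

    # Minimal, hypothetical sketch of automatic alarm escalation: route the alarm
    # to the first available caregiver in a chain so it is never left ringing unheard.
    # Roles and the availability check are illustrative only.

    ESCALATION_CHAIN = ["primary_nurse", "buddy_nurse", "charge_nurse"]

    def route_alarm(alarm, is_available, deliver):
        """Deliver the alarm to the first available caregiver in the escalation chain."""
        for role in ESCALATION_CHAIN:
            if is_available(role):
                deliver(role, alarm)
                return role
        deliver(ESCALATION_CHAIN[-1], alarm)  # fall back to the end of the chain so no alarm is dropped
        return ESCALATION_CHAIN[-1]

    # Example: the primary nurse is tied up with another patient, so the alarm escalates.
    alarm = {"patient": "Bed 12", "type": "SpO2 low", "priority": "high"}
    route_alarm(
        alarm,
        is_available=lambda role: role != "primary_nurse",
        deliver=lambda role, a: print("{} for {} -> {}".format(a["type"], a["patient"], role)),
    )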

Preventing alarm collisions requires a holistic approach to managing clinical communications that must necessarily include the full spectrum of patient events. The challenge is integrating each system in each unit without overwhelming clinical users. Assimilation requires collecting input from affected users, measuring alarm and alert activity, and ensuring the right workflow.

The Joint Commission has provided a starting point for hospitals that are serious about reducing alarm-related patient safety risks. Middleware is the foundation upon which medical device alarm management is built — hospitals must utilize an FDA-cleared platform to deliver alarms to recipients on mobile phones. A long-term alarm safety strategy includes integrating all of a hospital’s clinical systems, which will require planning beyond TJC’s NPSG deadline.

The overall aim of TJC’s alarm safety goal is to reduce medical errors related to medical device alarms, but nurses realize that the broader issue of interruption fatigue is a consequence of many workflow and communication inefficiencies. My admonition for hospitals grappling with the alarm safety mandate, HIPAA compliance in text messaging, nurse call and EHR alert management, and smartphone and mobile phone deployment is to view them as subsets of the same communication architecture that require a common foundation to solve.

Mary Jahrsdoerfer, PhD, RN is CNO at Extension Healthcare of Fort Wayne, IN.


Readers Write: Interesting Times for ePA and EPCS

August 12, 2015 Readers Write No Comments

Interesting Times for ePA and EPCS
By Connie Sinclair, RPh


These are interesting times in the e-prescribing world. Readers may recall that NCPDP approved the SCRIPT electronic prior authorization (ePA) transaction standard in July 2013. Several state legislatures have passed laws requiring support for the NCPDP ePA transaction 24 months after its adoption, which brings us to right now. With Missouri’s new e-prescribing of controlled substances (EPCS) rules becoming effective July 30, EPCS is allowed in all 50 states and DC, with only Vermont holding out on EPCS of Schedule II substances.

Having been involved with electronic prescription applications for almost 30 years, and today tracking what states are legislating and regulating, I find it gratifying to see the tremendous progress the industry has made in adoption and in advancing e-prescribing to become the standard of care. Even EPCS, which was considered one of the biggest hurdles, is now legally a done deal.

EPCS not only provides better tracking and deterrence of controlled substance diversion and abuse, it also helps patients get the medication they need in a more timely manner. Orthopedic patients no longer have to hobble into their surgeon’s office when they need pain relief prescriptions. Adults and children who are on ADHD meds can also avoid unnecessary trips to the doctor for maintenance med prescriptions.

Some might say, “Whew, we’re done.” Not so fast. Although EPCS is allowed and transaction flow for controlled substance prescriptions is certainly increasing, we still have a long way to go to get adoption to levels that equal non-controlled substance prescribing. Anecdotal evidence suggests that many practices are unaware of the legality of EPCS, most likely because they do not yet have access to an EPCS-certified version of their EHR. Also, many states are watching New York very closely to see the impact of its mandatory e-prescribing and EPCS requirement effective March 2016, and some are expected to follow suit.

While EHR vendors have been slogging their way through MU requirements, industry stakeholders, prescribers, standards organizations, and legislators have been busy advocating for prior authorization (PA) reform. As I am sure most HIStalk readers are aware, the traditional and cumbersome PA process is a huge sore spot for prescribers and patients alike who believe it hinders patients from getting needed medication per their doctors’ orders. Over the last few years, state legislators have taken note and have approved new laws requiring reform and automation of the process. 

A big part of my current job is to monitor state law. Seven states had July 2015 effective dates for some level of support for electronic prior authorization. In addition, four states already require electronic submission of the PA form and three additional states have laws on the books with future effective dates regarding ePA support. As always, the devil is in the details as each state has a different interpretation of what constitutes an electronic prior authorization. Most states impose the requirement to support ePA on the health plan, but EHR vendors should take note because at least two states impose the requirements on the provider.

As patients and as caregivers for patients as well as EHR stakeholders, we should all be encouraged by the progress of the ePA and EPCS initiatives and do what we can to keep things moving along in the right direction.

Connie Sinclair, RPh is director of the regulatory resource center of Point-of-Care Partners of Coral Springs, FL.


Readers Write: The Bon Secours Health System Convenes to Review the SAFER Guides

July 29, 2015 Readers Write 4 Comments

The Bon Secours Health System Convenes to Review the SAFER Guides
By Patricia P. Sengstack, DNP


Patient safety – have we fixed that yet? Apparently not. Fifteen years after “To Err is Human” was published, we still see errors leading to adverse events in our healthcare settings.

So let’s rely on health IT to take care of the problem. Hmmmm…. It seems that health IT can actually lead to new types of errors when not configured or implemented well. I liken it to a game of Whack-A-Mole. As a new error attributed to health IT arises, we change the system or a process to make it go away. Then a new one that we hadn’t considered pops up that we have to address: orders are written on the wrong patient, a default value is provided for a medication that is inappropriate for a patient in renal failure, a result from an outside lab is manually transcribed incorrectly into a patient’s electronic record.

As we deal with each issue, we hope to become a learning health system, continuously improving to ensure our patients get the best and safest care possible. In looking for resources to support continued safety improvement efforts, we see tools emerging from our industry experts and researchers.

One such tool is the SAFER Guides, a collection of nine self-assessment checklists available on ONC’s website that cover safety-related areas such as patient identification, system-to-system interfaces, CPOE with CDS, and high-priority practices. If you’ve read the recent Sentinel Event Alert (#54) published by The Joint Commission, you know it recommends that organizations develop a proactive, methodical approach to health IT process improvement that includes assessing patient safety risks using tools such as the SAFER Guides.

To do just this, a multi-disciplinary team from across the entire Bon Secours Health System convened to perform a self-assessment and determine areas for health IT safety improvement using the High-Priority Practices SAFER Guide. We wanted to see what this guide was all about and decide if we wanted to move forward with reviewing the other eight guides.

The High-Priority Practices guide consists of 18 evidence-based recommended practices and includes examples of how successful organizations have improved patient safety in each area. A rating scale for each practice allows organizations to identify areas of vulnerability and helps prioritize follow-up activities. The ratings are Fully Implemented in All Areas, Partially Implemented in Some Areas, and Not Implemented.

Since this was the first exposure to the SAFER Guides for almost everyone gathered in the room, our intent was not to create a to-do list with assigned resources for follow-up, but simply to review the guide as a group of stakeholders, understand the guides’ intent and how to use them, and determine next steps. We had about 25 people in the room who represented clinical, IT, informatics, and patient safety roles from our entire 14-hospital system.

We started with a discussion of recommended practice #1, “Data and application configurations are backed up and hardware systems are redundant,” then moved on to the next one, and so on. Every single recommended practice generated at least 20 minutes’ worth of discussion – all good. We had only gotten through recommended practice #11 when time ran out.

Not one of the recommended practices was scored as Fully Implemented in All Areas, but some were almost there. Those were the shorter discussions. We found ourselves wishing that there was another ranking in the scale. If just about everything is “partial” without any differentiation of “partiality,” then it’s hard for an organization to prioritize which partial recommendation to tackle first, second, third. In other words, if we checked off everything as Partially Implemented, where do we focus?
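One way to get that missing differentiation, sketched below, would be to score each practice by the share of areas in which it is implemented so that “partial” results can be ranked for follow-up. The practice names and counts are invented for illustration, not our actual assessment results.

    # Hypothetical sketch of a finer-grained scale: score each recommended practice
    # by the fraction of areas where it is implemented so partial results can be
    # ranked for follow-up. Practices and counts below are illustrative only.

    assessments = [
        {"practice": "Data and configurations are backed up; hardware is redundant", "areas_implemented": 12, "areas_total": 14},
        {"practice": "Standard terminologies (SNOMED, LOINC) used for coded data", "areas_implemented": 6, "areas_total": 14},
        {"practice": "Evidence-based order sets available for common conditions", "areas_implemented": 10, "areas_total": 14},
    ]

    for item in assessments:
        item["coverage"] = item["areas_implemented"] / item["areas_total"]

    # Lowest coverage first = highest priority for follow-up.
    for item in sorted(assessments, key=lambda x: x["coverage"]):
        print("{:>4.0%}  {}".format(item["coverage"], item["practice"]))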

I believe the group felt that the guides were validating. Never before in one place have they seen the importance of their work in black and white with references in a concise checklist. They may have heard that a particular practice was the right thing to do, but having it in this tool provides the necessary focus on things that sometimes get pushed to the back burner for system enhancements that are a bit more sexy and innovative. The list below represents highlights from our self-assessment discussions as well as some questions generated. These will help us to provide focus over the next several months:

  • Backup systems are currently adequate. In process of moving some backup systems to a more remote location.
  • Every downtime is different. If you’ve survived one downtime, you’ve survived one downtime.
  • We need more practice at downtime – decision making, communication, and improvements to downtime forms. If only interfaces are down, should we take the system completely down for all users?
  • Where appropriate, we need to ensure we are using SNOMED/LOINC terminologies; this needs assessment. Are there free-text areas that could be coded?
  • Some of our naming conventions in radiology are unclear, making order entry problematic and error prone. We need to review and make improvements.
  • How much do we police physician use of evidence-based order sets? Do we force their use without exception?
  • Pharmacy build team embraces ISMP guidelines.
  • How do we get our vendor to help us make improvements using this guideline? They should be at the table with us during the next discussion.
  • End user acceptance testing as well as production validation testing are happening, but think we can improve. Problems occur when using test patients in production. (Do not assume there are no real patients in the system with the last name “Test”).
  • We strongly recommend using the patient’s picture for identification. If the system allows it, we should implement (and we have started in our inpatient settings).
  • Usability of the system can be improved. Some of the language is not clear to the end-user, making it misleading while charting. Need more inclusion of end users at both the vendor and organization level during design sessions.
  • We need to develop a “Top 10 Optimization List” based on our safety review.
  • Need better method to assess end user proficiency in order to develop effective, ongoing training programs.

At the end of the session, the group wanted to set up times to complete the remainder of the recommended practices in the High Priorities guide and then move on to the Organizational Responsibilities guide. We have the next date scheduled and will continue our review.

At no other time in our organization’s history have we convened to solely discuss health IT safety. This exercise using the SAFER guide has provided the impetus leading to valuable discussions that are only the beginning of this journey to improved patient safety.

Patricia P. Sengstack DNP, RN-BC, CPHIMS is CNIO of Bon Secours Health System of Marriottsville, MD and immediate past president of the American Nursing Informatics Association.


Readers Write: My EHR Vendor is Losing Market Share – What Should I Do?

July 29, 2015 Readers Write 1 Comment

My EHR Vendor is Losing Market Share – What Should I Do?
By Jason Fortin


These are turbulent times for many EHR vendors. In fact, according to a 2014 report from KLAS, only three vendors – Epic, Cerner, and Meditech – gained hospital market share in 2013; everyone else lost more hospital customers than they won.

What should you do if your EHR vendor is one of the many that is losing market share?

Understand the market dynamics. The reality is the EHR market is shifting quickly right now, with rapid consolidation and distinct winners and losers. A number of vendors are losing customers, but there are many reasons hospitals and health systems decide to change their core EHR. Some of the shift in EHR market share is due to justified concerns about the long-term viability of certain vendors, but increasingly, it is also a result of other factors, such as recently-merged hospitals and health systems looking to align on a single EHR.

Ask the tough questions. Go beyond the headlines and try to determine why your EHR vendor is losing market share. Are these things that can change? For example, is the loss of customers a result of the vendor’s lack of executive leadership and vision? Or is it more due to the current features and functionality of the product?

It is also important to look at what types of customers the vendor is losing and how fast the attrition is happening. Are clients being lost only in a specific segment outside the vendor’s target market (such as smaller community hospitals or large AMCs)? Or are all types of customers looking to switch?

Lastly, evaluate the level and immediacy of risk. Is the loss of market share so severe that the vendor could go out of business in the next one or two years?

Don’t panic, but evaluate if your needs are being met. Look at all the factors involved. Even if your vendor is losing market share, consider how their product specifically supports your business and clinical needs right now. Do they have a clearly defined plan to support your business and clinical needs in the future?

Also consider what your vendor offers in the context of what it will take to stay competitive in your market. For example, “interoperability” is an important characteristic, but it is far more important to have a system that can exchange discrete data with the specific EHRs that are predominant in your region.

Take an objective look at the alternatives and make a decision. Evaluate the market, looking at other core EHRs as well as applicable niche solutions to get a sense of different approaches to the functionality that is most important to you (e.g., data exchange, population health). Compare those to your current EHR and be honest in terms of which capabilities represent a significant improvement over what you have, which are essentially a trade-off, and which might be nice to have but aren’t critical to achieving your specific business and clinical goals.

If you decide to leave your vendor, carefully consider your options for selecting a new one. One course of action is a full system selection, which involves a thorough and comprehensive look at multiple solutions (including detailed demos and interviews), but may not be practical from a timing perspective or in cases when a replacement is urgently needed. An alternative is a “null hypothesis” selection. This approach starts with the best potential fit based on your scan of market leaders, then puts that one “null hypothesis” vendor through an expedited selection process in which you try to disprove the assumption that it is the right EHR for your organization.

The bottom line is loss of market share is a valid reason for customers to be concerned about their core EHR vendor. In some cases, it is sufficient cause to begin looking at a potential replacement. But it is also important to look at why a vendor is losing customers and to objectively look at your current system and the alternatives in the context of what your organization will specifically need to remain competitive in your market. Committing to an EHR vendor is a big decision, and unfortunately in the current landscape, it is not a decision hospitals and health systems can afford to get wrong.

Jason Fortin is senior advisor with Impact Advisors of Naperville, IL.


Readers Write: Meet Generation Z

July 29, 2015 Readers Write No Comments

Meet Generation Z
By Frank Myeroff


The next generation to enter the workforce has been coined “Generation Z” or “Gen Z.”  Gen Z refers to the group of people born after the Millennial Generation.

There is no agreement on the exact range of birth dates. According to Wikipedia, some sources start this generation in the mid or late 1990s, while others start it in the mid-2000s. Right now, Gen Z comprises about 7 percent of the workforce, but by 2019 it is estimated that 30 million of them will be employed.

As the father of two Gen Zers, I can tell you that not only is this generation the most digitally connected, but they have no concept of life before the Internet, mobile devices, digital games, or iTunes. This screen-based generation uses technology as a tool to communicate, share information, be entertained, receive and complete school assignments, obtain breaking news, and so much more in every aspect of their lives.

What do we as HIT executives and hiring managers need to know about Gen Z’s arrival in the HIT workplace?

  1. Expect leadership to be transparent. Because Gen Z knows the power of sharing and openness, they want leaders to be honest and forthcoming. There will be no place to hide for inept leaders.
  2. Expect leaders to provide immediate results. Gen Z is used to real-time information and moving at a fast pace. They want leaders to offer exposure to new HIT projects as well as show them how to attain a high-level position in a short period of time.
  3. Plan on entrepreneurial spirit. Seventy-two percent of Gen Z expects to create and run their own startups at some point in their career. This means heavy competition. Organizations will not only have to compete against each other for talent, but against entrepreneurial startups.
  4. Derive possible cost savings. Expect a savings by hiring Gen Z. Since they’re transient and want to work remotely from any location in the world, you’ll probably save on office space, infrastructure, and relocation costs.
  5. Anticipate faster and easier access to healthcare. From my perspective, and given their use of technology, Gen Z knows that faster and easier access to healthcare is all about the adoption of emerging technology. They will expect better technical assistance and training and the adoption of HIT best practices in order to transform access to American health care. In addition, Gen Z will demand higher-quality infrastructure and more efficient operational systems that support better quality of patient care.
  6. Expect higher education. For the most part, when talking to Gen Z, they plan on traditional college careers, but it’s as much for the social benefits and networking connections as it is for honing IT skills. After graduation, most plan to pursue further education, and many plan to accomplish this through online learning.
  7. Plan for an idealistic generation. They want to change the world, feel that their work in the HIT profession is of value to society, and love the idea of volunteer work, which many are already doing.

As more information about Gen Z emerges, it’s most interesting how they differ from other generations, including the Millennials. What will it take to attract and retain Gen Z HIT Professionals?

  1. Create a young professionals employee group. Starting an employee group for Gen Z will engage and empower these individuals to become future leaders by providing personal and professional development opportunities. Within this group, encourage networking and civic involvement.
  2. Provide the latest and best technology. Gen Z is accustomed to having the latest and greatest technology. They’ve been raised on smartphones, laptops, desktops, iPods, etc., and using multiple screens is the norm. Therefore, to get their attention and keep them happy, continuously invest in new technologies and provide Gen Z with the tech tools that will engage them and make them more successful.
  3. Provide a career path that is tailored to them. As we know, the HIT industry is exploding, which is creating all kinds of employment opportunities. In order to attract and retain Gen Z, offer them a broad range of areas within your organization where they can specialize and succeed. Think about tailoring positions that leverage Gen Z’s quick adoption of technology and their desire to move up quickly.
  4. Expand flexible work hours and remote connectivity. As the tools and technology evolve, make it part of your culture to allow remote participation in meetings. Think about embracing Web-based video conferencing and online meetings if you haven’t already.
  5. Offer coaching and mentoring. Gen Z expects your organization to offer formal coaching and mentoring programs. They will especially need training in interpersonal skills and communication.  They are so accustomed to communicating through the use of technology that most could use pointers on how to have an effective face-to-face dialogue.
  6. Refresh your rewards and/or recognition programs. Gen Z professionals need more rewards and recognition programs than any other generation. They look for accolades on even minor accomplishments. You will need to reward often and keep changing the rewards program to keep up with their expectations.

Generation Z is quickly approaching and they’re ready to live and compete in the digital world like no other. This technologically savvy and extremely innovative generation feels that they can achieve anything and they will expect your HIT organization to support them and provide growth opportunities or risk losing them.

Frank Myeroff is president of Direct Consulting Associates of Cleveland, OH.


Readers Write: WikiLeaks for Healthcare

July 16, 2015 Readers Write 11 Comments

WikiLeaks for Healthcare
By Todd D. Johnson


Did you feel the earthquake that hit healthcare this week? If not, you weren’t paying attention. On Monday, ProPublica, an independent, non-profit newsroom, published Medicare data about complication rates for surgeons and hospitals across the country. For the first time ever, the complication rates are reported all the way down to the individual surgeon.

This is big, sort of a WikiLeaks for healthcare. ProPublica has also used the stage to call out some of the historically best-regarded healthcare institutions in the world on their outcomes.

Let’s agree that the train of transparency and value-based healthcare has already left the station. If you didn’t believe it last Wednesday — when the Secretary of HHS, Sylvia Burwell, sent a strong market signal accelerating the movement requiring physicians to provide a “warranty” for their services — believe it today. Now any individual (patient or referring physician) can (and should) look up complication rates by surgeon and by hospital as they shop for doctors.

Furthermore, any payer, medical malpractice carrier, and any physician-employing entity can use the same data to negotiate reimbursement, premiums, and employment terms, respectively. This changes everything.

Arguably, #SurgeonScorecard is simultaneously the best thing and the worst thing that could have happened for our healthcare system right now. Any physician or hospital that somehow thinks that the market economics aren’t going to shift needs to wake up. They can neither ignore nor hide from these data. Even if payers aren’t going to mandate change, patients ultimately will, and both patients and payers now have the tools at their disposal to do so. Furthermore, these data are public, and physicians and/or hospitals no longer have the only key to the safe containing physician-level outcomes.

Like every empire, the days of Fortress Medicine are now numbered. Those provider organizations that can find and use new tools to help them learn, improve, adapt, and evolve will survive, and those that don’t will succumb to market forces. Ultimately, this is the best thing that could have happened because it will lead to greater patient safety and improved outcomes.

It also might be the worst thing that could have happened to healthcare today. Change is difficult for many of us, including physicians, and therefore it’s hard to adapt and improve. Really hard. Physicians have lost control of their practices and their data. Just ask them. The data held in EMRs and claims data sets are not at the fingertips of the very practitioners who need them.

Furthermore, the retrospective data are being used by third parties to tell physicians about their performance and about how much or little they may be entitled to for reimbursement. This is to say that the data are often used to hold the providers hostage.

Physicians need new tools to empower themselves by using their own data. These tools must enable them to proactively treat patients in higher-performing ways at lower costs. Just as ProPublica uses big data to learn about populations, physicians need to use their own data to gain more insight, work more efficiently, and get better outcomes and measure them. But their data are either sitting behind the walls of Fortress Payor or sitting in EMRs (otherwise called “wait-a-bases”).

Just ask a doctor how easy it is to get data from his or her IT department these days. Most physicians haven’t the slightest idea how to access their own data. This is an unacceptable situation.

How can we ask providers to improve when they are flying blind? You may ask, “But didn’t we just spend $40B of taxpayer money to subsidize the purchase of fancy EMR systems to help providers improve performance?” Sadly, the flaw is that those EMRs can’t tell them which patient is sitting at home three days after surgery with a fever and early signs of an infection. The EMR isn’t telling them that their patients aren’t performing their daily at-home exercises to reduce their risk of blood clots. The EMR isn’t telling them that the patient doesn’t understand the treatment plan or isn’t adhering to it. At best, the EMR is only telling them that the patient was admitted to the emergency room after developing an infection.

As its title indicates, the EMR is merely a “record” of that which has already happened, rather than a live stream of what is happening or a forward-looking tool for what is likely to happen. But more than that, EMRs only “know” the events that occur within the system in which they record.

Research out of the University of California, San Francisco, reported last month in the Annals of Internal Medicine, showed that within three days of an emergency room visit, one-third of all return visits happened at another institution. This is worth repeating. One-third of all patients returning to an emergency room within three days of an initial visit go to the emergency room of a different hospital.

This means that there is tremendous, costly leakage outside of the walls of the index institution, and the events (and associated costs) around that leakage are unknown to the physician until the claims data (with reduced reimbursement) appear at the door. Just as patients are gaining increasing access to their own data, so too is there a need among providers and provider organizations for self-directed visibility into their own data.

Now is the time for physicians to challenge themselves and their organizations to embrace value-based agreements. They should advocate for transparency, not hide from it. Some providers and provider organizations will undoubtedly feel victimized by these new and very transparent scorecard data. But those who see the opportunity will realize that with the right tools in place, these data can be extraordinarily empowering.

Digital engagement platforms with real-time patient reporting built right into the physician workflow can empower healthcare organizations and providers to glean insights about their patients like never before. Furthermore, healthcare organizations needn’t wait to be told by payers what their complications are (and correspondingly what their value-based reduction in reimbursement may be) when they can learn from other digital sources exactly what their complication rates are in real time.

The Surgeon Scorecard is a wake-up call for providers to empower themselves to control their own data and not be victims of it. This is the time for them to critically evaluate and invest in new ways to deliver care that leverage the latest digital health tools, remote monitoring, and data analytics.

Years from now, we may well find ourselves referring to the new era after this pivotal moment in healthcare as “Life after the Surgeon Scorecard.” Really, the surgeon scorecard is just the beginning. Next, we will see similar reports extending beyond the eight elective surgeries covered in the ProPublica article. After that, we will see similar data reported for physicians in non-surgical specialties.

Just as the consumer market has been dragging Fortress Medicine into the digital health era, so too is consumer demand for transparency about physician performance dragging data such as those reported today into full view. But physicians themselves are the ones at risk of being left behind. They must become proactive rather than reactive consumers of their own data, adopting and utilizing any number of the emerging workflow-friendly digital health platforms that put the data right into their hands.

Todd D. Johnson is chief executive officer of HealthLoop, Inc.


Readers Write: How Healthcare Providers Can Get Paid in the Mobile Age

July 8, 2015 Readers Write 3 Comments

How Healthcare Providers Can Get Paid in the Mobile Age
By Tom Furr


Two-thirds of all Americans aged 18 to 29 and nearly 60 percent of those between 30 and 49 years of age use a smartphone, according to a recent study by the Pew Research Center. In addition, the study found about 30 percent of Americans perform banking tasks – like paying bills – via their smartphones.

What does that have to do with your medical practice, you may ask? How well you understand the dynamics of mobile technology and its use in our society has a bearing on your practice’s survival. The management consultancy Deloitte noted that “overall preferences are trending toward mobile use” as it relates to getting information, buying, and paying for things. We can add paying for healthcare.

If there has ever been a reason to finally abandon that creaky old paper-based billing system, it is the ubiquity of mobile devices: smartphones, tablets, and even basic mobile phones. Most sources report that more than 90 percent of Americans own a cell phone.

Americans prefer to get their bills online and are far more likely to pay them quickly, if not immediately. If you’re sending statements out in paper form, the third time is truly the charm. The Medical Group Management Association calculated that doctors’ offices must send out more than three statements before receiving any payment for services provided.

It’s high time you stopped licking stamps and started billing electronically, with email alerts sent to your patients. If you’re already using some kind of online bill pay method, understand that your patients are moving away from the desktop to mobile devices. Adestra, an online marketing firm, found 48 percent of email opens occurred on mobile, 36 percent on desktop, and 19 percent in a webmail client.

Litmus, an email testing and analytics company, reported earlier this year that more email is read on mobile than on desktop email clients. It, too, indicated that about half of all emails are opened on a mobile device. Of the 900 million Gmail users worldwide, 75 percent use their accounts on mobile devices.

Campaign Monitor, another email specialist, noted that mobile email opens have grown 180 percent in three years, going from 15 percent in Q1 2011 to 42 percent in Q1 2014.

The changes that have occurred to this country’s healthcare ecosystem in just the last three years have had — and continue to have — profound impact on every person touched by the industry.

The increase in patient responsibility – or should I say liability – for medical debt has created unprecedented revenue pressure on doctors, clinics, and hospitals. Oddly enough, this intense pressure has not prompted a swift change in most healthcare providers’ mode of operating. A study by JP Morgan noted that healthcare providers have been late to turn their focus from clinical applications to their revenue cycle, collections, and payment processing modules. What’s more, this research determined healthcare providers “need to interact with patients in a more direct collections relationship” but “are not providing the level or sophistication of payments services that consumers expect.” This study also observed that “the healthcare industry, as a whole still transacts with high volumes of paper.”

Six years ago, a McKinsey survey of retail healthcare consumers showed that 52 percent of respondents would pay from $200 to $500 or more by credit or debit card when they visit a physician if an estimate was provided at the point of care. It appears consumers are not so much unwilling to pay as they are unwilling to pay blindly.

Your patients are telling you what to do. Make payments more convenient and less confusing. Start by moving from paper to electronic and on to mobile.

Whether you go the route of email to a secure website or a mobile application, recognize that you’re dealing with a screen no more than a couple of inches wide and maybe three or four inches long. More than being “mobile friendly,” your efforts here need to show you’re mobile savvy.

Everything you do for the mobile environment must be simple and with a clear purpose. Simple because there are some technical limitations the wireless infrastructure forces us to handle. Clear because the viewing area is not very big. Intuitiveness is a must. One reason e-retailers are seeing a bump in abandoned shopping carts is their sites and apps aren’t developed with mobile in mind first.

Get the right message presented in the right way to your patients and they will see it on their phones and take action right then. After all, in this mobile age, people check their phone about 150 times a day. It’s how they operate.

Tom Furr is founder and CEO of PatientPay of Durham, NC.


Readers Write: Building Pillars of Success on a Foundation of Failures

July 1, 2015 Readers Write 9 Comments

Building Pillars of Success on a Foundation of Failures
By Randall N. Spratt


As the days fly by toward my retirement later this year, I’ve spent some time reflecting on my 40-year career in information technology. It feels like just yesterday I was receiving my diploma from the University of Utah, eager to jump into my career and make my mark. As college grads begin to enter the workforce, I hope that sharing my path and insights may help them build the foundation of their own leadership aspirations.

I started my technology career as a junior Fortran 77 programmer. I was good — I mean really good. I could write 10,000 lines of code without ever writing down an idea. I could produce a bug-free, error-free compile the first time. I was so good that I was quickly promoted to manager. However, it turned out that being a good programmer did not mean that I was a good manager.

On the brink of retirement, when I look back at my career, I realize that I built pillars of success on a foundation of failures. In my first management position, I still behaved like a programmer: I would tell everybody how to program, and when they failed, I would just do it for them.

I found myself working harder and being less effective because I wasn’t managing — I was doing. Somewhere along those first few management jobs, I had my first ah-ha moment: it was my job to deploy resources to help people do their jobs, not to tell or simply do.

Strong leaders know when to let go. They are effective in sharing a common vision with others and they make conscious — and sometimes difficult — decisions about what they do with their time.

As a programmer, I had 100 percent control over what I did at work. Every single line of code came out of my hand. No one else had anything to do with whether or not the program worked. Now, as a CIO and CTO, I have absolutely no control over anything. It has been a steady process of learning to relinquish control and replace it with influence and coaching while providing opportunities to collaborate as a team. 

It took me some time to realize this, but as soon as I did, it immediately strengthened my management skills and things got a lot easier. Eventually, I began to spend more of my time traveling to our customers’ locations to install laboratory information systems. While on site, I gained a better understanding of the customer’s needs. I realized that what I was installing wasn’t necessarily what our customers wanted. To help solve this problem, I wrote more code. I felt that I knew what the end users wanted better than anyone else in my own company.

Once again, I began to fail because I took my eye off of the job of management. I was now a manager of managers. My job was to make sure that our customers were well served and that their voice was heard. The answer wasn’t to write more code — the answer was to relay information gleaned from the customer to the groups I managed so that we shared a common vision, a common set of goals, and a common understanding about what we were trying to accomplish for the customer.

It was very time consuming. The more responsibility I got, the more work there was to do, the more people there were to talk to, the more relationships there were to build, the more details there were to cover, the more people there were to appraise, the more raises there were to give.  Everything took more and more time.

This led to my second ah-ha moment: work is part of life but, for some people, work is life. My career and leadership path would depend on how well I knew myself and how I decided to spend my time.

No matter where we are in our careers, we all have one thing in common — we have only 24 hours in every day. No more, no less. After choosing to spend some number of those hours asleep, our paths diverge. We choose when we wake up and we decide what to do once we’re awake. Some of us wake up earlier and choose to go running, while others start later and sit with the paper and coffee. Some fire up email, some talk to a spouse or a friend. But each one of us makes choices about how to use our time.

At that point in my career, I discovered I would never understand the term work-life balance. It is not about balance, it’s about choices, decisions, and how you choose where to spend your 24 hours. Sooner or later you are going to be faced with tradeoffs and decisions. You can’t be a top developer or a CIO of a company and think that you’re still going to service every hobby, every person, and every relationship in your life in the same way. 

I created the time to be a leader in my field and I often had to give things up. Throughout the years, I gave up sports and many hobbies. As I began to have children, I chose to spend more time with my family and gave up time with friends. These choices were made consciously, with a deep knowledge of myself and a realization that although I was letting go of some things, I was gaining others.

As I look back at my career, I can recall many choices — some lucky, some wise, some painful, and some necessary. Writing code was easy — just me and the keyboard. The results spoke for themselves. Cultivating the skills to become a leader was much more subtle and nuanced, but in many respects, far more rewarding.

Randy Spratt is CIO and CTO of  McKesson Corporation.


Readers Write: How to Sell to MD Anderson

June 17, 2015 Readers Write 2 Comments

How to Sell to MD Anderson
By Niko Skievaski


Last Wednesday, I had the pleasure of attending MD Anderson’s IS Vendor Summit in Houston. Imagine a room of 200 enterprise sales executives at the edge of their seats listening to how MD Anderson’s transition to Epic may or may not affect their prospects with the world’s largest cancer center. The usual conversations were accented by beads of sweat organizing in military formation on the tips of noses, bayonets at the ready.

CIO Chris Belmont and his team transparently outlined how they plan to transform the patient care experience. Their vision includes bringing the patient’s overall experience up to par with the world-class care that patients expect. This is along the lines of Branson’s "Virgin Way," in that the service experience begins when a customer starts thinking about your product, not simply when interacting with it.

From the cancer center’s perspective, this experience starts when a patient is diagnosed and gets home to Google for the best place to get treatment. It continues through each encounter at the hospital, including driving directions, parking, way-finding, and waiting rooms. After the treatment (which is the actual product), the experience needs to go home with the patient as they transition to becoming a survivor.

The good news for us: this will take a lot of technology and most of it falls far outside the functionality provided by the EHR. Jeff Frey leads the Digital Experience and has taken on the role of the true cowboy at the organization. When the room was asked, "Who in here hasn’t worked with Jeff?" we fell silent, either because we all had or we were too ashamed to admit we hadn’t. Needless to say, Jeff and his team need to wrangle what will be hundreds of software vendors into a coherent digital strategy to present a seamless experience for patients. (FYI – iPads seem to be the chosen hardware.)

This requires collaboration. That brings me to the key to selling to MD Anderson, as I understand it. Here it is, summarized, enhanced, and optimized for effectiveness.

How to Sell to MD Anderson

Stop pitching us on how your product will save healthcare. Pitch on how your product will fit into our goals for the digital patient experience. You won’t be able to do it alone. You need to collaborate with other vendors, so talk to each other. You may be competitors on the trade show floor, but in here, you’re part of our vision. Work together and solve these problems. Don’t make us stitch it all together. Don’t give us yet another analytics dashboard — we won’t use it. Give us an API and integration plan. Your chances of landing a meeting dramatically increase with the number of vendor-collaborators you bring with you.

Anyone want to collaborate?

Niko Skievaski is  co-founder of Redox.


Readers Write: Defining Our Terms: Does Anyone Know What an "Open EHR" Really Is?

June 16, 2015 Readers Write 4 Comments

Defining Our Terms: Does Anyone Know What an "Open EHR" Really Is?
By Dean F. Sittig, PhD and Adam Wright, PhD


Adapted from “What makes an EHR ‘open’ or interoperable?” J Am Med Inform Assoc 2015. Available at: http://jamia.oxfordjournals.org/content/early/2015/06/13/jamia.ocv060.

There’s been a lot of talk lately about “open” EHRs, ranging from Congressional hearings to industry buzz. Last summer, Mr. H challenged his readers with, “What core set of published standards or capabilities must a given EHR support to be considered open?” We thought this was a great question, so we decided to give it a try.

First, “open” does not mean “open source.” Although open source software is of great value, an EHR can certainly be open without being open source.

We’ve also noticed that some commentators equate openness with the platform the software is built on, and specifically, that systems which use relational databases and support SQL (structured query language) are inherently more open than those that use hierarchical databases (e.g., Caché). We think this is a distraction, too – you can make closed systems on SQL or open systems on Caché.

Regardless of the database technology (relational, hierarchical, object-oriented), data exchange with another application requires significant effort to transform the data into an agreed-upon format with agreed-upon meaning. This transformation must take into account the data’s syntax (the format), semantics (the meaning), and pragmatics (the way the data are used in context to create a meaningful clinical application). The internal representation of the data, in either the sending or receiving EHR, is largely immaterial.

We decided to organize our definition of open around five use cases, which we refer to as the EXTREME criteria (short for EXtract, TRansmit, Exchange, Move, Embed):


An organization can securely extract patient records while maintaining granularity of structured data (a brief sketch follows this list).

  • Secure login and role-based access controls.
  • Structured data importable programmatically into another database (unstructured formats, such as PDF, do not suffice).
  • Audits of extracted records.
  • Sufficient metadata included in the extract to ensure interpretability, e.g., units and normal ranges for lab results.
  • Freely available data dictionary indicating where data are stored and what they mean.
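
As a minimal sketch of what these criteria ask for, consider a single lab result exported as structured, machine-readable data with its metadata intact. The field layout below is illustrative only (Python, not any particular EHR’s export format), but it shows why a coded value with units, a reference range, and timestamps can be imported programmatically while a PDF rendering of the same result cannot.

import json

lab_result = {
    "patient_id": "12345",
    "code": {"system": "LOINC", "value": "2345-7",
             "display": "Glucose [Mass/volume] in Serum or Plasma"},
    "value": 108,
    "unit": "mg/dL",                      # units included so the receiver can interpret the number
    "reference_range": {"low": 70, "high": 99},
    "collected": "2015-06-10T07:42:00Z",  # timestamp metadata preserved
    "source_system": "LabSystemA",
}

# A structured extract like this can be loaded directly into another database;
# the same result flattened into a PDF loses all of this granularity.
print(json.dumps(lab_result, indent=2))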

An authorized user can transmit all or a portion of a patient record to another clinician who uses a different EHR or to a personal health record of the patient’s choosing without losing the existing structured data.

  • Data selection methods that allow users to identify which data to include or exclude.
  • Standard method to structure data (e.g., C-CDA) or portions thereof (e.g., DICOM, e-prescribing).
  • Standard methods used to describe the meaning of the data (i.e., controlled clinical vocabulary used). Note: conversion of structured data to an unstructured format such as PDF would not meet these requirements.

An organization in a distributed/decentralized health information exchange (HIE) can accept programmatic requests for copies of a patient record from an external EHR and return records in a standard format.

  • EHR infrastructure capable of responding to queries 24 hr/day, 7 days/week.
  • Record-locator service functionality available and in use.
  • Standard method used to structure data (e.g., C-CDA).
  • Sending EHR’s data dictionary available to receiving EHR.
  • “Internet robustness principle” respected (be liberal in what you accept and conservative in what you send).

An organization can move all its patient records to a new EHR.

  • Standard method to structure key clinical data (e.g., laboratory results, medications, problems, admission history) provided (e.g., HL7 v2.x or v3).
  • Data dictionary used to define clinical and administrative data.
  • Existing metadata (e.g., timestamps, source, and authors) exported to the new system.
  • Transaction history of data items (e.g., renewals and dose changes for a medication) preserved.

An organization can embed encapsulated functionality within its EHR using an application programming interface (API). Goals: access specific data items, manipulate them, and then store a new value. (A brief sketch follows this list.)

  • External applications have “read” and “write” access to clinical and administrative data, including metadata from the EHR (e.g., using the SMART app platform or HL7’s Fast Healthcare Interoperability Resources (FHIR) services).
  • Programmatic method to embed external applications (either code or presentation, i.e., an embedded web application, e.g., Cerner’s mPages) with which the user can interact via the EHR’s user interface without re-compiling the existing EHR’s codebase.
  • Appropriate support and maintenance to ensure that encapsulated functionality will continue to work and meet user needs following system configuration changes or upgrades.
  • HIPAA-compliant protection of newly created data items, just like all other patient-related data (e.g., accessible only to authorized users and backed up with all other patient data).
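
As a brief sketch of the embed use case, the Python snippet below reads and writes clinical data through a FHIR REST interface of the kind the SMART/FHIR bullet above envisions. The base URL and bearer token are placeholders, the OAuth2 authorization a real SMART app would perform is omitted, and content-type details are simplified; it illustrates the read/write pattern, not a specific vendor’s API.

import requests

BASE = "https://ehr.example.org/fhir"   # hypothetical FHIR endpoint
HEADERS = {"Authorization": "Bearer <access-token>", "Accept": "application/fhir+json"}

# Read: fetch a patient's glucose observations
resp = requests.get(f"{BASE}/Observation", headers=HEADERS,
                    params={"patient": "12345", "code": "http://loinc.org|2345-7"})
entries = resp.json().get("entry", [])

# Write: store a new, derived value back into the record
new_obs = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "2345-7"}]},
    "subject": {"reference": "Patient/12345"},
    "valueQuantity": {"value": 108, "unit": "mg/dL"},
}
requests.post(f"{BASE}/Observation", headers=HEADERS, json=new_obs)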

These use cases were designed to address the needs of patients, so they can access their personal health information no matter where they receive their healthcare; clinicians, so they can provide safe and effective healthcare; researchers, so they can advance our understanding of disease and healthcare processes; administrators, so they can reduce their reliance on a single-source EHR developer; and software developers, so they can develop innovative solutions to address limitations of current EHR user interfaces and create new applications to improve the practice of medicine.

In addition to the specific features and functions required to implement these use cases, we also note that many developers limit access to their systems by requiring: special training and certification by the developer before users can extract data from the system or integrate an application; users to sign a non-disclosure agreement; users to pay an additional license fee to access data or integrate an application; customized programming that only the developer can do; or access to documentation that requires special permission or additional fees. While we understand that developers need to maintain a degree of control over access to their software for financial, security, intellectual property, and reliability reasons, we question whether a system subject to such constraints can be considered truly open.

In addition to these use cases, open EHRs should be subjected to stringent conformance testing to ensure that receiving systems are able to import and parse the structured data and store it in the appropriate location within the receiving EHR, while maintaining the metadata and transaction history from the sending system.

Widespread access to open EHRs that implement at least the five EXTREME use cases we propose is necessary if we are to realize the enormous potential of an EHR-enabled healthcare system. Healthcare delivery organizations must require these capabilities in their EHRs. EHR developers must commit to providing them. Healthcare organizations must commit to implementing and using them.

In addition to having all EHRs meet these technical requirements, we must also begin addressing the myriad socio-legal barriers (e.g., lack of a unique patient identifier, information blocking, high-margin fee-for-service clinical testing) to the widespread health information exchange required to transform the modern EHR-enabled healthcare delivery system.

Dean Sittig, PhD is professor of biomedical informatics at the University of Texas Health Science Center at Houston. Adam Wright, PhD is senior scientist in the Division of General Medicine of Brigham and Women’s Hospital, a senior medical informatician with Partners HealthCare, and assistant professor of medicine at Harvard Medical School.


Readers Write: The Learning Healthcare System Starts with the Vendor-Neutral Archive

June 10, 2015 Readers Write No Comments

The Learning Healthcare System Starts with the Vendor-Neutral Archive
By Larry Sitka


The Office of the National Coordinator for Health Information Technology, commonly referred to as ONC, recently released “Connecting Health and Care for the Nation, A Shared Nationwide Interoperability Roadmap (DRAFT Version 1.0).” Inside the 166-page framework description, ONC introduces the need for a platform called a Learning Health System, which it defines as “an environment that links the care delivery system with communities and societal supports in ‘closed loops’ of electronic health information flow, at many different levels, to enable continuous learning and improved health.”

The ONC document is designed to be a 10-year roadmap that describes barriers to interoperability across the current health IT landscape, including a description and proposal for a desired future state of healthcare IT. It introduces an architecture overview for a learning healthcare system and what is required of such a system.

In the report, ONC states that “by 2024, individuals, care providers, communities and researchers should have an array of interoperable health IT products and services that support continuous learning and improved health. This ‘learning health system’ should also result in lower health care costs (by identifying and reducing waste and preventable events), improved population health, empowered consumers and ongoing technological innovation” through coordinated care plans.

The report states that in the future, “all individuals, their families and health care providers should be able to send, receive, find and use electronic health information in a manner that is appropriate, secure, timely and reliable. Individuals should be able to securely share electronic health information with care providers and make use of the electronic health information to support their own health and wellness through informed, shared decision-making.”

While the vision and future state put forth by the ONC are sound, as healthcare professionals, we must ask ourselves, “Where do we begin?” and, “What can we do today to begin reaping some of the benefits of interoperability and providing the foundation for the next 10 years?”

As with any technology revolution, certain technologies mature faster than others and begin to provide a glimpse of the future landscape. In the case of interoperability, the vendor-neutral archive (VNA) is a mature technology that is already playing a leading role in evolving the current healthcare ecosystem toward a learning healthcare system and providing a means for real-time healthcare delivery.

What a VNA provides today is, at its core, the foundation for a learning healthcare system. Thinking of a VNA as merely an imaging storage tool is shortsighted. Why not envision the VNA as providing the pathway and functionality for a patient-centered healthcare discovery tool? The VNA already has the capability to provide an IT interoperability framework that enables many applications to work in unison to learn the context of a patient, inside or outside the current healthcare organization. By leveraging a VNA in this context, suggestive results can be provided to the healthcare organization’s clinicians, physicians, and, most importantly, the patient in a passive or real-time manner.

The VNA is an effective means for improving patient outcomes through interoperability and for moving healthcare organizations beyond the traditional product sell. The ONC report states, “Consumers are increasingly expecting their electronic health data to be available when and where it matters to them, just as their data is in other sectors. New technology is allowing for a more accessible, affordable and innovative approach. However, barriers remain to the seamless sharing and use of electronic health information.” The VNA has all the elements necessary to establish a learning health system foundation.

In the construction of a building, every project begins with the foundation. A solid and stable foundation is critical and must be carefully planned. It is the most difficult structural element to change. The foundation of a learning healthcare system is built around two key components—patient context and the healthcare delivery organization (HDO) context. Taking ownership of the data and focusing on HDO interoperability through standards are essential pillars that must be cemented into this foundation.

From an HDO perspective, ownership of clinical content on behalf of the patient is a mandatory requirement. An assumed role of the HDO, on behalf of the patient, is the holding of collected patient content for future use in the continuum of care. The HDO must define and build a foundation by which secure sharing of patient content is inherent. This environment must be capable of not just storing content but also dynamically finding, moving, and distributing content in real time.

This content is linked and possibly moved into a learning healthcare system independent of the organization’s affiliation. The content is either linked on demand or in the background as information is discovered, further extending the patient’s longitudinal record. The goal of content aggregation is to provide suggestive access to patient information for the healthcare worker who is responsible for delivering a better patient outcome. The patient outcome is the evidence by which the HDO shall be paid.

From the patient perspective, ownership of the data by the patient is now something we vendors must enable and that HDOs are legally bound to steward. HIPAA, for example, can appear to vendors as restricting and controlling. It attempts to define who and what content can be accessed along with the purpose of accessing that content. However, it is actually HIPAA that finally gives ownership of the content back to the patient. It is the first piece of legislation specifying to the HDO and its vendors that true ownership of results and supporting documentation belongs to the patient and not the healthcare organization, the insurance company, or the product vendors.

Once the foundation of a learning healthcare system is created, the framing comes next. Framing requires exact measurements and sizing using standards-based products. With the cutting and coercion of the materials comes a custom fit per the requirements in a blueprint. Such is the case of a learning healthcare system, where the HDO must begin by demanding standardization of not only structured content but also unstructured content. Standardization assures interoperability and a canonical data model that is based on industry standards and site-specific requirements, not proprietary vendor specifics. Standardization or canonicalization of the metadata to be used and exchanged in a learning healthcare system is exactly what a true VNA platform provides.
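
As a minimal sketch of that canonicalization step, the Python snippet below normalizes locally varying study descriptions and patient name formats before content enters the archive. The local values and the mapping table are hypothetical, not a real VNA configuration.

LOCAL_TO_CANONICAL = {
    "CT CHEST W/O CONTRAST": "CT Chest without Contrast",
    "CHEST CT (NO CON)": "CT Chest without Contrast",
    "XR CHEST 2V": "XR Chest 2 Views",
}

def canonical_study_description(local_value: str) -> str:
    # Map a site-specific description onto the agreed-upon canonical term.
    return LOCAL_TO_CANONICAL.get(local_value.strip().upper(), local_value)

def canonical_patient_name(raw: str) -> str:
    """Normalize 'LAST^FIRST' or 'Last, First' variants to 'LAST, FIRST'."""
    raw = raw.replace("^", ",")
    last, _, first = raw.partition(",")
    return f"{last.strip().upper()}, {first.strip().upper()}"

print(canonical_study_description("chest ct (no con)"))   # CT Chest without Contrast
print(canonical_patient_name("Doe^Jane"))                 # DOE, JANE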

Simple problems come with very complex solutions in these cases. For example, patient names, IDs, and study descriptions have become as complex to the HDO as the Y2K problem. Can you imagine the chaos that would ensue from an IT infrastructure not based on wireless or Ethernet standards for physical connectivity? Simply put, what if we all drove on an Interstate without painted lines? What if the map we used for guidance did not include a legend?

Such is the case for the HDO when it comes to delivering a standards-based form of patient content. Of course, there are DICOM standards, HL7 standards, and the XDS framework, but HDOs must demand that vendors actually support and utilize these standards, participating in annual Connectathons to validate their ability to interoperate. More importantly, HDOs must contractually demand interoperability following those exact standards. In short, an HDO must stop purchasing solutions that are unique to its own internal, proprietary standards.

The deployment of the electronic medical record (EMR) to capture and attempt to hold unstructured content, at least inside a data warehouse application, is a step in the right direction. Unfortunately, the EMR only solves half of the problem by providing a collection point. To test this, try to share the unstructured content between EMRs and between organizations. This has become a next-to-impossible task. EMR providers that claim to be able to share unstructured content typically come up far short of expectations.


The idea of sharing an electronic record is what initially drove EMR adoption. But now we have a large volume of unstructured content that must feed the learning healthcare system. The VNA is a capable platform for achieving this goal. The chart above indicates where the VNA is already meeting three-year and six-year interoperability objectives set forth in the ONC report.

The final steps in a construction process are completed by selecting the best products, with the best look and feel, to meet the needs of the owner. Such is the case in creating a learning healthcare system, which demands the ability to select the best products and functionality to deliver the best patient outcomes. Different departments and healthcare settings, much like physicians, have different needs and requirements. Why be limited to only one selection? More importantly, don’t be forced into “one size fits all” in the selection of applications. Give HDO users the flexibility to select the applications that best suit their workflow and objectives. For example, a radiology-centric viewer will not work very efficiently for wound care or treatment planning.

When connecting the building to the outside world, each location typically has its own utility providers that are part of a grid. The same is true for a learning healthcare system, where existing healthcare information exchanges (HIEs) are the on-ramps. The HIE and image or content exchange, which are typically not profitable today, are expected to evolve into much more in the future. Difficulties often arise when seeking cooperation among different, unaffiliated organizations for patient informational access. Vendors, of course, find it difficult to build any product today around something that is not profitable, not to mention being a very difficult sell to HDO executive teams. Tomorrow’s HIE technology inside the learning healthcare system, however, will not only be a necessity but will be integral in making sure image and content exchange is included in the VNA as an embedded feature. Sharing patient content across the private sector, HIEs and government organizations will become commonplace within the next decade, all driven by patient outcomes.

More important, however, is the business and legal perspective. The VNA selected should support an HIE inherently. An image/content exchange is a mandatory requirement of a VNA and is the basis of a learning healthcare system for moving released content in a secure manner. It is also critical that an image/content exchange within a learning healthcare system provide the business process and verification steps, including automation of BAA approval and of patient release form access and approval.

The data demands of a learning healthcare system will far exceed anything an HDO has seen to date. Typically, the sizing of a VNA is done by traffic volumes requested by concurrent users, or by study volumes. However, the oncoming big data analytics applications (a necessity inside a learning healthcare system) will far exceed any current traffic volumes requested by humans. A learning healthcare system will be in a continuous mode of finding, aggregating, and coercing information relevant to the patient in context. This is also a necessity for building out the patient record.

Once found, the information is persisted in the learning healthcare system whereby the analytics and other applications, including natural language processing (NLP), will access the information. NLP will give the data better context and perception around the patient, allowing the healthcare worker to have better informational access and decision processing through new clinical support applications. Support for these demanding applications will require an infrastructure that can scale on-demand, both horizontally and vertically. These applications will leverage your VNA for more than just “basement storage,” where content becomes cluttered and inefficient while never being used again.

The learning healthcare system will be an integral part of improving the way the healthcare ecosystem works and how patients, providers, and payers interact within that ecosystem. Achieving the complete vision of the learning healthcare system will be a gradual process and lessons will be learned throughout the journey. There are important actions we can initiate today, however, to begin building the necessary foundation for this vision. VNA technology is the foundational cornerstone mature enough to begin solving some of the greatest challenges and to remove some of the obstacles to a fully interoperable healthcare system.

Larry Sitka is principal solution architect with Lexmark Healthcare of Lexington, KY.


Readers Write: The Internet of Things Can Revolutionize Healthcare, But Security is Key

May 28, 2015 Readers Write 3 Comments

The Internet of Things Can Revolutionize Healthcare, But Security is Key
By David Ting


The Internet of Things (IoT) holds tremendous promise in healthcare, potentially enabling a digital health revolution and supporting the future of care delivery.

Gartner estimates that approximately 3.9 billion connected things were in use in 2014. This number is expected to increase to 25 billion by 2020, a growth trajectory that will surely impact the healthcare industry, which is already being flooded with devices for generating valuable patient data.

However, the transformative potential of the IoT won’t be realized for healthcare unless data integrity and security are built into the foundations of the IoT movement.

The IoT’s network of IP-connected computers, sensors, and devices allows care providers and patients to share information to a transformative degree by:

  • Giving care providers access to a greater number of devices for accessing protected health information (PHI).
  • Allowing patients to generate real-time biometric data with low-cost devices and applications.
  • Changing the nature of encounters with care givers from episodic to real time.

For clinical staff, the ability to interact with EMRs or other applications containing PHI from any device is invaluable, especially in creating a push vs. pull dynamic for access to patient information and health records. Today’s care providers are highly mobile and the IoT can provide the ability to seamlessly use connected devices within a single session.

For patients, the IoT offers the ability to participate in their own care. Specific patient opportunities include:

  • Generating valuable health information from wearables and home health devices.
  • Allowing real-time voice, video, and data streaming for telemedicine.
  • Enabling more active patient engagement. Instead of requiring patients to take the initiative to look up records or set appointments, messages can be proactively sent to patients informing them about updates or other relevant information.

Some of these changes are already taking place on a small scale. But for the IoT to reach its full potential in healthcare, identity and data integrity will become critical as PHI moves from the hospital to the edge of patient care delivery, especially to assuage consumer concerns about privacy and security.

The data generated by a series of connected devices can only be captured, aggregated, analyzed, and put to meaningful use on a broad scale if the identities of providers and patients are verified. The data being generated, collected, and shared through networked devices must be protected with strong, usable authentication methods.

For providers, authentication is required to meet compliance and privacy regulations. If security considerations are baked into the IoT infrastructure, wearables or other devices can be assigned to particular users and leveraged to verify their identity. Similarly, proximity awareness technologies can simplify the user authentication process to access various devices and applications.

Patient authentication is also essential in the IoT paradigm because it ensures the correct information is being generated by and shared with the correct patient. Creating a one-to-one link between patients and their medical records can establish a foundation for additional forms of patient identification. As with providers, devices will become part of the digital credential set for patients, necessitating a secure enrollment process to bind one or more devices to unique patient identities.
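
As a minimal sketch of that enrollment idea (Python, with key management deliberately simplified to a single server-side secret, and not a description of any vendor’s product), a service can bind a device identifier to a patient identity with an HMAC and verify the binding on later requests.

import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # in practice this key would live in a key vault

def enroll_device(patient_id: str, device_id: str) -> str:
    """Return a binding token tying this device to this patient identity."""
    message = f"{patient_id}:{device_id}".encode()
    return hmac.new(SERVER_SECRET, message, hashlib.sha256).hexdigest()

def verify_device(patient_id: str, device_id: str, token: str) -> bool:
    # Recompute the binding and compare in constant time.
    return hmac.compare_digest(enroll_device(patient_id, device_id), token)

token = enroll_device("patient-123", "wearable-9f8e")
print(verify_device("patient-123", "wearable-9f8e", token))   # True
print(verify_device("patient-123", "another-device", token))  # False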

Constructing the necessary infrastructure to properly manage and optimize the proliferation of connected devices in healthcare starts with security. A strong security strategy includes authentication technologies and processes to verify patient and provider identities to ensure that devices can only be used by authorized users. The communications channels between the devices within the IoT must also be secure to ensure the integrity of the information passing through them.

Putting these security building blocks in place will help create a closed-loop system in which patients and providers can securely interact in a more engaging, meaningful way. 

David Ting is chief technology officer for Imprivata.


Readers Write: Trusted Data Is the Foundation for Advanced Analytics

May 28, 2015 Readers Write 2 Comments

Trusted Data Is the Foundation for Advanced Analytics
By Vicky Mahn-DiNicola RN


Much has been said about using advanced predictive analytics to improve the quality of healthcare. But one thing not receiving the attention it deserves is the prerequisite of trusted data being woven into the fabric of the healthcare organization. Every organization has data at its fingertips, but the full value of that data can only be realized if it is properly understood and trusted.

Take a relatively straightforward data element like a patient’s weight. While it is a simple, basic element, it can create havoc for analytics teams who discover there are upwards of 17 different places in their HIT systems where weight is captured. Weight is recorded in the emergency department flow sheets, nursing assessment intake forms, pharmacy profiles, ambulatory clinic records, and daily critical care flow sheets, just to name a few. Determining which weight field is the most reliable and appropriate to use is a difficult, lengthy process, and one that is multiplied across the hundreds of data variables required in advanced analytics projects.
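
As a minimal sketch of one way an analytics team might arbitrate among those sources, the Python snippet below applies a locally agreed-upon trust ranking and takes the most recent value from the most trusted source. The source names, ranking, and records are hypothetical, not a real HIT schema.

from datetime import datetime

# Lower number = more trusted, per a hypothetical data governance decision.
SOURCE_PRIORITY = {"critical_care_flowsheet": 1, "nursing_intake": 2,
                   "ed_flowsheet": 3, "pharmacy_profile": 4}

weights = [
    {"source": "ed_flowsheet",     "kg": 82.0, "recorded": datetime(2015, 5, 1, 9, 30)},
    {"source": "nursing_intake",   "kg": 81.3, "recorded": datetime(2015, 5, 1, 11, 15)},
    {"source": "pharmacy_profile", "kg": 80.0, "recorded": datetime(2015, 4, 20, 8, 0)},
]

# Prefer the most trusted source; within a source, prefer the most recent entry.
best = min(weights, key=lambda w: (SOURCE_PRIORITY.get(w["source"], 99),
                                   -w["recorded"].timestamp()))
print(f"Use {best['kg']} kg from {best['source']} recorded {best['recorded']:%Y-%m-%d %H:%M}")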

Healthcare organizations are excited by the brilliant technology coming our way in the form of genomics, mobile health, and telemedicine. But too often, the cart is put before the horse. Just as bad ingredients guarantee a bad meal for even the best of chefs,  unreliable data in healthcare will inform inaccurate, even dangerous decisions.

Effective use of analytics is not something you can buy off the shelf from a vendor. Rather it is an organizational strategy, structure, and culture that have to be developed over time. While the technical and tactical execution is delegated to others, the chief executive in a healthcare organization is responsible for determining and overseeing this direction and progress.

The executive also needs to align the organization with data cooperatives and national groups that promote data standardization. National standards have historically been ambiguous, so it is important for providers to ensure they are not working in a vacuum, but have a common understanding of national guidance.

Diversity of systems and processes breeds confusion. Because there are many ways to express any given concept, there is a need for robust crosswalk, data mapping, and standardization to ensure data integrity within, between, and across organizations. This body of work is the responsibility of a designated data governance body within an organization.

Data governance implies far more than the maintenance of documents that describe measurement plans and reporting outputs.  It is a comprehensive process of data stewardship that is adopted by all data stakeholders across the organization, from the board room to the bedside.   Data governance is critical in order to standardize data entry procedures, reporting outputs, clinical alerts, or virtually any information that is used in clinical and business decision-making.  In the era of pay-for-performance and risk-based care, data standardization is mission critical for a true, accurate comparison to take place when evaluating an organization’s performance against external benchmarks and determining reimbursement based on value.

A final step toward creating robust data governance structures is to create a data validation process. Data cleansing and maintenance should be automated, centralized, and transparent across the organization and should be designed to accommodate the needs of both clinical and business stakeholders.

A “data librarian” should be appointed to catalogue and oversee data elements across the healthcare system. The most mature organizations will implement a master data hub that is fully integrated into their application system environments so that changes are made simultaneously to all systems that need the same data. By doing so, a simple element like a patient’s weight will always be consistent in HIT systems.

Organizations need to recognize that the advanced analytics of tomorrow will only be achieved if the data we have today can be trusted. Those who succeed in establishing proper data governance will unlock the full value data can provide in our industry, beyond regulatory reporting and retrospective benchmarking initiatives to the more exciting prospects of predictive and prescriptive analytics.

Vicky Mahn-DiNicola RN, MS, CPHQ is VP of research and market insights with Midas+ Solutions, A Xerox Company.


Readers Write: Demystifying Population Health

May 13, 2015 Readers Write 1 Comment

Demystifying Population Health
By Jeff Wu


Population health was once again a major topic of this year’s HIMSS conference. We saw even more vendors offering products, services, and solutions aimed at helping organizations deal with the challenges population health management presents.

Unfortunately, population health is such a broad domain that no singular solution really encompasses all of it. As a result, vendor offerings tend to address only a specific challenge. The wide and varying offerings across vendors add confusion to the topic.

Population health shouldn’t be an industry buzzword that’s approached with trepidation. Instead, we need to understand the categories of challenges we are trying to address and the process for developing interventions to solve them. Let’s start by taking a look at the three categories that population health management interventions fall into.

  • Government or mandated interventions. For many organizations, this is the primary (and perhaps only) component of their population health strategy. Some initiatives, like becoming an accountable care organization, encompass requirements that address items that will be discussed below. For many organizations, this may be enough.
  • Enterprise population health interventions. These encompass interventions that are applied to the full population of an organization’s patients. Immunization and vaccination interventions or physical activity interventions are broadly applied to an organization’s full patient population. As organizations begin to try to standardize care, interventions aimed at variation reduction are also encompassed here.
  • Cohort, group, or sub-population health interventions. This class of interventions is the most varied and covers any intervention that addresses a sub-population of patients. Some examples of interventions in this category include health maintenance for diabetes patients, preventative care efforts like breast cancer screening in women over 50, and depression/PTSD screening for military veterans.

Population health management evolves linearly in three stages that borrow some classical tools from epidemiological tracking.

  1. Passive surveillance. Passive surveillance involves the retrospective analysis of a specific issue. This is the evaluation of data that already exists. Passive surveillance addresses questions like, "How many of our diabetic patients got a glucose test in the last six months?" or, "How many of our patients got flu vaccines last month?" (a brief sketch of such a query follows this list). Most analysis starts from this level of surveillance. It’s important to note that the majority of organizations are just getting to this point in their analytical journey. Implementation of the EHR tools necessary to do this level of surveillance is finally settling and getting to a state that allows for this to happen. To date, many ‘organized’ population health initiatives focus only on this type of surveillance. CMS’s MSSP ACO initiative is a classic example of this, where an organization participating in the MSSP ACO need only report its measures for the first year to receive its financial incentive.
  2. Active surveillance. The next evolution is active surveillance. If passive surveillance identified how many patients got flu vaccines last month, active surveillance would try to answer how many of our patients got a flu vaccine last week or yesterday. If passive surveillance told us which of our diabetes patients got a glucose test in the last six months, active surveillance would try to address which ones are being well controlled. In the epidemiological world, passive surveillance relies on existing data, while active surveillance implies a program that generates more recent and/or new data. This could be as simple as querying the medical record or running a report more frequently for simple cases, or designing a whole new workflow and data elements to monitor for more complex cases.
  3. Prescriptive intervention. Once a population or initiative is identified, prescriptive intervention is what an organization uses to address the problem. This is where the art of evidence-based medicine comes in. We now have a lot more data to develop more fine-tuned and effective interventions. Things like smoking cessation no longer have to be just a pamphlet, a discussion with a provider, and then a check box in the medical record. Full care teams can be coordinated and then patients can be monitored to help them with compliance.
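
As a brief sketch of the passive-surveillance question in the first item above, the Python snippet below computes the same measure from data the organization already has. The columns and values are made up for illustration; they are not a real registry extract or EHR schema.

import pandas as pd

patients = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "has_diabetes": [True, True, False, True],
    "last_glucose_test": pd.to_datetime(["2015-01-10", "2014-06-01",
                                         "2015-03-02", "2015-04-20"]),
})

# Retrospective question: how many diabetic patients had a glucose test in the last six months?
as_of = pd.Timestamp("2015-05-13")
cutoff = as_of - pd.DateOffset(months=6)

diabetics = patients[patients["has_diabetes"]]
tested = diabetics[diabetics["last_glucose_test"] >= cutoff]

print(f"{len(tested)} of {len(diabetics)} diabetic patients had a glucose test in the last six months")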

As the industry and technology continue to advance, so do the tools at our disposal. Sentinel surveillance and predictive analytics offer some exciting opportunities to do more, earlier. Additionally, the increased volume of data allows us to start taking a more in-depth look at cost-effectiveness and variation reduction between treatments for diseases.

It’s imperative to remember that every organization’s population health strategy will necessarily be different. This is because each organization’s population of patients is different. The vendor perspective often approaches organizations with packaged solutions, when in reality, it’s almost impossible for these solutions to be “one size fits all.” Even a product geared to a specific population health goal will require nuanced configuration to be effective for an individual organization.

Here in Madison, Wisconsin, population health interventions for UW Health are drastically different than Dean St. Mary’s or Group Health Co-op. UW is an academic medical center that draws high-acuity patients from across Wisconsin, while Dean has the region’s only obstetrics practice and GHC handles only primary care needs. While these organizations may benefit from adopting collaborative population health initiatives like the MSSP ACO (which both Dean and UW are a part of), their intervention focuses differ significantly based on their unique patient populations. Seldom can a product or solution apply to both, and even more rarely will it work for both.

As the industry continues to shift care delivery to encompass a population-based perspective, we are constantly introducing changes to our workflows, our assumptions, and most importantly, our expectations. These changes introduce uncertainty and apprehension, but they are also our greatest opportunity. It’s important to realize that population health management isn’t actually anything new. We’ve been here before—we’re just upping the scale.

Jeff Wu is a population health researcher at the University of Wisconsin-Madison.

