
Readers Write: Harnessing Data to Support Population Health Management and the Evolution of Next-Gen Population Health Management

October 15, 2014
By Larry Schor


Accountable healthcare delivery is in the midst of a three-stage evolution as organizations increasingly turn to the promise of health IT and data to improve patient care and the bottom line.

First-generation accountable care is all about meeting process quality measures and closing gaps in care. At this stage, provider compensation is loosely tied to compliance with standards of care and protocols for specific common conditions, such as immunizations or screenings for diabetes and glaucoma. However, during this phase, financial rewards predominantly come in the form of bonuses for achieving quality measures with little or no downside financial risk.

As the industry currently evolves from first-generation toward middle-generation accountable care, new complexities are emerging. As such, healthcare organizations must manage clinical risk and begin assuming limited financial risk for identified patient populations.

Because both upside bonuses and limited downside financial risks exist at this stage, it is imperative that patients are clinically well controlled. Clinical data, therefore, becomes increasingly important for understanding risk. The historic reliance on claims data will no longer suffice. It is at this second stage of maturity that next-gen population health management becomes a critical strategy, because it effectively blends clinical and financial data.

Once healthcare organizations achieve next-gen population health management, mature accountable care — which is characterized by high-performing networks operating under full global risk arrangements — can be realized. This advanced care delivery model focuses on optimization and lowest total cost of care, achieved through high patient engagement as the result of personalized outreach and full next-gen population health management. The benefits of this stage of maturity will be realized through more comprehensive and precise analytics to personalize patient care, especially for those with chronic conditions.

While national initiatives are encouraging the forward momentum of accountable care, a bird’s eye view of the industry reveals that most healthcare organizations are in the very early stages of this cultural shift. Despite evolving reimbursement models that are gradually incentivizing quality outcomes and efficiency, organizations still must invest in the necessary infrastructure and embrace new workflows.

Electronic health record implementation provides one example. To date, even the most sophisticated EHRs usually are implemented as little more than electronic versions of existing processes and workflows. What is needed instead are more comprehensive and precise analytics to segment patients and personalize patient care.

Traditional analytics match demographic and claims data against quality measures, but engage all patients with similar conditions in the same manner. All patients identified with Type 2 diabetes, for instance, might be offered the same form of educational outreach. While EHRs today offer transactional clinical decision support at the point of care—some are even adding managed care modules—they lack the capability to support the data-driven workflow of a distributed care coordination team. They are not designed to ensure top-of-license performance by all participants in the cycle of care, whether they are charged with managing a patient’s financial, clinical, or social welfare.

With new analytics, however, healthcare organizations can begin to offer a more tailored approach to care based on reviewing more comprehensive claims, clinical, and psychosocial data. As such, future success with population health management requires a data management infrastructure designed to capture an exploding volume and variety of data in real-time, much of it outside the claims stream.

Going forward, the strongest organizations will be those that most effectively harness, integrate, and analyze multiple types of data to inform the care of patient populations at the point of care. For example, claims data may reveal what treatments patients were provided in the past, but not necessarily whether they worked. Psychosocial data—such as whether a patient drives or has adequate social support—can have a massive impact on the success or failure of care, but is often embedded within provider documentation. Pharmacy, lab, and real-time clinical biometric data from devices such as wireless glucometers and scales is essential to effective care management.

Simply put, a real-time, 360-degree view of the patient, plan of care, evidence-based guidelines and psychosocial data results in more targeted, effective population health management, which in turn leads to better, more accountable care.

Effectively improving population health and the bottom line will require that data be translated into structured content readily available for analysis. Healthcare organizations today must take advantage of technology that allows storage and maintenance of data at its finest-grain level. It is no longer adequate to extract data, drop it into a data warehouse, and run pre-defined reports. This solution simply isn’t agile enough to answer new questions or handle increasing data volumes.

Instead, data must be conditioned—data hygiene is essential if the data is to be usable from the outset. Natural language processing is also becoming increasingly valuable for extracting actionable data from physician notes.
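As a toy illustration of what that extraction can look like — this is a simple regex sketch, not a real clinical NLP system, and the note text is invented for the example — a discrete value can be pulled out of a free-text note so it becomes structured, analyzable data:

```python
import re

# Invented free-text physician note (illustration only).
note = "Pt with T2DM, reports good adherence. HbA1c 8.2% today, down from 9.1%."

# Capture "HbA1c" followed by a numeric percentage so the result can be
# stored as a discrete field rather than staying buried in prose.
values = [float(v) for v in re.findall(r"HbA1c\s*(\d+(?:\.\d+)?)\s*%", note)]
print(values)  # prints [8.2] — the second number lacks the "HbA1c" label
```

Real clinical NLP must handle negation, abbreviations, and context, which is why it is a specialty in itself; the sketch only shows why structured capture matters.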

Cloud-based storage strategies have proven most effective for supporting greater volumes of new data. Cloud environments offer an on-demand infrastructure capable of finding the right signals in the noise as the velocity, volume, and variety of data increase. Overall, healthcare organizations must employ technologies capable of clearly identifying relevant data and surfacing it at the point of care in a way that is quickly and easily consumable by providers.

Information is becoming a driver of consumer and clinical value in healthcare. In the near future, the use of data to enable effective population health management will align healthcare organizations with the cost and care quality goals so vital under accountable care reimbursement models. The most successful healthcare organizations, therefore, will be those that find new ways to use technology to leverage a wide range of patient data to improve both the bottom line and patient care.

Larry Schor is SVP of Medecision.

Readers Write: Are You a “Check the Box” Executive?

October 15, 2014
By Dana Sellers


Over the Labor Day weekend, CMS released an update for Stage 2 Meaningful Use that provides some relief to providers struggling to fully implement the 2014 requirements. That’s great, but here’s the problem: Meaningful Use is not just an exercise to check some boxes off.

It’s more than implementing CPOE. It’s more than getting your physicians to use a problem list. It’s more than the incentive dollars. It’s about getting value beyond the implementation.

If your organization attested in 2012, you have been continuously collecting discrete, standardized, and coded data for close to two years. You’ve done the heavy lifting and you’re continuing to do so for Stage 2. Now you have a foundation that provides a common data platform across the organization with standardized vocabularies, regardless of different EHRs or other operational systems.

While you may be awash with all kinds of data, Meaningful Use provides specific clinical data that you can focus on. You have a means to ensure that all parts of the organization can begin to measure the same things the same way.

In a recent project, we turned our new cadre of Quintiles researchers and biostatisticians loose on a bunch of clinical data. We imposed one important ground rule: we limited the data to things that were already being collected for Meaningful Use. We asked if they could find anything interesting. In a matter of weeks, they discovered significant findings that relate directly to outcomes and cost.

Here’s the cool part. Every organization that has attested for Meaningful Use has the data needed to do the same kind of study.

Are you looking at Meaningful Use as a check-the-box exercise, or are you looking to drive real value? Have you considered the possibilities of using your current data foundation in order to improve workflow and processes?

For example, consider changing how the patient intake process occurs—not only to collect better data, but also to improve safety and care coordination. Can you move beyond monitoring clinical process measures to conducting analytics that will drive insights for better care and outcomes?

It takes an organization that thinks about Meaningful Use as a foundation for value, and it requires change.

  • Break down organizational silos. No single department owns the challenges facing organizations around quality, cost, and performance. Yet multiple departments and stakeholders often try to answer the same types of questions, resulting in inefficient processes as well as conflicting answers. Create cross-departmental, multi-disciplinary teams to address these challenges.
  • Get data governance in place. Information transformation requires that data is consistent, accurate, and timely. This foundational data is a start, but still requires an organizational structure and process to provide direction and decision-making to create common definitions and apply common standards across multiple stakeholders and departments.
  • Start with the foundation. There is tremendous value in the foundational MU data. Begin to explore beyond the standard Meaningful Use process objectives. Use this foundation to evaluate how well standards are applied, and explore the data set for other clinical insights, such as the impact of evidence-based orders on specific disease-based populations.

Meaningful Use is not an IT project or task to cross off a project list. It is a foundation for an information journey to value.

Dana Sellers is CEO of Encore, A Quintiles Company of Houston, TX.

Readers Write: What to Ask When Deciding to Take the CMS 68 Percent Settlement Offer

October 15, 2014
By Bill Malm


The October 31 deadline for providers to decide whether or not to take the 68 percent settlement offer from CMS is quickly approaching. This settlement enables any provider to withdraw their pending inpatient appeals in exchange for a timely partial payment equal to 68 percent of the net allowable amount. CMS is offering this settlement to reduce the volume of inpatient status claims currently pending in the appeals process and to alleviate the administrative burden on both providers and Medicare.

Many healthcare organizations have already submitted their request to take this agreement, but if your hospital is still weighing the pros and cons of doing so, some key factors for consideration include the following.

  • Does your hospital have significant dollars at risk or a high volume of outstanding appeals? Hospitals with a large number of appeals and/or a significant amount of revenue tied up in the appeals process may benefit from seeing the appeals through the ALJ process. Interest payments alone could outweigh any reason to settle.
  • Was your hospital’s appeal strategy based on an internal review process that appealed only strong cases, writing off weaker cases? Hospitals that had a denial review strategy and chose to appeal only those cases with a reasonable likelihood of success may not want to agree to a 32 percent reduction in payment and forfeit the Limitation on Recoupment 935 interest. On the other hand, hospitals that appealed cases indiscriminately are promised 68 percent of the net payable amount. In the end, this may result in a higher payment for these organizations.
  • What was your hospital’s recoupment strategy? Is the expected interest on a successful appeal financially substantive or marginal? If your facility allowed immediate recoupment of overpayments following receipt of Demand Letters, then your claims are not subject to 935 interest. Conversely, 935 interest is owed when claims were involuntarily recouped and you prevail at the ALJ level. For claims that wait years for an ALJ hearing, this payment could be substantial.
  • How badly do you need your money? This may seem like a silly question, but keep in mind that strong appeals and long wait times will likely result in payments with greater than 100 percent value, but it may be a very long time before you see that money. Can you afford to wait? Hospitals that accept the settlement can expect reimbursement within 60 days of a fully executed agreement.
  • What is the cost associated with pursuing your appeals? Hospitals with high costs associated with pursuing appeals may want to consider the settlement. Those costs might include consultants, attorneys, and expert witnesses, as well as internal personnel time and resources.
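The questions above boil down to an expected-value comparison. Here is a back-of-envelope sketch — the dollars at stake, win rate, and accrued 935 interest are all hypothetical figures for illustration; only the 68 percent rate and roughly 60-day payment window come from the CMS offer itself:

```python
# Hypothetical inputs for illustration only.
claims_at_stake = 1_000_000   # net allowable amount tied up in appeals
win_rate = 0.75               # assumed share of appeals that would succeed
interest_935 = 0.10           # assumed cumulative 935 interest by ALJ decision

settlement_now = 0.68 * claims_at_stake                            # paid within ~60 days
expected_appeal = claims_at_stake * win_rate * (1 + interest_935)  # paid years later

print(f"Settle now:           ${settlement_now:,.0f}")   # $680,000
print(f"Expected via appeals: ${expected_appeal:,.0f}")  # $825,000
```

Under these particular assumptions the appeals route wins on paper, but the settlement pays out within 60 days — exactly the time-value and cash-flow tradeoff the questions above are probing.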

Deciding whether or not to take this settlement depends on a variety of circumstances. The final decision should be based on a position of financial strength and a strategic choice rather than a short-term stopgap out of necessity.

Bill Malm is senior manager of revenue integrity communications at Craneware.

Readers Write: I-STOP May Be the Biggest Health IT Game-Changer of All

October 8, 2014
By Tony Schueth


Over the years, e-prescribing has needed and seen its share of enabling game-changers as it competes against the sub-minute it takes to write a paper prescription. But none may be bigger than the New York state law, I-STOP, that requires all prescriptions to be transmitted electronically by March 27, 2015.

More impactful than Meaningful Use, the Medicare Prescription Drug, Improvement, and Modernization Act (MMA), or the Medicare Improvements for Patients and Providers Act (MIPPA)? Potentially yes, but not necessarily in a positive way or limited to e-prescribing.

In August 2012, the governor of New York signed the Senate Bill 7637/Assembly Bill 10623: Internet System for Tracking Over-Prescribing (I-STOP) Act into law. At the time, New York’s Attorney General Eric Schneiderman said, “I-STOP will be a national model for smart, coordinated communication between healthcare providers and law enforcement to better serve patients, stop prescription drug trafficking, and provide treatment to those who need help.”

Unlike prescribers in other states, where such checks are optional, New York prescribers are required to check the New York State prescription drug monitoring program registry database before writing a prescription for any controlled substance. I-STOP has other provisions as well, such as improved safeguards for the distribution of prescription drugs prone to abuse, medical education courses, public awareness efforts, and establishment of an unused medication disposal program.

The State of New York obviously sees e-prescribing as part of a bolder effort to curb prescription drug abuse. Kudos to the state legislators for getting that. Electronic prescriptions flow through a secure, closed channel from prescriber to pharmacy. Each step of the process is electronically logged. It is unquestionably a vast improvement over paper in reducing fraud and impeding diversion.

A law of this magnitude from a bellwether state is impactful in many ways. Other states are surely watching and, should it be successful, will likely follow. But if it’s not successful, there will be implications, too.

The impact begins with pushing along the nascent effort of e-prescribing of controlled substances (EPCS). Although the DEA issued an interim final rule in 2010 permitting such an effort, its uptake has been slow. According to Surescripts, as of July 31, 570,000 EPCS prescriptions were transmitted via their network year to date. That puts EPCS adoption at far less than one percent, since about 500 million of our 3.85 billion retail prescriptions are for controlled substances.
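The "far less than one percent" figure follows directly from the numbers cited above (a rough comparison, since the 570,000 is a year-to-date count against an approximate annual volume):

```python
epcs_ytd = 570_000               # EPCS prescriptions via Surescripts, year to date
controlled_annual = 500_000_000  # approx. controlled-substance scripts per year
share = epcs_ytd / controlled_annual
print(f"{share:.2%}")  # prints 0.11%
```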

As a recent case study supports, the biggest challenge for EPCS is that physicians still don’t know that they can prescribe controlled substances electronically and pharmacists aren’t aware they can accept them in that manner. This lack of awareness keeps physicians and pharmacists – especially independents – from requesting such functionality from their vendors. As a result, too many EHR, e-prescribing, and pharmacy vendors assign a lower priority to EPCS with what little bandwidth they have outside of Meaningful Use, ICD-10, and NCPDP SCRIPT 10.6.

According to Surescripts, only 14 prescriber vendors are certified for EPCS. While those include three of the top five EHRs and the “ePrescribing inside” market share leaders DrFirst and NewCrop, version issues, client factors, up-sell challenges, and other considerations mean that only a small number of EHRs are EPCS-enabled.

Nationwide, the pharmacy side is not there yet, either. While the two largest chains are able to receive and process controlled prescriptions electronically, many of the smaller chains and independents are not. According to Surescripts, 31,000 of 67,000 pharmacy locations are enabled for EPCS.

After enhancing their products to meet the New York guidelines, however, both EHRs and pharmacy software vendors should find taking their EPCS solutions elsewhere to be less of a challenge.

All that said, nationwide, it will continue to be the classic, “Which comes first, the chicken or the egg?” situation. To get past that, it takes education and coordination, which are elements of I-STOP.

For the education component, I-STOP charged a workgroup of stakeholders and the Department of Health with responsibility to guide public awareness measures. Our EHR clients tell us they aren’t hearing from their New York customers, so are physicians in New York unaware of I-STOP? A simple Google search on I-STOP yields a few articles, most from when it launched. Hopefully, a huge campaign is planned.

The prescriber consequences are significant, especially for physicians. According to the New York Bureau of Narcotic Enforcement (BNE), non-compliance is punishable by a $2,000 fine, imprisonment not exceeding one year, or both. Furthermore, it is considered to be professional misconduct by the applicable professional boards, which could lead to suspension or revocation of professional licenses.

With government mandates, enforcement is always a question. People who know the BNE and New York’s Attorney General Office say they wouldn’t hesitate to enforce this, especially given the larger objective of curbing fraud and abuse. To be sure, I wouldn’t want to be the vendor that caused the $2,000 fine or any of the more serious consequences.

From a coordination perspective, there’s nothing like a mandate and deadline to get everyone on the same page. But the consequences are to the prescriber, not the pharmacy, and the EHR vendors just have to deal with upset clients.

So, how is it going? We don’t have the most up-to-date data about New York specifically. As of December 31, 2013, 62 percent of physicians in New York were routing prescriptions, according to Surescripts. While a lot can change in a year, 38 percent of physicians are not prescribing electronically, and as noted earlier, fewer than one percent are e-prescribing controlled substances nationally. Only one of the top two EHRs in New York is EPCS-certified through Surescripts, so the others have a lot of ground to cover by March 27, 2015.

What if large numbers miss the deadline? Issuing fines to that many prescribers will be a logistical — not to mention political — challenge. New York could issue an ICD-10- or MU Stage 2-style extension or waivers. However, there’s a lot of frustration out there about those delays. New York issuing such outs, or simply not enforcing the law, could further lessen the impact of all mandates, arguably making I-STOP the biggest game-changer ever, and not just for e-prescribing.

Tony Schueth is CEO of Point-of-Care Partners of Coral Springs, FL.

Readers Write: A CIO’s Perspective on the Options for Health System Analytics

October 8, 2014
By Gene Thomas


Buying an EMR is an important decision, but choosing an analytics solution is far more important. In today’s healthcare marketplace, installing an EMR is table stakes. Granted, it’s necessary and expensive table stakes, but it’s still just the starting point.

The real key to transforming healthcare performance lies in analytics and the humans who use it to make data-driven decisions. An EMR captures the data. Analytics uses that data to deliver the insight needed to improve the quality and cost of care.

Improving quality and cost is on everyone’s mind. At the organization where I serve as CIO, Memorial Hospital at Gulfport in Mississippi, it is a critical priority. The majority of our volume comes from Medicare and Medicaid beneficiaries and the uninsured. We are a not-for-profit, single-hospital system. We have to focus on costs and quality in order to continue to serve our community.

Fortunately, we’re advancing steadily along the path of putting infrastructure in place to drive the necessary improvement. We rolled out our integrated EMR this spring and we are now implementing our analytics solution.

I started this article by stating how important analytics is. Choosing what type of analytics solution to implement was not a decision we took lightly. I want to outline here the factors we considered as we made that choice.

I wouldn’t say that selecting our EMR solution was easy, but the fact that there were only a handful of viable options certainly simplified the process. Choosing an analytics solution was a different story. A wide variety of analytics solutions are available and they all claim to drive quality and cost improvement. We looked at BI tools. We researched multiple vendors with point solutions that address areas like capitated payments, fee-for-quality, and ACOs.

Ultimately, we decided that the right solution for our enterprise-wide analytics strategy would be an enterprise data warehouse (EDW). But even then there were several possible paths to take. We could build our own EDW, we could adopt our EMR vendor’s emerging EDW solution, or we could implement an EDW solution from a third-party analytics specialist vendor.

We quickly dismissed the option of building it ourselves. We simply didn’t have the time or resources for a trial-and-error, homegrown approach. That left us to decide between our EMR vendor’s EDW and a specialist’s solution. We went with the specialist’s solution.

Our EMR vendor’s EDW was relatively inexpensive and there was something attractive about the convenience of having one less vendor to manage. Still, I approached their EDW offering with some skepticism. I trusted their ability to handle all of the transactional functionality that is an EMR vendor’s core competency, but analytics is not part of that core competency.

Ultimately, we set three criteria as essential in a vendor. Any analytics vendor we selected would have to demonstrate the following.

A significant track record with analytics

EMR vendors really don’t have an analytics track record. Their analytics experience lies mainly in tactical operational reporting. They can easily tell me how many of my patients are on a certain medication, but my improvement initiatives will require much greater sophistication.

Specialist vendors, on the other hand, have been living and breathing nothing but analytics for years (and sometimes even decades). The best ones can share concrete examples of how their solutions have driven measurable quality and cost improvement.

The agile data architecture required to handle big data

Our EMR vendor is obviously an expert on transactional systems architecture, but that doesn’t translate to expertise in architecting a powerful analytics solution that runs on a completely different type of database. With so much volatility in healthcare today, I wanted to be sure I had a flexible architecture for analytics that could expertly adapt to new rules, standards, vocabularies, and use cases.

The ability to integrate data from multiple systems, including competitors

This was a huge consideration for us. EMR vendors are generally unwilling or unable to pull data from external sources, particularly competitive systems. We needed a solution that was source-system neutral and only the third-party analytics specialists could deliver that. Integrating data from just about any system you can imagine is their core competency. My understanding is that some EMR vendors have recognized the need to allow integration of data from beyond the EMR, but they are years behind the specialists in terms of doing this well.

I recently came across a 2013 survey by CHIME that found that 80 percent of CIOs believe analytics is an important strategic goal, but that only 45 percent feel they have a handle on it. I don’t claim to be an expert on analytics, but I hope that this brief account of my experience so far will be helpful to some.

My biggest piece of advice to any colleague who has yet to tackle analytics is to get started as soon as possible. I believe that CIOs need to change. Our focus can’t be just on the bits, bytes, databases, and servers. All of that is still an important element of what we do, and I have a staff that takes care of those details, but my focus as CIO is to provide data and information to all stakeholders—our executives, our clinicians, our patients, and more—to help drive better outcomes. That means a top area of focus for me is analytics.

Gene Thomas is chief information officer of Memorial Hospital in Gulfport, MS.

Readers Write: Communicating Across the Continuum

October 8, 2014
By Steve Whitehurst


As consumerism continues to permeate the healthcare industry, hospitals must place more emphasis on how they treat their patients across the entire care continuum, inside and outside the four walls of their facility. To do this, patients must be addressed at every touch point in order to fully meet their needs and sustain their satisfaction.

Though such support is increasingly important, many hospitals struggle to meet patients’ 24/7 communication needs due to limited staff, reduced budgets, and unclear communication expectations. Yet without a communication plan in place, interacting with patients and keeping them engaged and satisfied can be very difficult, limiting a hospital’s ability to sustain an enhanced patient experience, increase patient satisfaction, keep patients compliant with their care plans, and build brand loyalty—not to mention potentially increasing the risk of readmission.

By creating a comprehensive communication strategy that blends live operators and clinicians with automated technology platforms across the continuum, hospitals can effectively manage their interactions with patients inside and outside the facility’s walls to increase both care quality and patient experience.

With Meaningful Use incentives and other regulations driving the implementation of patient portals, many healthcare organizations are pouring resources into electronic communication platforms that use email or direct messaging to communicate with patients. Although these methods certainly improve engagement, they are not always effective at reaching all patients or providing personalized attention.

For instance, most patient portals are capable of delivering educational material to patients. However, there’s no way of knowing whether the patient actually reads and understands the information unless someone directly asks and engages the patient in conversation. Whether face-to-face or over the phone, once personal interactions are lost, the organization loses its ability to make sure patients are adhering to their medications and complying with their care plans.

Conversely, hospitals that employ high-touch communication strategies, such as the following, can engage patients across the continuum to promote more favorable outcomes, in addition to realizing measurable improvements in patient satisfaction and HCAHPS scores.

  • Live voice follow-up after discharge. One of the most effective methods for reaching patients, this communication tactic enables organizations to know when they’ve reached patients and provide personalized communication to their patients by asking and answering questions, ensuring patients are adhering to their medication and care plans, and providing additional education. Statistics show that patient satisfaction improves when communication services like live voice are leveraged at specific touch points in a patient’s care continuum.
  • Communication to support care coordination. For patients with complex conditions, multiple comorbidities, or who are high-risk for readmission, communication services can improve care coordination by going beyond discharge follow-up to help patients navigate their care plans. These services, for instance, can help patients with medication management (including medication reconciliation and adherence), disease management, and health coaching. As an example, when patients are prescribed new medications or receive changes to previous prescriptions, it can be difficult to figure out which medications should be taken, when they should be taken, and specific side effects to look for. Care coordination follow-up support can help patients navigate these questions, ensuring they take medications in the most appropriate way. Likewise, these services can also identify barriers patients may have in obtaining or taking their medications and offer solutions to help with adherence.
  • Answering services. Inbound services that receive calls from patients provide opportunities for healthcare organizations to address questions or concerns immediately rather than waiting for providers to return phone calls. When these services are managed by highly trained teams qualified to listen to and answer patient concerns, organizations can meet patients’ needs more efficiently and in a more timely manner, increasing patient satisfaction.
  • Automated services. Although live voice interactions are most effective for facilitating conversations between patients and providers, automated services can be useful for routine patient outreach, such as reminding patients to schedule and attend upcoming appointments or refill prescriptions. By leveraging automated services in appropriate situations, organizations can concentrate their human resources on more meaningful interactions with patients.

Whether managed in-house or outsourced, a comprehensive communications plan will enable hospitals to continue the patient-provider conversation long after patients leave the facility, enhancing their experience throughout the entire care continuum.

Steve Whitehurst is the vice president and general manager of Stericycle Communication Solutions.

Readers Write: Will You be Shocked by Shellshock?

October 1, 2014
By John Gomez

Here is a riddle for you. What is old yet new, and at the same time scary yet contained, while being known yet potentially a big surprise?

If you answered Shellshock, you collect $200 and go to the front of the class. Shellshock is a new computer exploit that was discovered in the past few weeks, but “new” isn’t exactly right. The vulnerability, which may compromise Linux- and Unix-based systems, has been around for 25 years. While newly discovered, it is actually rather old.

Shellshock is scary because it allows someone to take over a Linux- or Unix-based computer (such as your Mac, iPhone, iPad, BSD, Red Hat, Ubuntu system) and bypass all security. This is accomplished by accessing the old-school command line shell known as Bash and executing commands that to most of us make no sense at all in this day of graphical interfaces.

Want to see if your Mac, Linux, or Unix system is vulnerable? Open a terminal or command shell and type in the following (no, it won’t give me super secret ninja access to your system):

env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'

If you see the word “vulnerable” after you hit enter, your system is at risk.

Before you get worried, keep in mind that in most cases, if you have a firewall up and running, you are more than likely safe (assuming your firewall isn’t at risk of Shellshock, but that is beyond our focus in this article). 

Shellshock exists because a programmer 25 years ago made a coding error in a fundamental part of the operating system. Shellshock isn’t some trick or hack — it’s just exploiting a bug. Unlike a purpose-built worm or virus, Shellshock is really just a how-to for hackers to embrace.

Most vendors of Unix/Linux-based systems such as Apple, Red Hat, and others have already released patches to fix the bug. The challenge you face is making sure that you deploy these patches quickly. A smart hacker could take control of your system and prevent the patch from being effective, so time isn’t on your side. You need to move fast.
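The one-line test shown earlier doubles as a post-patch verification. A minimal sketch of wrapping it into a pass/fail check (assuming a Bash binary on the PATH; CVE-2014-6271 is the identifier for the original Shellshock bug):

```shell
# Re-run the canonical Shellshock probe after patching. On a vulnerable Bash,
# the function definition smuggled in through the environment variable is
# parsed past its closing brace and the trailing "echo vulnerable" executes.
# A patched Bash treats the extra text as inert data and prints nothing.
if env x='() { :;}; echo vulnerable' bash -c 'true' 2>/dev/null | grep -q vulnerable; then
    echo "Bash is still VULNERABLE - re-check your patch deployment"
else
    echo "Bash ignored the crafted environment variable - patch looks effective"
fi
```

Because a clever attacker could interfere with patch deployment, re-running a check like this after every update cycle is cheap insurance.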

You can ask your security team to check their IDS and other logs to see if someone has attempted to gain access to your system using the Shellshock vulnerability. If your team sees active Shellshock scans, you should really do a triple check of your systems and determine if you were penetrated. It isn’t easy to figure out, and more than likely you should get professional support if you suspect you were scanned and successfully attacked.
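Shellshock probes typically arrive as HTTP requests with the tell-tale `() {` function-definition prefix stuffed into a header such as User-Agent, so web server access logs are a quick first place to look alongside the IDS review. A hedged sketch (the log path and sample entries below are hypothetical, modeled on publicly reported scan traffic; `grep -F` matches the pattern as a literal string rather than a regex):

```shell
# Hypothetical sample log: the first entry carries the Shellshock signature
# in its User-Agent field, hoping a CGI script exports it into Bash's
# environment; the second is ordinary traffic.
cat > /tmp/sample_access.log <<'EOF'
203.0.113.5 - - [25/Sep/2014:10:11:12 +0000] "GET /cgi-bin/status HTTP/1.0" 200 512 "-" "() { :;}; /bin/ping -c 1 203.0.113.5"
198.51.100.7 - - [25/Sep/2014:10:12:30 +0000] "GET /index.html HTTP/1.1" 200 1042 "-" "Mozilla/5.0"
EOF

# -F prevents the parentheses and braces from being read as regex syntax.
# On a production system you would point this at your real access logs.
grep -F '() {' /tmp/sample_access.log
```

A match proves only that someone probed you, not that the probe succeeded, which is why professional forensic help is worth engaging if scans show up.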

We have covered why Shellshock is old yet new and scary yet contained. What about known and yet a surprise? It is known simply because we know the targets. Most hackers are going to attack web, database, and other IP-based servers on your network that run on Linux/Unix. Where is the surprise?

The surprise is that what may be most vulnerable are those things we think of the least. Most connected devices we find in a healthcare environment (from a lab to a clinic to a retail pharmacy to a doctor’s office and everything in between) are based on some form of Linux/Unix. This not only includes your medical devices and diagnostic equipment, but also things like your security system, CCTV cameras, and smart door locks.  

Because we live in the age of the Internet of Things (IoT), chances are that if your device or system has an IP address or a call-home feature, it is running some form of Linux/Unix. That means you could be in for a big surprise if a hacker gains control of your MRI, CT scanner, or something less critical like your CCTV cameras.

The good news in all this (if there is good news) is that most devices run a form of Linux/Unix known as BusyBox, which is not vulnerable to Shellshock. Also, most devices in healthcare environments do not make use of Bash, which is the component that is vulnerable.  

That said, you really shouldn’t just hope that your devices are running BusyBox or that Bash isn’t present. It would be wise and prudent (and some may say legally responsible) to evaluate your risk by contacting your vendors to see which devices are vulnerable. If a vendor has an at-risk system, ask directly what they intend to do and how quickly. Don’t be surprised if many of your device vendors don’t know whether they are at risk — many deploy Linux/Unix systems and cannot clearly detail whether Bash is enabled.

If the device you are concerned about involves patient care, you have a critical decision to make and need to clearly understand if there was an attack. For the most part, patient care devices such as an MRI are behind (or should be behind) several layers of network protection or only have a one-way connection using a trusted tunnel. While hoping that is true, check, double-check, and triple-check because lives are at stake.

You should also make sure your physical security organization understands the impact of Shellshock on its systems. In this IoT world, many of the devices that could be vulnerable may have nothing to do with traditional IT. For instance, webcams allowing security teams to monitor infrastructure are IP-based, and many are now accessible to security officers from smartphones. Most webcams have built-in web servers based on Linux/Unix and live on your network in some form or fashion. It is important that those who are responsible for non-IT/HIT electronic devices also make sure that their devices are secure and not vulnerable to Shellshock.

Lastly, you should be checking with your HIPAA business associates to understand their response to Shellshock. You have an ongoing requirement to ascertain your BA’s ability to protect patient health information. Like Heartbleed, Shellshock is considered a significant threat and could easily be used to compromise PHI. Failure to assure that your BA is taking steps to secure your PHI on their networks from Shellshock could be an issue for your organization.

So there you have it. Shellshock is all at once old and new, scary and contained, and known. Because of this brave new world of connected everything, it could very well provide you with the surprise of your life.

John Gomez is CEO of Sensato of Asbury Park, NJ.

Readers Write: Feeling the Pain of Meaningful Use? Try Vicodin

September 29, 2014 Readers Write No Comments

Feeling the Pain of Meaningful Use? Try Vicodin
By David Ting


Meaningful Use Stage 2 requirements state that eligible professionals must transmit more than 50 percent of all permissible prescriptions electronically using a certified EHR system, an increase from a 40 percent threshold in Stage 1.

Although the use of e-prescribing continues to increase (Surescripts reports adoption rates of about 73 percent), many CIOs and other healthcare leaders I meet think they will struggle to achieve the 50 percent threshold without including controlled substances, which are almost always prescribed using paper-based prescriptions.

In today’s frenetic healthcare environment in which clinicians are constantly pressed for time, many default to a single workflow of using paper prescriptions for all medications for simplicity. This decreases utilization of e-prescribing and makes it harder to meet the required 50 percent threshold. In addition, it decreases patient safety and provider efficiency and results in greater inconvenience for patients who are forced to not only pick up a prescription at the provider’s office, but also endure longer wait times at the pharmacy.

For those CMIOs feeling the pain of trying to meet Meaningful Use e-prescribing requirements, Vicodin might provide the answer.

In August, DEA issued a ruling to reclassify hydrocodone combination products such as Vicodin from a Schedule III to a Schedule II controlled substance. This ruling puts tighter controls on how these highly addictive medications can be prescribed. For instance, doctors can prescribe a maximum three-month supply (previously it was six months) before patients need another prescription to be written.

Consider that in 2012, 135 million prescriptions were written for hydrocodone combination products in the US. The ruling could conceivably double this number, which would increase the total number of prescriptions for controlled substances by 25 percent or more. This increase in volume will exacerbate the challenges created by the inability to e-prescribe controlled substances, particularly as it relates to dual workflows for prescribers and the consequential impact on meeting Meaningful Use requirements.

For this ruling to be successful and have the desired impact on reducing drug abuse, systems like electronic prescribing of controlled substances (EPCS) must be implemented to ensure the tighter restrictions are enforced without creating barriers for physicians to write and refill prescriptions for patients truly in need. EPCS makes it far more difficult to obtain highly addictive prescription medication for illicit purposes without placing any undue burden on patients with legitimate needs.

Now that EPCS is allowed by the DEA, providers can choose to include controlled substances as part of their equation for Meaningful Use, as long as the decision applies to all patients and for the entire reporting period. With an EPCS system in place, healthcare providers and organizations can more easily meet Meaningful Use Stage 2 requirements for e-prescribing while also realizing all of the additional benefits of EPCS. 

David Ting is founder and chief technology officer of Imprivata of Lexington, MA.

Readers Write: The Key to Transitioning from PQRS to Risk-Sharing Agreements

September 29, 2014 Readers Write No Comments

The Key to Transitioning from PQRS to Risk-Sharing Agreements
By Mason Beard


If you, Dr. X, report on quality for your Medicare patients, you’ll get a nice bonus. That’s how PQRS started out—a purely pay-for-reporting initiative.

The bar for this program was set fairly low to encourage providers to meet the requirements. But in its crafty way, the federal government has steadily shifted the program away from the carrot and toward the stick. In fact, the incentive phase of the program ends next year. Providers who don’t measure up will simply experience the stick. In other words, the government has moved its focus from reporting to performance.

I don’t want to paint CMS as conniving to punish poorly performing providers. The truth is that PQRS has been a very successful program and is driving an important focus on the quality of care delivered to Medicare beneficiaries. Another quite evident truth is that CMS is not stopping here.

CMS isn’t just creating government programs and regulations; they’re trying to change provider behavior to rally around outcomes reporting and better care. They’re pushing providers inexorably toward value-based reimbursement (VBR). Reading the tea leaves of what’s happening with PQRS—and considering the proposed Merit-Based Incentive Payment System (MIPS)—the government is going all in on this.

Technology can help providers who are doing PQRS reporting prepare to move successfully into more sophisticated VBR arrangements. From the beginning of PQRS (PQRI at the time), it was evident that providers would need HIT tools to help them track, measure, and report on quality measures. PQRS has been around long enough that there are now a variety of tools providers can use to help them fulfill this requirement.

Not all of these tools can help providers meet PQRS requirements and transition to more sophisticated VBR arrangements using the same infrastructure. Make no mistake — such a transition is essential. To manage it successfully, organizations don’t need a point solution; they need a platform.

Here’s why. The new PQRS, the MIPS of the future, and other VBR arrangements don’t focus on reporting outcomes; they focus on improving outcomes. The only way organizations will be able to improve outcomes is by implementing what I call the 4 As:

  • Aggregation. Providers need to be able to gather clinical and administrative data from the disparate technologies across their system.
  • Analytics. Providers need some level of analytics to understand their population, identify gaps in care, and assess risk.
  • Action. Providers can’t just aggregate data and analyze it and then not do anything about it. They need some system in place to engage their patient population (via care management workflows, automated outreach, reminder letters, etc.) and fill gaps in care.
  • Accountability. They need to be able to prove the value back to the stakeholder. Simply put, this means reporting the outcomes for a variety of initiatives to CMS and other payers.

It’s important to note that PQRS point solutions only address the fourth A: accountability. (Even then, they may not have the flexibility to adapt to the various reporting initiatives that will be required by multiple payers as time goes on.) If a PQRS solution only addresses the fourth A, it can’t prepare an organization for risk. It doesn’t create processes that move the organization away from a fee-for-service world.

A platform, on the other hand, enables provider organizations to enter the value-based world. Performing PQRS reporting on a platform is the perfect starting point. As providers fulfill the PQRS reporting requirements, they can layer in processes that help them transition from a reporting workflow to a more proactive workflow focused on population health management. With the aggregated data and intelligence they build up around their performance in the process, they become equipped to enter into VBR arrangements with commercial payers.

A platform delivers an easy, turnkey way to branch out from PQRS to address other, more sophisticated payer initiatives. The time to plan for this transition is now because the stakes are rising. Every plan—both government and commercial—is developing some kind of risk- or performance-based initiative. With a platform, providers don’t have to take the plunge immediately. They can first dip their toes in the waters of PQRS and then move steadily into a world of improved outcomes and value-based reimbursement.

Mason Beard is senior vice president of solutions and co-founder of Wellcentive of Alpharetta, GA.

Readers Write: EHR Divorce Rates on the Rise – Four Factors that Predict Electronic Health Record Adoption Success

September 29, 2014 Readers Write No Comments

EHR Divorce Rates on the Rise – Four Factors that Predict Electronic Health Record Adoption Success
By Heather Haugen, PhD


Despite healthy growth in the implementation of EHRs, the lack of effective adoption plans is impeding their intended purpose of helping healthcare providers improve care.

In 2013, nearly six in 10 hospitals had adopted at least a basic EHR system. But not all EHR users are happy with their purchase. In fact, 30 percent of hospital executives admit they are dissatisfied with their system, and 30 percent of current EHR solutions are replacements of another product.

Research reveals that a myopic focus on the go-live event is the root cause of low EHR adoption rates and increases the chances of organizations’ divorcing their EHR vendor. In contrast, those healthcare leaders who focused on the processes and discipline required to achieve adoption and maintain it over the long run were more likely to achieve the clinical and financial outcomes they expected from the EHR.

EHRs have the potential to improve both patient care and work efficiencies in delivering care, but these outcomes are only possible when clinicians adopt the best practices and workflow needed to continually improve how the system serves the organization.

The research published in “Beyond Implementation: A Prescription for Long-Term EHR Adoption” revealed four key factors that predict EHR adoption:

Engaged Clinician Leadership

Engaged clinician leadership is the most important predictor of successful EHR adoption. IT leaders are often given primary responsibility for the organization’s EHR system. While their skills and experience are necessary for the functionality of the EHR system, the input and expertise of nurses, physicians, pharmacists, and other clinical staff are essential to driving staff’s proper use of the EHR and improved clinical and financial outcomes.

Moreover, leaders of EHR adoption efforts need to be highly engaged through and beyond go-live. While this may seem like a given, competing priorities make it difficult to maintain the degree of engagement required after the go-live event. When comparing organizations with successful implementations and those who have become dissatisfied with their system, our research shows that engaged leaders:

  • are well informed and aligned in how they communicate the value of the EHR;
  • empower clinicians to make decisions about how the EHR should be implemented and used;
  • understand the degree of change required and set priorities appropriately; and
  • stay engaged for the life of the application.

Effective Training to Ensure Proficient Users

The way in which clinicians and users are trained impacts their level of proficiency. In healthcare, we often use traditional methods – one-time “training events” that occur at a certain time and place. The trainers focus on teaching the hundreds of features and functions available in the system over multiple days with the goal of reaching “mastery” by the session’s end. But this is an ineffective, insufficient, and unrealistic method.

Bill Rieger, CIO of Flagler Hospital (FL), originally thought that implementing his health system’s new EHR would include traditional, classroom-style training. This approach required training sessions to begin a full six months prior to go-live due to limited classroom space and a large clinical staff. By switching to scenario-driven simulations – a hands-on method – the hospital was able to begin the initial training program just six weeks prior to go-live, resulting in increased retention and a more successful launch.

Simulation-based training that focuses on helping users become proficient in new workflows and best practices results in dramatically better outcomes compared to traditional training and takes about half the time. This style emphasizes an accumulation of experience over time. It happens continuously in the specific work environment and leverages role-based content to provide a level of individualized fluency. Critical thinking skills and retention of content improve significantly when the goal is proficiency, in contrast to attending a more passive training event.

Measuring for Improvement

Defining metrics to track proficiency in EHR use and communicating them with clinicians is another critical step for adoption. Without it, improper use of the system is more likely to continue. Through a process of peer-to-peer auditing and regular progress reports, clinicians can track their performance and improve in necessary areas – ultimately enhancing patient care in the process.

In addition to providing feedback for clinicians, measurement can help optimize the EHR platform. For example, if simulation reports reveal that a large percentage of users click in the wrong area when completing a certain task, it would indicate a point of non-intuitive design. Armed with such data, the EHR vendor may be able to modify the system for improved use.

Adequate Resources and Prioritization Beyond Go-Live

A focus on the people, processes, and evaluations to improve adoption over the lifecycle of the application is required for long-term success, yet very little attention is typically paid to sustainment efforts.

Even when a new EHR is well accepted by clinicians and they become proficient in the application, adoption is a process that can never be finished for two reasons:

  • There will always be new clinicians and residents entering the healthcare organization. An organization with a successful EHR program will ensure that these individuals receive every bit of guidance and have the ability to be just as successful in their use of the EHR as those clinicians who had been present at the go-live event.
  • EHR systems will always be subject to upgrades and changes. While the changes are meant to enhance the system, they will do more harm than good if end users do not receive the appropriate level of guidance when being introduced to new workflows and processes. 

Too often, people who are recruited to work on EHR adoption efforts eventually revert to their previous roles and former projects, leaving the organization without the resources to account for this inevitable cycle of time and turnover. Flagler Hospital overcame this tyranny of time by keeping implementation committees in place and by focusing on long-term, ongoing education even through multiple EHR upgrades.

Moving from an EHR implementation focus to an EHR adoption focus requires a significant overhaul in how we think, how we lead, and how we behave. Now is the time for healthcare leaders to evaluate their organization’s performance in these four key areas that predict EHR adoption.

Heather Haugen, PhD is managing director and CEO at The Breakaway Group, A Xerox Company of Greenwood, CO, which recently delivered an HIStalk webinar on this topic that can be viewed as a YouTube replay.

Readers Write: The Consultant and the Investor Look at Cerner’s Acquisition of Siemens

September 24, 2014 Readers Write 2 Comments

The Consultant and the Investor Look at Cerner’s Acquisition of Siemens
By Lynn Vogel, PhD


The publication of the recent HIStalk interview with Marc Grossman and a post by Ben Rooks offer a rare opportunity to learn about the different perspectives of consultants and investors using the Cerner acquisition of Siemens as a case study.

Full disclosure: I’ve known Marc (the Consultant) for close to 20 years and I consider him to be one of the top HIT consultants in the business today. I don’t know Ben (the Investor) personally, but was so impressed with his discussion of the MModal acquisition several months ago that I started an email exchange and have deep respect for his understanding of the financial aspects of HIT.

But the perspectives of the Consultant and the Investor couldn’t be more different. Here are some noteworthy excerpts from the Marc Grossman interview and from Ben Rooks’ recent “From the Investors Chair.”

From the Consultant

  • A lot of it’s going to depend on where the Siemens client is.
  • I believe Cerner is buying Siemens for intellectual property. On the patient accounting side, I think they’re also looking at the RCO base that Siemens has, which is a great revenue stream for them.
  • Given Cerner’s history and the industry’s history over the last 20-30 years, Siemens Soarian and Invision product support is going to go downhill.
  • I think they probably won’t sunset it officially for at least 10 years, just because I know Siemens does have numerous contracts which are going out 10 years.
  • Like we’ve seen with many other vendors that purchased other systems, Cerner is clearly not going to put R&D money into two patient accounting systems and two clinical systems if they have an integrated system now.
  • I just don’t see any indication that Cerner is going to continue the development of any of the Soarian or Invision products.

From the Investor

  • Cerner is now the clear sector leader and will enjoy mammoth cross-selling opportunities given the product fit.
  • This was a good use of both the cash hoard Cerner had built up on its balance sheet and its high-multiple stock, allowing the deal to be almost instantly accretive – especially with the $175 million in pre-tax synergies the company guided to in its press release.
  • Cerner’s shares are up almost 10 percent as I’m writing this post, more than twice the S&P — Ms. Market seems to be more excited.
  • The vast majority of analyst commentary has been positive and we here at the Chair are fans of the purchase as well.
  • The only thing that gives me pause as a long time Cerner watcher (and fan) is that the company has zero history of large-scale M&A and the sector has not been kind to such large-scale bets in the past.
  • What’s especially noteworthy here, though, is that the cultures of the two companies are literally more than an ocean apart.
  • That said, the price Cerner paid clearly de-risks the acquisition, and Cerner is known for its strong culture.

The Consultant starts with the impact on the client. How the customer base responds will depend on where they are currently with their HIT implementations. Will customer support for Invision and Soarian go downhill now? Will any level of customer support for Siemens’ products last beyond 10 years? Will there be any R&D for Siemens products going forward? Finally, will Cerner “continue the development of any of the Soarian or Invision products”?

The Investor is looking at the Siemens acquisition from an almost purely financial perspective. Is this a good use of Cerner’s cash? What’s the impact on the stock price? What’s Cerner’s experience with large-scale acquisitions? How will the cultural challenges be addressed? In general, this looks like a “good deal.”

In some ways, the comparison of these two perspectives underscores one of the major challenges of HIT today. The investors are looking at the money, while the customers are looking for continued product development and ongoing support.

Unfortunately, the boards of most HIT companies are dominated by investors, with little input by those who understand either healthcare or information technology (Ben’s earlier analysis of the MModal situation is an excellent example). What’s missing, specifically?

  1. Looking under the covers. From an IT perspective, the Cerner acquisition of Siemens demonstrates again that acquisitions too often proceed without understanding any of the underlying IT challenges. The code bases are different, the database architectures are different, the standards for code libraries are different, etc. Recall the Allscripts acquisition of Eclipsys. A big selling point of that deal was that both products were based on Microsoft tools and architectures. We now know how that turned out. Siemens’ financial products are generally considered to be stronger than Cerner’s, but integrating disparate product suites is a challenge that has eluded almost every previous merger of HIT companies.
  2. Understanding how IT decisions are made by healthcare customers. Boards often have little understanding of the healthcare business and even less about how IT decisions are made in healthcare. It can be a long, slow, and often tortuous process (accelerated certainly by recent federal incentives) with lots of customer concern about long term support (note Marc’s observation that even lab systems typically last a decade or more). As a result, assumptions about how quickly financial returns can be generated are often way off the mark and the result is the demise of the acquired company.
  3. Leverage and financial returns dominate. Cerner is probably looking at the Siemens customer base almost as a captive audience, there for the picking, replacing their Siemens products with Cerner’s over time. We can only assume that Siemens reached the same conclusion about the SMS customer base at the time of that acquisition, and we know how that turned out. On the other hand, simply eliminating a competitor over time is a strategy that many companies both inside and outside healthcare have found to be successful. But taking Siemens out of the marketplace may also leave Epic in a much stronger position.

There are lots of discussions about whether the healthcare industry is really all that different from other industries. Even Drucker noted its extraordinary complexity. But when companies make decisions without a deep understanding—at the executive and board levels—of the technology and of what makes the healthcare industry unique, and worry more about the money than the customers (often not realizing that in the end it is the customers who provide the revenue), we have a better understanding of why the HIT business is so challenging and probably filled with more company failures than successes.

I would argue that one of the solutions here is more board-level input from experienced HIT professionals. Across the industry, we see company boards with investors and occasionally clinicians, but virtually no role for HIT professionals.

Full disclosure: I was elected to the board of Glytec, an HIStalk sponsor, due to my specific HIT experience from inside the industry. During the course of my term, I have learned an enormous amount from the investors and the clinicians on the board and from the feedback I have received, it seems that my contributions as an experienced HIT professional have been valued as well.

There is an enormous amount of HIT expertise available that companies could use, but seldom do. This includes former (occasionally retired) CIOs, CIOs who are able to serve on boards while continuing full-time IT responsibilities, and consultants, particularly those who have experienced HIT from inside healthcare organizations, etc. Smart HIT companies would do well to take advantage of this talent to contribute to the success of their business.

Lynn Vogel, PhD is a principal with LH Vogel Consulting, serves on the board of Glytec, is a member of Next Wave Health Advisors, and serves as a senior advisor to Sophic Alliance.

Readers Write: The Elephant in the Waiting Room: Healthcare Organizations Can No Longer Afford to Look the Other Way on Patient Pay

September 17, 2014 Readers Write 6 Comments

The Elephant in the Waiting Room: Healthcare Organizations Can No Longer Afford to Look the Other Way on Patient Pay
By Sean Biehle


In the past five years, patient payment responsibility has risen dramatically and continues to increase with the implementation of the Affordable Care Act. More people insured means more people who don’t understand their health insurance, and many of the plans on the healthcare exchanges are high-deductible plans. At the beginning of the year, Aetna CEO Mark Bertolini projected patient pay responsibility to climb to 50 percent of the healthcare dollar by the end of the decade.

The New Normal: High-Deductible Plans

Once considered a last-resort alternative for those with limited income, high-deductible (HDP) or “catastrophic” plans have gone Fortune 500. As a result, self-pay now includes many people who have insurance through HDPs.

  • A 2012 Rand research brief estimated that half of all workers on employer-sponsored health plans could be on high-deductible insurance within a decade.
  • The average deductible in employer-sponsored health plans was $1,100 in 2013, but deductibles in the healthcare exchanges average between $3,000 and $5,000.
  • A report released by S&P Capital IQ estimates that 90 percent of S&P 500 companies will shift their workers from employer-sponsored insurance plans to health exchange plans by 2020.

As more Americans are paying a greater proportion of their healthcare costs out of pocket, getting reimbursed for the patient pay segment could now be the most important number to a healthcare organization’s bottom line. Collecting from patients is estimated to cost up to three times more than collecting from payers. 

Focus on Education

Healthcare organizations should make it their mission to help patients understand their bills, educate them on payment options, and help them navigate any insurance issues. Seventy-five percent of patients say that understanding their out-of-pocket costs improves their ability to pay for healthcare.

Plus, the Hospital Value-Based Purchasing (VBP) portion of the Affordable Care Act returns higher Medicare reimbursements based on patient experience scores. The payment process is integral to the patient experience. Patients who don’t understand their bills, what they owe, and why they owe it tend to give lower scores on patient satisfaction surveys. In 2013, more hospitals were penalized than received bonuses, leaving millions on the table.

Create a Consumer-Focused Culture

Because patients are paying more, they are using social media and other online tools to shop around for physicians and hospitals that not only provide the best care, but also the best service. Service is more than having a good bedside manner. Service means providing frequent and transparent patient communications, especially as it relates to billing.

  • Emphasize patient satisfaction over collections.
  • Create a consumer-focused culture – align staff incentives with patient satisfaction.
  • Perform patient satisfaction surveys to identify potential problems before they escalate. These scores also help determine Medicare reimbursement rates.

Be There When and Where It’s Convenient for the Patient

Many patients work and must take time off to visit your office or facility. Don't make them take more time off to figure out their bills.

  • Offer extended call center hours, including open evenings and weekends, to optimize patient access.
  • Offer online payment platforms to provide 24/7 access for making payments, arranging payment plans, and viewing and updating demographic and insurance information.
  • Offer services in multiple languages so no patient gets left behind.

Make It Convenient and Easy for Patients to Pay

Connecting with patients in a meaningful way helps them understand the how and the why of their bills and eliminates confusion. Show patients how easy paying their bills can be.

When possible, consolidate payments and balances across the entire patient care continuum. This makes it easy for the patient to pay everything in one place and drastically simplifies the patient pay process.
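As a rough illustration, consolidation is just a roll-up of outstanding charges into a single balance per patient. The account names and amounts below are hypothetical:

```python
from collections import defaultdict

# Hypothetical charges from different points in the care continuum.
charges = [
    ("patient-17", "hospital", 1250.00),
    ("patient-17", "radiology", 310.50),
    ("patient-17", "physician-group", 85.00),
    ("patient-42", "hospital", 400.00),
]

def consolidated_balances(charges):
    """Roll every outstanding charge up into one balance per patient."""
    totals = defaultdict(float)
    for patient, _, amount in charges:
        totals[patient] += amount
    return dict(totals)

balances = consolidated_balances(charges)
```

In this sketch, patient-17 sees a single $1,645.50 balance instead of three separate bills from three separate billing offices.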

Provide multi-channel patient communications and payment options:

  • Point-of-service (POS) payment portals make it easy to collect balances at the time of service.
  • Automated phone/IVRS options enable payment over the phone.
  • Online payment processing for debit and credit cards and electronic checks provides 24/7 access for patient payments.

Additionally, a number of provider organizations have developed pricing transparency tools for consumers to access clear and easy-to-understand billing information.

Offer Payment Plans Upfront

Medical bills can be daunting and patients are far less inclined to pay on larger balances, especially over $400. However, informing patients of their payment options at the time of billing greatly increases the odds of getting paid.

Offer Incentives for Self Pay

Unlike insurance companies, patients don’t get to negotiate adjustments to what they are charged for a procedure. Sweetening the pot by offering payment incentives can greatly increase reimbursement and patient satisfaction.

Treat Patients with Dignity and Respect During the Billing Process

Patients aren’t just numbers. In fact, we’re all patients, so it’s easy to see how frustrating the absence of clear, reliable, and efficient patient billing communications can be. Healthcare is one of the last corners of American commerce in which the consumer lacks complete transparency into what they will owe before they incur the costs.

Until the continuum of patient communications can be fixed from the inside out, it’s imperative to treat each individual with the respect and dignity they deserve throughout the entire billing process. Help them avoid collections at all costs using the strategies above and show them that the care provided continues beyond the bedside.

Expected Results

When focused on patient education and satisfaction, physician groups and hospitals can expect stronger reimbursement on patient balances. Educated patients pay their bills. Satisfied patients translate to higher Medicare reimbursements. Many organizations have seen their reimbursement rates increase by more than 30 percent after adopting patient education and satisfaction programs.

Emphasizing customer service can also help verify insurance and uncover secondary or additional coverage, which can dramatically streamline the revenue cycle. Many organizations discover, after talking to their patients, additional insurance on accounts originally categorized as patient pay.

Lastly and perhaps most importantly, providing clarity of communications builds patient loyalty and increases trust over time. Patients who are highly satisfied with an organization’s billing process are twice as likely to return. Plus, over 80 percent of patients who are satisfied with their billing experience are likely to recommend an organization to their friends.

Sean Biehle is marketing manager for MedData of Brecksville, OH.

Readers Write: Protecting the Network with Endpoint Security

September 17, 2014 Readers Write No Comments

Protecting the Network with Endpoint Security
By Jeff Multz


CIOs are forever struggling to ensure that technology helps their businesses run efficiently and effectively and that their networks are protected. That’s a heavy undertaking for any business, but especially for healthcare organizations, as medical professionals rely on a bevy of computer devices (including their own). These devices have become prime targets for threat actors, who are increasingly attacking endpoints (laptops, workstations, and mobile devices) to break into the networks of healthcare and financial institutions.

Following a highly publicized attack on a US hospital group, the FBI recently issued an alert warning healthcare companies that they are being targeted by hackers.

"We are seeing an increase in attacks within healthcare," said Ann Patterson, senior vice president and program director of the Medical Identity Fraud Alliance. "The healthcare sector’s security and privacy controls differ from more secure industries, such as financial services, and [healthcare organizations] may be easier targets."

Why is healthcare so attractive to threat actors? A few reasons.

  • Nation states are after the intellectual property of medical equipment and pharmaceutical companies so they can copy their products and sell them more cheaply.
  • Threat actors are also after the personally identifiable information (PII) held by healthcare providers, which attackers use to open new credit card accounts under the names of patients. That PII includes a patient’s name, address, phone number, Social Security number, date of birth, and billing information.

Because it is often difficult to evade network detection devices such as firewalls and intrusion detection/prevention systems (IDS/IPS), attackers are going directly to the end user via phishing or watering hole attacks. The trusting souls who click the links or attachments inside these emails have no idea that malware is automatically downloaded when they do.

While there has been real innovation in protecting the network from outsiders, there has been a dearth of innovation in endpoint security technology. Since antivirus (AV) software is not very effective, it has become quite easy for attackers to infect endpoints. Endpoint defenses are still mostly malware-signature based, so threat actors run pre-attack tests to see which signatures are detected and which aren’t.

This ploy has worked so well that attackers sell their testing services to other attackers, running a service similar to that of VirusTotal, which scans malware for detection rates. However, unlike VirusTotal, the threat actors don’t share the results with AV vendors.

With about 200,000 new pieces of malware being created each day, according to Kaspersky Labs, and much of the malware being polymorphic, signature-based threat detection methods can’t keep up with the pace of new malware creation.
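To see why signatures fall behind, consider a minimal sketch of hash-based signature matching. This is a simplified stand-in for a real AV engine, and the payloads are invented; flipping a single byte in a polymorphic variant is enough to defeat an exact-match signature:

```python
import hashlib

# Hypothetical signature database: digests of known-bad payloads.
KNOWN_BAD_HASHES = set()

def signature(data: bytes) -> str:
    """A 'signature' here is just the SHA-256 digest of the payload."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(data: bytes) -> bool:
    """Signature-based detection: flag only exact digest matches."""
    return signature(data) in KNOWN_BAD_HASHES

# An AV vendor captures a sample and publishes its signature.
sample = b"malicious-payload-v1"
KNOWN_BAD_HASHES.add(signature(sample))

# A polymorphic variant changes a single byte on each new infection...
variant = b"malicious-payload-v2"
```

The original sample is caught, but the one-byte variant produces a completely different digest and sails past the check, which is exactly the gap the pre-attack testing services exploit.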

It’s hard to keep endpoints, especially personally owned endpoints, up to date with the latest patches. There are more applications than ever that people download onto their devices and all these applications have flaws, making them easy targets for attackers. Additionally, Web-based technologies are being designed so users can do anything over the Web using HTTP or HTTPS, which subverts perimeter-based controls and makes the Web an easy way to deliver malware.

With the Internet of Things (IoT) growing daily, the front line of attack has moved from servers to the endpoint. This year alone, IDC expects shipments of smart-connected devices (PCs, tablets, and smartphones) to surpass 1.7 billion units worldwide. Organizations are being attacked via their endpoints, yet have no idea they’ve been compromised.

The average time it takes organizations to discover they have been compromised is 229 days, and 69 percent of those discoveries are made by outside parties, such as federal authorities or private security companies.

An organization must be able to see all activity taking place on its endpoints so it can remove attackers as soon as they enter the network. The only way an organization can know whether it has been compromised is to continuously monitor the network and the endpoints, seeing what’s going on at the endpoint and tying that to what is going on across the network. Anomalous activity must be spotted as soon as it occurs.

An organization should be able to determine what happened when the affected system ran, with whom the system communicated, what changed on that system, what the lateral movement was, and what tools were used. Endpoint activities should be continuously collected and logged. That information should be fed into a system that takes an end-to-endpoint view of all that has occurred, providing full visibility into the network. Organizations can then use that information to adapt their infrastructure, user training, and applications to defend the network.
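A toy sketch of that idea: collect endpoint events fleet-wide, build a baseline, and flag processes rarely seen across the fleet. The hostnames, process names, and threshold below are all hypothetical, and a real system would baseline far richer telemetry:

```python
from collections import Counter

# Hypothetical endpoint event stream: (host, process, destination) tuples.
events = [
    ("ws-01", "outlook.exe", "mail.example.org"),
    ("ws-01", "chrome.exe", "intranet.example.org"),
    ("ws-02", "outlook.exe", "mail.example.org"),
    ("ws-02", "chrome.exe", "intranet.example.org"),
    ("ws-03", "svch0st.exe", "203.0.113.9"),  # look-alike binary, external IP
]

# Baseline: how often each process appears across the fleet.
baseline = Counter(proc for _, proc, _ in events)

def anomalies(events, baseline, threshold=1):
    """Flag events whose process is seen at or below the threshold fleet-wide."""
    return [e for e in events if baseline[e[1]] <= threshold]

flagged = anomalies(events, baseline)
```

Here the misspelled `svch0st.exe` talking to an external address stands out precisely because it appears on only one workstation, which is the kind of end-to-endpoint correlation the paragraph above describes.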

As soon as anomalous activity is spotted, an investigation should be initiated. If the investigation reveals that an endpoint was compromised, the system can provide a blueprint of all activity that has occurred, and all activity as it is occurring, so the threat can be contained as quickly as possible.

The 2014 SANS Health Care Cyberthreat Report found that endpoint devices pose challenges not only for securing them and the networks they connect to, but also for recovering from an incident. Continuously scanning connected endpoint devices can tell an organization exactly where an infection is hiding and how to remediate it. Infected machines can often be remediated without being wiped or re-imaged, avoiding the inadvertent data loss a wipe can cause.

Workstations are critical attack vectors, and organizations with a multitude of high-value endpoint devices must always be on high alert for attacks. For now, there is only one way to do that. Gartner calls the solution Endpoint Threat Detection & Response, also known as Advanced Endpoint Threat Detection. It should be mandatory for any organization that needs to protect its business.

Jeff Multz is director of North America Midmarket for Dell SecureWorks of Atlanta, GA.

Readers Write: The Engaged Patient – Are They Really?

September 12, 2014 Readers Write 8 Comments

The Engaged Patient – Are They Really?
By Helen Figge


Sorry to be the bearer of mediocre news, but despite the growing conversation around the value of engaging patients in their own healthcare, the term “patient engagement” remains a flavor-of-the-month healthcare buzz phrase.

Many seem confused by what “patient engagement” means. It lacks a standardized approach to its interventional aspects, or, put more simply, rules of engagement.

The major thrust for patient engagement legitimacy comes in large part from health insurers increasingly rewarding providers for services that improve a patient’s health and wellbeing. Likewise, the expectation that engaging the patient will reduce the utilization of healthcare resources plays into this concept. Finally, healthcare providers were vocal about the 10 percent patient engagement threshold originally mandated in Stage 2 of Meaningful Use, and these “squeaky wheels” won a reduction to 5 percent.

The case for engaging the patient appears evident: investing in the healthcare consumers who utilize our healthcare resources (you and me), and in turn creating healthier people, is the overarching goal of better health, which in turn should lower healthcare costs. From this point of view, “investing” in consumers of healthcare and helping them become more effective partners in their own care makes good practical sense, right?

One would think and hope so. Based on several research sources, it is indeed possible to meet the requirements of these patient initiatives through various technologies on the market today, such as the patient portal, yet only a small percentage of providers currently support these efforts.

The basic question is how we engage patients to want to stay in control of their own health trajectory. What motivates, stimulates, and excites someone to take and keep control of his or her own health destiny?

This is where the question goes awry, because consistent participation is quite low: fewer than 5 percent of consumers are consistently engaged, if at all, in their healthcare. Many practitioners are finding out that each and every one of us is motivated by something different when it comes to our own healthcare.

My dad was a great example of a non-compliant chronic disease sufferer who, when he felt better, stopped taking his meds. Only when his blood glucose readings were hooked up to his senior citizen daily calendar for dating (he was 87) did he remember to record his blood sugar readings for his care coordinator. One could say my dad’s health was directly stimulated by his desire to see which eligible senior citizen lady friend was going to the senior center that night for bingo.

For any patient engagement opportunity to be successful, each engagement might have to be customizable at each step in the care process, creating a meaningful role for patients and their families and tailored to help patients acquire the knowledge and skills they need to manage their health effectively and consistently.

We also need to realize that some patients are not prepared to take on any type of role in their healthcare and might not be able to cope with their various illnesses regardless of the enticement. This is oftentimes a concern with those suffering from chronic diseases, where they will need to engage for the duration of their lives to keep and maintain their health.

I equate this type of patient engagement to eating your favorite food every day until boredom sets in. Your favorite food loses its luster. You stop eating it and substitute another. When patients are unable to manage these often complex tasks, the result is less control over their health and well-being and ultimately higher healthcare and human costs.

If patient engagement is to hit the numbers we hope it will, it is important to tailor both the care and the instructions that support that care. In healthcare, we tend to provide the same amount of support regardless of the patient population or skill set at hand. We always try to standardize approaches, which 99 percent of the time is great, but patient engagement is the 1 percent where it just can’t be done. This is the reason for the low patient engagement numbers we are seeing firsthand today. Each patient needs to be motivated in his or her own way to achieve the empowerment needed for successful personal intervention.

Finally, another point to consider when trying to motivate a patient to “engage” in their own care: it cannot be monetarily based. Patients are not motivated by financial incentives, direct or otherwise, to make long-term behavior change. It is documented that highly engaged patients with the right skills and knowledge respond better to the monetary upside of engaging in their healthcare, while less enthusiastic patients accept defeat more easily, accept their disease states and their sequelae regardless of intervention, and assume that any increased cost incurred by the disease is inevitable.

So when considering patient engagement, consider the patient first and foremost, because patient engagement is based on the patient’s active and sustained participation in managing their health. It is a marathon, not a sprint. Only through this mechanism will we see better health outcomes.

Proactive action to change and maintain our health through productive health behaviors is the mainstay of the effort. At its center is the concept of taking an active role in our own health and healthcare. It can be measured objectively using tools like the Patient Activation Measure (PAM), which helps identify a patient’s engagement level and can be used as a tool for improving activation for health and wellness, although I’m not sure how helpful it is right now given the lower-than-expected statistics on patient engagement overall.

The evidence suggests that increasing patients’ engagement in their own health trajectory can have an impact on controlling costs and helping them become healthier, living longer with fewer complications. The problem is that no one has come up with a standardized approach for engaging a patient for long-term success against any disease.

Maybe we need to interview each patient and see what drives him or her to wake up each morning. For my 87-year-old dad, it was trying to find a date for bingo night at the senior citizen center. Only after he recorded his blood glucose reading did the senior center screen pop up. Maybe we need to do something like this for each and every patient.

Helen Figge, PharmD, MBA is VP of clinical integrations of Alere Accountable Care Solutions.

Readers Write: State-Based Health Insurance Exchanges

September 12, 2014 Readers Write No Comments

State-Based Health Insurance Exchanges
By Jason Deck


I was invited recently to join a forum at Northwestern University to discuss the state-based health insurance exchanges (HIX). It included leaders of the state exchanges, legislators, consultants, insurance industry executives, and physicians. Topics included policy discussion, pricing and transparency issues, and growth plans. 

I came away with one resounding thought: there are an awful lot of very smart people working tirelessly on the challenge of ensuring that all Americans have affordable access to healthcare. It is inspiring to witness.

Christine Ferguson, director of the Rhode Island exchange, made a powerful statement: “Nothing like this has ever been attempted before. Ever.” She was referring to the task of overhauling the extremely complicated incumbent system of healthcare delivery by bringing together public and private sector interests, policy making, technology, and care providers. It is a daunting task.

How they did it depended on the state, but all had to have some version of a working exchange in place and functional by October 1, 2013. Some states chose to build their own, while others partnered with the federal government’s Healthcare.gov. It was not a perfect process, but given the complexities and the timeframe under which these states were operating, I will posit that the result was a success.

Much of the discussion in our forum revolved around the way forward. Three key themes emerged:

  1. Integrated eligibility platforms
  2. Consumer outreach and education
  3. Financial viability

A key tenet of the Affordable Care Act is the insurance subsidy offered to individuals and families below 400 percent of the poverty limit. When individuals go to their state’s HIX to shop for and purchase an insurance policy, one of the early and important steps is to determine whether they qualify for a subsidy. Their eligibility is determined in part by first confirming the individual is not eligible for Medicaid. 

The insurance exchanges and Medicaid applications are integrated in only a few states, so elsewhere applicants must register with the HIX, then go to Medicaid and apply specifically to be denied. They then receive a denial number to bring back to the HIX to continue their eligibility application.

Confused? Everyone is. Integrating these systems will deliver a quantum leap in the end user experience and ease of registration.
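In rough pseudologic, an integrated platform collapses the two applications into a single eligibility check. The cutoffs below are simplified placeholders (not actual eligibility rules), and income is expressed as a percent of the federal poverty level (FPL):

```python
# Hypothetical sketch of an integrated eligibility screen.
# Cutoffs are illustrative placeholders, not real program rules.
MEDICAID_CUTOFF_FPL = 138   # assumed Medicaid income limit, percent of FPL
SUBSIDY_CUTOFF_FPL = 400    # ACA subsidy income limit, percent of FPL

def medicaid_eligible(income_fpl_pct: float) -> bool:
    """Simplified Medicaid screen: income at or below the cutoff."""
    return income_fpl_pct <= MEDICAID_CUTOFF_FPL

def subsidy_eligible(income_fpl_pct: float) -> bool:
    """One integrated check: subsidy applies below 400 percent of FPL,
    and only if the applicant is not routed to Medicaid first."""
    if medicaid_eligible(income_fpl_pct):
        return False  # routed straight to Medicaid, no denial-number round trip
    return income_fpl_pct < SUBSIDY_CUTOFF_FPL
```

In this sketch, an applicant at 250 percent of FPL qualifies for a subsidy, while one at 100 percent is routed to Medicaid directly, with no paper denial to carry back to the exchange.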

Which brings us to the second key initiative: consumer outreach and education.

The HIX executives who spoke at the forum agreed that their exchanges had an early wave of low-hanging fruit to enroll, but that they were quickly (and happily) working their way through that population. The next frontier is more difficult: small business owners and individuals with some resistance to buying health insurance.

To that end, the exchanges invest heavily in community outreach with sophisticated marketing programs, local offices, advertising, branding campaigns, etc. The results are promising. There was a productive conversation about which techniques are delivering the best results in new enrollment. Without exception, every HIX executive to whom I spoke named education and outreach as a top-of-mind concern.

On a related note, I was pleasantly surprised at the general tone the HIX leadership uses with regard to their constituents. They talk about improving the quality of the products and service, ensuring that call center wait times are kept short, and agents are trained to educate the customer. Many of these exchanges are organized under a government agency, but their leadership sure talks like private sector business leaders.

The federal government invested funds to build these exchanges, but not for their ongoing operations (there were more than a few jokes about the feds serving as the VC investor to the exchanges). Jokes aside, the need to become financially solvent is a real issue, and the audience proved creative in their approaches.

Many great ideas were exchanged, including user fees, advertising strategies, insurer assessments, and excise taxes. While each state will ultimately land on its own model for P&L management, the normal issues of operating costs, well-forecasted growth, and disciplined budgeting will become increasingly important to HIX executives as they move from launch into steady state.

The forum provided a great opportunity for many stakeholders in the development of state-based health insurance exchanges to discuss their progress, lessons learned, and ideas for the future. There will be bumps in the road to be sure, but we all have much to be excited about as the evolution to a more efficient and transparent health insurance ecosystem continues.

Jason Deck is vice president of strategic development of Logicworks.

Readers Write: Lessons Learned from the CHS Breach

September 3, 2014 Readers Write 2 Comments

Lessons Learned from the CHS Breach
By John Gomez

In early 2014, a group of security researchers began to suspect that some implementations of SSL — a commonly used method to encrypt data — were not as secure as the name would imply. Their thesis was rather elegant, actually more art than science, but fascinating just the same.

They hypothesized that although the cryptographic algorithms may well be secure and protect over-the-wire data (data sent across a network) from prying eyes, the actual programming used to implement the algorithms may have flaws. If there was a flaw in the underlying implementation, such as in how memory is managed, then SSL itself could become a tool for nefarious agents to exploit and compromise network security.

On April 1, 2014, security researchers working independently (Neel Mehta of Google and a team at Codenomicon) announced that such a flaw did exist in SSL, specifically in OpenSSL. This vulnerability came to be known as Heartbleed.
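Conceptually, the bug let a client ask the server to echo back more bytes than the client had actually sent, with no bounds check on the claimed length. A toy model of that over-read (not OpenSSL's actual code; the memory contents are invented):

```python
# Simulated server memory: the 2-byte heartbeat payload "hi" sits at
# offset 10, with unrelated secrets stored in adjacent memory.
server_memory = bytearray(b"HEARTBEAT:hi|secret-session-key|patient-record-123")

def heartbeat(payload_offset: int, payload_len: int, claimed_len: int) -> bytes:
    """Vulnerable handler: echoes back `claimed_len` bytes, trusting the
    attacker-supplied length instead of the payload's real size."""
    # The missing fix is a check like: claimed_len = min(claimed_len, payload_len)
    return bytes(server_memory[payload_offset : payload_offset + claimed_len])

# Honest request: echo back the 2-byte payload.
echo = heartbeat(10, 2, 2)

# Malicious request: claim 40 bytes and read adjacent memory.
leak = heartbeat(10, 2, 40)
```

The honest request returns only `b"hi"`, while the malicious one drags the neighboring secrets along with it, which is why Heartbleed could expose keys and session data without leaving a trace.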

Within hours of the vulnerability being announced, sites around the world were compromised, including the Canada Revenue Agency, Mumsnet in the UK, and others. Early estimates showed that well over a million sites and X.509 certificates were at risk of attack. On April 12, 2014, the University of Michigan reported that a server in China had attacked a decoy server at U of M with advanced tools exploiting the Heartbleed vulnerability.

The revelation of the Heartbleed impact created shock waves. Some, like the Electronic Frontier Foundation, called it “catastrophic,” and Forbes columnist Joseph Steinberg declared, “Some might argue that [Heartbleed] is the worst vulnerability found (at least in terms of its potential impact) since commercial traffic began to flow on the Internet.”

Within days of the disclosure, the Federal Bureau of Investigation released a private industry notice (or PIN) to the healthcare industry that stated, “The healthcare industry is not as resilient to cyberintrusions compared to the financial and retail sectors, therefore the possibility of increased cyberintrusions is likely.”

Flash back to February 2014, when a group of hackers known as Unit 61398 was suspected of launching cyberattacks against a variety of US industries, specifically the financial, transportation, energy, and healthcare sectors. Unit 61398 is believed to be, according to cybersecurity firm Mandiant, a top-secret unit of the People’s Liberation Army based in Shanghai.

Since February 2014, it has been learned that Unit 61398 is not specifically tasked with cyberattack missions, but it is believed to have developed highly sophisticated software and hardware tools that could be used for cyberwar, typically known as cybermunitions. Speculation is that these tools are made available to independent hacker groups for “testing purposes only,” although this has never been confirmed.

One such group believed to have gained access to these tools is APT 18, a well known and highly sophisticated group of Chinese hackers with branches in Shanghai, Hong Kong, Singapore, and the United States. APT is shorthand for a type of cyberattack known as Advanced Persistent Threat. APT 18 specializes in conducting those attacks.

It is believed that within hours of the Heartbleed disclosure on April 1, APT 18 started customizing the tools from Unit 61398. One tool they possibly created is a Remote Access Tool (or RAT). A RAT works by using a carrier to gain access to network systems, usually by rather simple means. For example, a RAT can be deployed inside a network as a result of a user watching a video, reading an e-mail, or opening a file.

A highly common way of distributing a RAT is through a trusted third-party communication, which is typical in exchanges between business associates and covered entities in healthcare. A RAT could also be deployed to a medical device with a vulnerable call-home feature and network access.

The RAT allows remote control of a network, servers, devices, and much more. Just like a real rat, a cyber-RAT is infectious and can cause severe damage. The current thinking is that APT 18 targeted Community Health Systems (CHS) and successfully introduced a RAT before CHS could apply the Heartbleed patches to all of its systems. This is speculation, but highly probable.

It is also probable that APT 18 was successful because it had been targeting the healthcare industry since February 2014; Heartbleed was simply a fortunate development for the attackers. It is also believed that CHS is not the only targeted healthcare entity and that APT 18 may have compromised other healthcare organizations that have not yet discovered the compromise. APT 18 may have used other vulnerabilities to infiltrate the CHS system, but for purposes of this article, we will continue to embrace the common thinking that Heartbleed was the key mechanism.

Criticizing CHS would be wrong. It acted quickly, and there’s no evidence that it was negligent or dismissive. A better use of our time as an industry would be to learn from the CHS experience. The healthcare information technology sector is under attack by sophisticated enemies who will persist in their attacks on healthcare infrastructure as a means to undermine patient confidence in our ability to provide quality care and security.

We should be thankful that the CHS breach was limited to data because a RAT can take over an MRI, CT scanner, or EMR system to impact patient safety. Other cybersecurity researchers have demonstrated how to attack X-ray machines and other medical devices. The risk of attack on medical devices prompted the FDA to issue a memorandum on security to medical device manufacturers in June 2013. Although some manufacturers have responded to the memo in a positive manner, some have ignored its warning.

The most important lesson we can take away from the CHS breach is that we as an industry, to echo the FBI PIN, are “…not as resilient as other industries.” Which leaves us with the question: how do we improve our security stance and become more resilient?

Security takes money and a lot of it. There is no way to sugarcoat that fact or to make it more politically correct. NBC News recently reported that the annual cost of healthcare breaches is approximately $5.9 billion. Being secure means educating the board of directors and making it a core investment of the healthcare organization. There is no cheap answer or strategy.

Then, consider how to become aggressive about cybersecurity. Not assertive, but aggressive. Here’s an analogy.

Think of a healthcare system as a castle. Castles had multiple layers of security — intelligence, physical deterrence, internal and external defensive tools and strategies, propaganda, community allegiance, and, “Oh, crap, everything has failed” plans.

The safest castles — the ones that truly focused on protecting their inhabitants, allowing them to pursue a happy and high quality life — had the best layers of coordinated defense and offense. The castles that simply deployed the basics — a moat, drawbridge, some pots of tar, and maybe a few archers — soon learned that a persistent and determined attacker, like APT 18 or others like them, would eventually defeat these strategies.

In today’s terms, that means if you have firewalls, intrusion detection, penetration testing, DLP and similar tools, and policies and procedures, you either have been breached or you will be breached, just like the simpleton castle that did only the basics. A Level III castle.

If you take things up a notch, maybe employ a CISO, get advanced tools, and offer community education and compliance monitoring, you’re on the right track. Still, the odds are that you will get taken out. Your castle is a bit more sophisticated as a Level II castle. You added some alligators to the moat, armed the citizens, and took survival a bit more seriously. A good job, but you could do better. You are assertive, not aggressive.

The best castles invest in leading edge tools, form regional security councils to share ideas and help each other, create crisis response plans, educate their business associates, and use tools for real-time compliance monitoring, data discovery, classification and categorization, and locking down medical and mobile devices. This is a Level I castle. Just like in medieval times, it has not only strong external defenses, but also internal mazes, secret passages, trap doors, nightingale alarms, and remote forces that can respond at a moment’s notice to surround the enemy.

It’s true that someone can get into even a Level I castle, but a Level I castle will survive longer than a Level II or III castle. In fact, the odds are that a Level I castle will repel attacks and still be standing after an APT or coordinated persistent attack.

If you had to put your family and loved ones in a castle that was going to be attacked, you would choose the Level I castle. You would do anything to safeguard the lives of those you love. In this day and age and within our industry, cybersecurity is not about privacy any longer. It is about safeguarding patient lives.

It doesn’t matter how the CHS attack happened. It is a wake-up call. Vendors, providers, and allied health entities need to build a Level I castle because they are at risk of coordinated and focused attacks. APT 18 is just one of hundreds of organized entities and thousands of independent attackers who are targeting healthcare and your castle.

To give you an example of how the stakes have been raised, ISIS (yes, the Middle East terror group) has several hundred computer programmers and hackers on its payroll. Take a few moments to consider the damage a group like ISIS could do to your castle. Some of those attackers will be happy just taking data, while others won’t be happy until they take a patient’s life.

CHS has shown that life for all of us in healthcare information technology has changed. The only remaining question is, whose castle will be next?

John Gomez is CEO of Sensato of Asbury Park, NJ.

Readers Write: Lessons on How to Survive in Healthcare

August 14, 2014 Readers Write 3 Comments

Lessons on How to Survive in Healthcare
By Nick van Terheyden, MD


From Samsung to Google to salesforce.com, the flurry of tech companies making a healthcare play over the past few months has left me both excited and dismayed. Excited because these companies have, in their own ways, revolutionized the way people interact with technology. Dismayed because of the steep hill they must climb and their battle to truly make their mark in the healthcare space.

We’ve seen it before: tech companies dipping their toes in the water, then jumping back when they start sinking ankle-deep and losing their footing. From my 25+ years sitting at the intersection of medicine, technology, and policy, here’s my advice to the tech giants looking to make their mark in healthcare.

  • Get out of your comfort zone and consider the clinician. One of the biggest misses for tech companies entering healthcare is that they expect patients to drive the revolution. That’s where they’re comfortable – with consumers. But so much happens on the clinical data side that needs to be factored in, and data needs to flow both ways. Even more importantly, doctors and nurses are drowning in a morass of technology and data that in many ways is hindering their ability to do their jobs effectively and with the passion they had when they entered the field. Add the fact that working with and interpreting information gathered by a clinician about patients is neither a pure art nor a pure science, which makes it hard to handle consistently. While a patient app, sensor, or portal is nice, any company entering healthcare needs to pay as much attention to the clinician as to the patient.
  • Build trust. We’re not making widgets. Google can’t mine healthcare data the way it mines ads and shopping data. It’s one of the major reasons they’re feeling the pain — it doesn’t fit into their core business. Healthcare data comes with all sorts of security and regulatory challenges, but even more important is that the healthcare consumer is a different kind of consumer who implicitly trusts their healthcare professional. They are already wary of ads targeted to their own needs – layer in data about their prostate exam and it becomes even more personal and they’re on the defensive. People interacting with the healthcare system are typically vulnerable, stressed, and sometimes scared. They need to trust their sources. Companies like Apple and industries like banking have built enormous trust with consumers, but replicating that in healthcare requires a different approach.
  • Stop looking for standards and stop holding data hostage. For these companies to be successful, they need to learn to operate outside the world of data standards. Google was wildly successful moving into email because IMAP and the Simple Mail Transfer Protocol (SMTP) made it easy. There’s no such advantage in healthcare. There are so many variations of standards – from Health Level Seven (HL7) to Clinical Document Architecture (CDA) to the Continuity of Care Record (CCR) and Digital Imaging and Communications in Medicine (DICOM) – that even when they do exist, they’re insufficient for sharing. But there may be an opportunity for Google or another company to create a new standard and have it take off. While Google is good at navigating and working with large amounts of data (e.g., Google Maps is constantly updating itself to have the most accurate information), the truth is that patients are ultimately going to own their healthcare data. For anything to change and for progress to be made, it all needs to be easily shared. How companies can turn a profit from shared data remains to be seen.

The more innovation in healthcare, the better for all of us. We need it more than ever. But any new entrant into the space needs a little Healthcare 101 to be successful and to make a difference in the lives of patients, clinicians, and their caregivers. 

Nick van Terheyden, MD is chief medical information officer of Nuance Communications of Burlington, MA.
