
Advisory Panel: Data Breaches

April 15, 2013

The HIStalk Advisory Panel is a group of hospital CIOs, hospital CMIOs, practicing physicians, and a few vendor executives who have volunteered to provide their thoughts on topical industry issues. I’ll seek their input every month or so on important news developments and also ask the non-vendor members about their recent experience with vendors. E-mail me to suggest an issue for their consideration.

If you work for a hospital or practice, you are welcome to join the panel. I am grateful to the HIStalk Advisory Panel members for their help in making HIStalk better.

The question this time: Has your facility learned lessons from an attempted or actual data breach? Describe your major concerns and what actions you’ve taken.


Some of our breaches have been the result of thefts of computers and storage media that contained unencrypted PHI. We have since encrypted everything we can identify that contains PHI and have instituted mandatory training on protecting PHI and on incident response. While we continue to suffer equipment losses from theft, the equipment we lose is now encrypted, so the losses do not entail a breach of PHI.


We have been lucky as we have not had an attempted or actual breach. My biggest concern is the "innocent" breach — the resident who manages to copy PHI to a jump drive or cloud drive like Dropbox. We’ve either encrypted or virtualized all of our laptops, so the USB ports can’t be used for this purpose. But clever people can always find a way to defeat our measures. One resident with a smartphone and an Evernote account can do lots of damage.


We lost a home care worker’s tablet (stolen from her car even though the policy is to keep it with you at all times) and we were concerned about the status of the encryption on this department’s devices. The tablet was retrieved quickly and we did determine that the encryption was on and that no PHI was accessed. We then did a complete inventory of our mobile devices and added a new encryption product to ensure we did not have an issue in any of our settings.


Our facility has not identified any major data breaches. We have had violations where individuals have inappropriately accessed PHI. On the one hand, I find it frustrating that some people still take a casual attitude towards HIPAA privacy and security when they should know better. On the other hand, it shows me that we still have much education to do.


We’ve worked through a couple of breach scenarios – including what thankfully turned out to be a drill.  Some of our key responses included:

  • Escalating the priority and completing a system-wide encryption process
  • Updating our BAA to ensure our business associates are taking encryption steps
  • Changing policies for how consultants and vendors work with our data (like for conversions and analysis we need)
  • Focusing overall on eliminating the risk, so that when or if a device is lost or stolen, it’s not a breach.
  • Requiring our business associates to assume all liability for a PHI breach they cause. This can be an interesting negotiation point: I find myself regularly pointing out that we don’t see it as an effective partnership when we are legally required to carry unlimited liability for a breach while the vendor partner who caused the issue has a contractual cap on its liability.

As we work with vendors, it becomes obvious who has either been through a breach or seriously thought through the scenarios. Some of them apparently don’t understand how big a deal a breach can be, as either a PR problem or a financial one.

I’m somewhat hopeful the new HIPAA guidelines will help address vendor awareness and accountability for a breach they cause.


We have not had traditional breaches. The much bigger issue has been legitimate employees doing illegal things, like calling in narcotic or other prescriptions for themselves or their friends. Not surprisingly, they are more likely to do this via phone call than via ePrescribing, due to both the tracking mechanisms and the current inability to send narcotics that way. It still boggles my mind that a pharmacy will accept a narcotic Rx via voice mail from anyone claiming to be a doctor’s assistant, but won’t accept an authorized eRx! If the FDA wants to minimize illegal narcotic prescriptions, it should ban printed and voice prescriptions and insist that they be done ONLY electronically. They literally have it backwards!


We had a potential breach. On investigation, we found no PHI was compromised. However, we were just lucky. The cell phone number of a new physician’s assistant was entered incorrectly into a call list and non-secure text messages were sent to the incorrect number. Luckily no PHI was included and the recipient notified us pretty quickly. We have subsequently identified a secure messaging platform and will be offering it to all community providers at no cost to the providers and requiring all employed providers to use it. In addition, we have used this as a specific example of the problems with insecure messaging in general to raise awareness.


While a secure perimeter is still important, you have to accept that bad guys are eventually going to get past it. One example: we have seen a sharp rise in spear phishing attacks. Each month we receive thousands of phishing messages that are becoming more polished and sophisticated. It only takes one slipping through to potentially create a breach. As a result, and as a lesson learned, we are focusing more on monitoring internal data traffic and, importantly, its patterns. The idea is that if our network is compromised, we want to identify it and take corrective action as quickly as possible.
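To make the "watch the patterns" idea concrete, here is a minimal sketch that compares each host's latest outbound traffic to its own recent baseline and flags large deviations. The host names and numbers are hypothetical and this is an illustration rather than the panelist's actual tooling; a real deployment would feed it from flow records, firewall logs, or a SIEM.

```python
# Minimal sketch: flag internal hosts whose latest outbound volume is far above
# their own recent baseline. Hypothetical data; feed from NetFlow/SIEM in practice.
from statistics import mean, stdev

# Hypothetical outbound bytes per host: the last value is "today," the rest are history.
history = {
    "ws-0142": [120e6, 135e6, 110e6, 128e6, 131e6, 119e6, 125e6],
    "ws-0977": [80e6, 75e6, 82e6, 79e6, 81e6, 78e6, 4.2e9],  # sudden exfiltration-sized spike
}

def flag_anomalies(history, z_threshold=3.0):
    """Return hosts whose latest sample sits more than z_threshold deviations above baseline."""
    flagged = []
    for host, samples in history.items():
        baseline, today = samples[:-1], samples[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and (today - mu) / sigma > z_threshold:
            flagged.append((host, today, mu))
    return flagged

for host, today, mu in flag_anomalies(history):
    print(f"{host}: {today / 1e9:.1f} GB out today vs. ~{mu / 1e6:.0f} MB baseline -- investigate")
```

The specific statistic matters less than the habit: establish what normal internal traffic looks like, then alert quickly on departures from it.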


Not from any actual event here. However, we conduct an annual white-hat audit/hack to expose where we are weak in order to stay ahead of potential breaches. I am pretty confident you cannot prevent all of them, but you need to perform due diligence against what is known and do so on at least an annual basis. Because the security threats are ever changing, we may switch to twice a year, which our board and audit team like.


No data breach (thankfully) 🙂


No one ever — I mean ever — reports a laptop as stolen to the police. I think it’s the unwritten rule of HIT right now. You don’t want to be in the paper, so don’t file a public police report. It’s not like any government entity knew you owned that laptop and that it is no longer in inventory. Even if you use encryption on the laptops, it’s still just better not to have the press.

Other major concerns: the default database usernames and passwords for many of the McKesson Horizon products are still out there in production. Ccdev normally still has the same password, and what was said in 2009 is still true: changing the defaults makes for a whole hell of a mess to fix. There are also database fields that aren’t encrypted for personally identifiable information. Allscripts Enterprise makes no use of encrypted fields, at least not in how it’s implemented by their contractors. Same for McKesson: you get the database, you get the data, and there are some pretty easy Oracle exploits out there if you are going for HCI. You’d have to do a ton of research to know the server names, but most places don’t block people from plugging into their physical LAN via Network Access Control or other means, so it’s possible. The article this week about the security reckoning that is coming for HIT reminded me of all the easy ways to exploit system databases and installs.
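For readers wondering what "encrypted fields" would look like in practice, here is a minimal sketch of application-level field encryption using the Python cryptography package's Fernet recipe. It is an illustration only, not a description of how any vendor named above implements (or fails to implement) encryption, and real key management belongs in a key vault or HSM rather than in application code.

```python
# Minimal sketch of encrypting a sensitive column value before it ever reaches the
# database, assuming the "cryptography" package. Illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, loaded from a secrets manager or HSM
fernet = Fernet(key)

# Encrypt an identifying field before writing it to the database column...
ssn_plain = "123-45-6789"
ssn_stored = fernet.encrypt(ssn_plain.encode())   # opaque token is what gets persisted

# ...and decrypt it only on an authorized application path.
ssn_read = fernet.decrypt(ssn_stored).decode()
assert ssn_read == ssn_plain
```

With this approach, someone who copies the raw database (or its backing storage) gets ciphertext for the sensitive columns rather than usable identifiers.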


Yes, have a pre-packaged response plan and practice it regularly. The plan needs to cover your organizational reaction, your public response, and your technical response and forensics. Establish a relationship with an identity protection service. Establish a relationship with a hardcore forensics analysis service that can also run "white hat" attacks against your systems as a broader threat assessment. For the sake of optics, provide NAC background checks on all employees who could reasonably present a risk as an insider threat. And for God’s sake, encrypt every hard drive — desktop and laptop. Also, provide password-protected, encrypted thumb drives to employees. Put them in the cafeteria and hand them out like mints.


The only breaches that we have had are ones that would not have been preventable by any technology or policy efforts. One was a paper breach by someone who was taking records for her defense in a lawsuit, and the other was someone who compiled an Excel spreadsheet of research patients and sent it to an unsecured Gmail account. Both were actions by internal ‘bad actors’, so that is my biggest concern. We encrypt most everything possible here, even thumb drives, so the chances of a breach due to theft or negligence are pretty small.


A few years ago, we had a virus of the keystroke-logging variety. It basically infects the device, captures keystroke information, and sends the data to China, where a server attempts to assemble identity information from the keystroke data. Through some quick action by staff, we closed the perimeter before any packets of information were sent. At the time, I wasn’t too concerned, since we looked up the type of virus on our virus protection vendor’s website and it said "minimal risk" to corporate users. What I failed to understand is that "minimal" meant a minimal chance of getting the virus. Once you were infected, the risk went to "high."

The fun began at that moment. Luckily, the users were unaware of the virus, since no applications were affected. It was basically IS vs. the virus. By the time we started our remediation efforts, this bug had infected approximately 1,000 devices. Our virus protection vendor did not have a patch for this variant, so we were on our own for a while. We collected the packets created by the virus and sent them to the vendor. They quickly realized how nasty this virus was and dispatched an engineer to assist in remediation efforts. He arrived the following day. In the meantime, the virus was able to deduce that it was being thwarted by our efforts and immediately phoned home for instructions.

At this point, the virus mutated and we were now fighting two strains. We closed off the virus’s command and control link (port 80 for you geeks) and continued to remediate. After 24 hours, the vendor programmed the patch and eradication efforts accelerated. We realized at this point many of our newer PCs were not managed by the host virus protection software hub. They had virus protection, but it was out of date and could not be updated remotely. These devices (approximately 1,000 devices or 20 percent of total inventory) had to be identified, knocked off the network, and manually remediated. It took 20 minutes per device, so you can do the math.  We also had to contact all laptop users since many of those devices could have outdated virus protection. We set up a depot for laptop users to drop off and pick up. It was a very manual process. 

It took us a couple of weeks of concerted effort before we were out of the woods. I was up for 42 hours straight at one point and totally forgot what day it was and many of the names of my team. Fortunately, I didn’t have to drive home. One of our team members had just started that week (of course we blamed him). I found out later that during a break, he walked around the building, phoned his wife, and told her not to sell the house. Fortunately for us, she did, and he now oversees our infrastructure team. We heard a few weeks later that another healthcare facility contracted the same virus but did not discover it for a week. It took them over a month to eradicate the bug and they ended up in breach notification land.

From a lessons-learned perspective, we started with our virus protection. We made sure that every device was being managed by the central server, with updates going out daily to all devices. We also deployed Malwarebytes to all devices as a secondary precaution. We accelerated our recruitment of a CISO and built a centralized security team dedicated to protecting our assets. As of today, we have implemented many of the technologies needed in a strong security program. Under the leadership of the CISO, we have encrypted all mobile devices, e-mail, and flash media. We have implemented a Security Information and Event Management (SIEM) tool and Data Loss Prevention (DLP), and we will soon have an Intrusion Detection System (IDS). We have a top-notch security company on retainer. They perform audits, safe harbor workshops, penetration testing, and staff education; assist in remediation efforts; and help us stay up to date on HITECH security changes. Beyond a solid security program, we assume a breach is inevitable and have prepared in advance.

For my colleagues: I understand the cost associated with this type of program can be daunting, both capital and operating. Outsourcing should be considered for some of the areas (e.g., SIEM) to reduce cost. One of the reasons we are seeing so many breaches is the cost of implementing a solid security program, especially at smaller organizations. It’s tough to get the program through the budget process. It’s akin to waiting to see how many accidents happen at an intersection before a traffic light is installed. Usually it takes a fatal one. My suggestion to colleagues is to walk leadership through a mock breach event using real examples. I used an article from a local newspaper in California. The hospital explained the breach and what it was doing about it. In the article’s comment section, a reader wrote, "How can you take good care of me when you can’t take care of my health information?" Ouch! Also, besides the fine and ending up on the HHS website, the CEO typically apologizes to the community. That usually gets his or her attention.

Sorry for the long-winded response, but it is an area of interest and fascination for me.


Two stories. Our clinic system, hosted at the vendor’s data center, would automatically forward reports to key individuals on a daily basis. These were primarily statistical reports. Using the same approach, reports were designed to include patient information (today’s schedule, etc.). While this was "known," what wasn’t known was that the e-mail path from the data center to the clinics crossed e-mail domains, which meant that the reports were being sent unencrypted over the public Internet. The resolution was fairly simple, but it came as a fairly big surprise to us.

The second story: confidential data (a little of which was PHI) was on a phone that a disgruntled employee was slow in returning. Exposure was unknown (likely known), but the incident caused a change in our approach to how personal phones (vs. vendor-provided devices) should be used.


We have not yet experienced a data breach. We did, however, experience a recent virus attack, the first of any significance for this organization. There were lots of lessons learned about the adequacy of our backups and our response plan. Overall it was not a bad experience, though we have many things to correct.


To date, we have not experienced a data breach, but we have been trying to learn from the lessons of other healthcare organizations that have, in order to avoid their mistakes. Toward that end, we have made improvements in physical security and strong efforts to ensure device and portable media encryption.



