The HIStalk Advisory Panel is a group of hospital CIOs, hospital CMIOs, practicing physicians, and a few vendor executives who have volunteered to provide their thoughts on topical industry issues. I’ll seek their input every month or so on important news developments and also ask the non-vendor members about their recent experience with vendors. E-mail me to suggest an issue for their consideration.
If you work for a hospital or practice, you are welcome to join the panel. I am grateful to the HIStalk Advisory Panel members for their help in making HIStalk better.
This month’s questions involve actions taken in response to news of the recent hacking of Community Health Systems via the Heartbleed exploit.
What new actions or security reviews has news of the CHS breach caused in your organization?
Our COO has asked me to have a penetration test performed on our network. This level of attention is unprecedented. I owe the folks at CHS a thank-you gift for raising awareness amongst the rest of our executive team.
Asked my management team to review our systems again. I wasn’t positive the networking group reviewed their systems in April. I am now.
It’s a reminder that we must constantly scan our environment for vulnerabilities and remediate every exposure. We have decommissioned some hardware as a result of our Heartbleed assessment.
We reviewed our current IE-based connectivity, i.e., Cisco (far better than Juniper).
[from a vendor member] As a result of recent breaches such as Community and Sony, we are setting up IDS — intrusion detection — for our production environment. We are now getting daily reports on access activity from our prod environment, paying very close attention to foreign access attempts. We are also turning up our white hat vulnerability scanning of our code base before deploying to production. White hat is also doing proactive vulnerability testing in our prod environment. SQL injection and cross-site scripting vulnerabilities are specifically targeted. We are doing everything possible to be proactive to protect all client data under our care.
Gather details on the CHS breach. Ensure that we don’t have the same exposure. My understanding is that the Heartbleed vulnerability was unpatched on a VPN device (vendor omitted) and the device was configured for single-factor authentication only. From there, the attacker leveraged a known trojan backdoor to gain remote access to unpatched / unprotected Windows machines.
The news of the latest breach is pretty much part of the background noise, since there is a breach every couple of days.
We are implementing a data loss prevention product to help mitigate the risks.
No new technology, but increased education for our staff to remind them that security involves all users. We also presented our information security plan to our board, which met this week.
New actions, none. We had done sweeps using scripts to detect the Heartbleed SSL vulnerability on our publicly facing systems. We already have active security sweeps that detect Heartbleed vulnerabilities as well as any exploitation attempts.
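For readers who want to run a similar sweep, here is a minimal sketch of the idea in Python, assuming nmap and its ssl-heartbleed NSE script are installed; the host names and ports are placeholders rather than anyone’s actual inventory.

```python
# Minimal sketch of a Heartbleed sweep over public-facing hosts.
# Assumes nmap with the ssl-heartbleed NSE script is installed.
# The host list and ports are placeholders, not a real inventory.
import subprocess

HOSTS = ["vpn.example.org", "portal.example.org"]  # hypothetical targets
PORTS = "443,8443"

def heartbleed_vulnerable(host: str) -> bool:
    """Run nmap's ssl-heartbleed script and report whether it flags the host."""
    result = subprocess.run(
        ["nmap", "-p", PORTS, "--script", "ssl-heartbleed", host],
        capture_output=True, text=True, check=False,
    )
    return "VULNERABLE" in result.stdout

if __name__ == "__main__":
    for host in HOSTS:
        print(f"{host}: {'VULNERABLE' if heartbleed_vulnerable(host) else 'no finding'}")
```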
We are re-evaluating our ability to detect large outbound data flows.
It actually happened at a good time. We were in the midst of our annual security audit when the news broke. We had just received initial results which showed our security posture. Tying the breach to our posture and presenting to executive leadership and the board gave our security program immediate credibility.
We have been reviewing our policies for vendor-managed systems and will be setting a revised set of standards for all vendors to follow irrespective of whether they like it or not. We culturally and procedurally need to move away from the mentality of, “This is vendor managed, so we don’t touch it.”
No new actions or reviews. It has led to heightened organizational awareness.
No changes. We are already monitored by a third-party vendor and have security set around our perimeter.
We reviewed all access privileges and limited access for some users who previously had more global access. That creates more steps for those who still have global access because we are now asked to do things that others used to do when they had access to the data.
We have not changed anything since the CHS attack. We have not performed anything in addition to our current IT security assessment, which coincidentally is running right now.
[from a vendor member] No new actions. We are already pretty paranoid. As a vendor organization with large payer and provider data sets, we’d be in big trouble if we were breached.
We have re-examined our approach to Heartbleed, but recognize that all of our best efforts are sometimes not enough. We focused on remediation, but also on response should we have a problem.
Initial reports suggest that the Heartbleed exploit was involved. Are you confident that your network equipment software has been updated?
I am as confident as reasonably possible. We have outsourced most of our security monitoring to a third-party service and they have scanned and validated we are secure.
Yes. (two responses)
We are confident that our actions have corrected identified issues. This seems to be a “known unknowns” kind of situation where we know about some system components not managed by us that could be vulnerable. Vulnerability scanning continues.
Yes. We scan with Qualys monthly and before any new infrastructure is put onto the PRD network.
Yes. We have the same Juniper SSL VPN and applied the update soon after the exploit was identified.
When the Heartbleed exploit was publicized, we reviewed all our existing infrastructure and patched what we could. We continue to work with vendors to ensure that all needed patches have been installed.
Public Internet facing, yes, we are protected. There are a number of free or custom-scripted scanning engines to verify that. We’ve done it with QualysGuard on the big-name side, with custom scripts from our security team, and finally by pushing as many things as possible through our F5 load balancer, which was not as affected on the SSL offloading side. Internally, there are a ton of HTTPS/SSL security administration pages that still need updates, this many months on.
We initiated a remediation effort as soon as news of the Heartbleed vulnerability went public. While we feel pretty confident we have addressed the known vulnerability, we remain vigilant for suspicious activity.
We ran a test that showed that we only had one Heartbleed exposure, on a semi-retired system, which we fixed.
Not fully, as we are still completing our assessment, but we believe our plans will largely address this.
Confident yes. Certain, no.
I hope so. :) Not confident.
I am never confident that we have covered every possible point in keeping software up to date. There is always a chance we have missed something that will expose us to an exploit. Not that we accept vulnerabilities, but we are realistic about what we can and cannot protect.
[from a vendor member] We are pretty confident our network is up to date. It is amazing that, as a recently founded company (less than five years old) with a hosted "cloud" model, the equipment in our office is down to laptops, a switch, and one server for hardware experiments that is not hosting live data. Everything else is hosted and easy to control and evaluate. That is underappreciated in its effect on an organization’s efficiency and margins.
One of our staff reads Finnish blogs and we found out early. The patch was installed quickly.
We think so, but have chosen to take a more comprehensive look.
Would your network monitoring procedures detect unusual user behavior or large data transfers?
We are missing some components of a perimeter security solution (IDS/IPS for one). This event has escalated the discussion and we are now pursuing the purchase of products and services to fill in a few gaps.
Probably not. Our logs are so voluminous we can’t find the needles that are in the haystacks, let alone tie needles from multiple haystacks together.
Yes. We use intrusion detection and other monitoring techniques and have a 24×7 monitoring team to support detection.
Not really, but large data transfer is generally inhibited or not allowed.
Yes. DLP would detect/block any abnormalities at egress through the internet proxy.
No. We have to implement our data loss prevention solution before we can detect those.
We recently installed a new product from our core security vendor that looks for unusual traffic on our network and has the ability to block traffic or a workstation when it sees something unusual. We feel this new system will be critical in responding to events where no known malware or virus has been published.
[from a vendor member] We hope so. Our tests have picked up this kind of behavior, but frankly I’m always impressed at the ingenuity of software developers. It is what we pay them for, but since they could write the rules for those tests, they usually have insight into how someone might take a shortcut.
Yes. We use a security analytics platform based on real-time logs and network capture. There are a number of custom “content” detection methods we have on that solution. We detect abnormally large SSL handshakes, for example, an indicator of someone attempting to grab a full 64 KB data response from a vulnerable OpenSSL installation. When it comes to data exfiltration, we have the same security analytics platform plus a DLP platform, security operations center (SOC) rule sets, web filtering rules that would detect large transfers, and your general network operations center (NOC) monitoring.
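As a rough illustration of the exfiltration-detection side of that answer, here is a minimal sketch of flagging unusually large outbound transfers from flow records; the CSV layout, internal address range, and threshold are illustrative assumptions rather than any particular NetFlow or SIEM schema.

```python
# Minimal sketch of flagging hosts with unusually large outbound transfers.
# Assumes a CSV of flow records with src_ip, dst_ip, and bytes_out columns;
# the internal range and threshold below are illustrative, not prescriptive.
import csv
from collections import defaultdict
from ipaddress import ip_address, ip_network

INTERNAL = ip_network("10.0.0.0/8")   # hypothetical internal address space
THRESHOLD_BYTES = 5 * 1024 ** 3       # flag more than 5 GB outbound per host

def large_senders(flow_csv: str) -> dict:
    """Total outbound bytes per internal host and return those over threshold."""
    totals = defaultdict(int)
    with open(flow_csv, newline="") as f:
        for row in csv.DictReader(f):
            src, dst = ip_address(row["src_ip"]), ip_address(row["dst_ip"])
            if src in INTERNAL and dst not in INTERNAL:   # traffic leaving the network
                totals[row["src_ip"]] += int(row["bytes_out"])
    return {ip: total for ip, total in totals.items() if total > THRESHOLD_BYTES}

if __name__ == "__main__":
    for ip, total in large_senders("flows.csv").items():
        print(f"{ip} sent {total / 1024 ** 3:.1f} GB outbound -- review")
```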
We believe they do. However, we are continuing to re-evaluate and test our ability to detect large outbound data flows.
Yes. Firewall alerts show large transfers. Geoblocking rules stop any transmissions to non-US IP addresses.
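In the same spirit as those geoblocking rules, here is a minimal sketch of flagging outbound destinations that resolve outside the US, assuming the MaxMind geoip2 Python package and a GeoLite2 country database; the destination addresses and file path are hypothetical.

```python
# Minimal sketch of geo-flagging outbound destination IPs.
# Assumes the MaxMind geoip2 package and a GeoLite2-Country.mmdb file;
# the sample destinations are documentation addresses, not real traffic.
import geoip2.database
import geoip2.errors

ALLOWED_COUNTRIES = {"US"}

def non_us_destinations(dst_ips, db_path="GeoLite2-Country.mmdb"):
    """Return (ip, country) pairs whose country is outside the allowed set."""
    flagged = []
    with geoip2.database.Reader(db_path) as reader:
        for ip in dst_ips:
            try:
                country = reader.country(ip).country.iso_code
            except geoip2.errors.AddressNotFoundError:
                country = None          # unknown location -> treat as suspicious
            if country not in ALLOWED_COUNTRIES:
                flagged.append((ip, country))
    return flagged

if __name__ == "__main__":
    for ip, country in non_us_destinations(["203.0.113.7", "198.51.100.22"]):
        print(f"outbound connection to {ip} ({country or 'unknown'}): alert or block")
```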
Not completely as it currently stands. We are presently executing a set of strategies that will address this and other matters in the coming months.
Likely only very significant or large-scale activity.
Yes, we have checks and balances in place.
We have tools in place to detect abnormalities. However, we have not tested for this scenario … yet.
We have mechanisms for detecting unusual user behavior and our software blocks large data transfers (Outlook). Anything more sophisticated than that would not be seen. The traffic (network) software requires human monitoring to be useful and we are short-staffed in that area.
Yes, I believe so. We have invested in tools and technologies, but in many ways, it just means we might detect something a bit more quickly than we might have otherwise detected. Not truly about prevention — just detection.
What ONE recommendation would you offer to a hospital trying to assess or improve its security against cyberattacks?
If you’re a small to mid-size healthcare organization, hire qualified professionals to evaluate, plan, and implement a full security program.
You can’t have one. Cyber security is multiple layers of different locks with keys held by multiple people.
Address identified vulnerabilities without delay.
Have a robust Intrusion Detection System – we use McKesson as our ISP.
Diligence. More specifically, scan, patch, repeat. Strong password policies and two-factor authentication.
Tools are available. Look at the products in that space and select and implement. It will take a senior-level network resource to do it right.
Multi-layered security infrastructure and lots of training for staff.
[from a vendor member] Cloud vendors are probably more secure and less likely to have their data breached, which doesn’t seem to make sense until you really examine the required data flows and architectural components. And watch those appliances and browser plugins, but I’m sure they are ahead of those issues already.
Hire a SOC or some other Managed Security Service (MSS) based on a security solution that uses both log sources as well as network capture. If that is too much $$ for the analytics solution, at least hire a managed/outsourced SOC to watch your firewall/public Internet device logs. If a hospital can’t spend ~$10-30k per year to fund watching the front door, there are many other ways to breach that organization.
Ensure firewalls are secure and sensitive enough to detect certain levels of attack, and that those who need to know are immediately informed when an attack occurs.
Take these threats seriously and prepare. Many in our healthcare industry seem to feel that these things only happen to financial institutions or commercial organizations. We’re the new target and, unfortunately, I think we’ll see more of these large breaches before healthcare finally takes security seriously.
Take it seriously. Now even small hospitals are a target. You can’t follow "security by obscurity" any more.
Use common sense. When it’s been announced in every major public media source that there is a bug in the software that health systems use that leaves them vulnerable to data breaches, they should fix the bug immediately. We still regularly hear about unencrypted laptops being stolen. I wonder how many health systems there are out there that still haven’t fixed the Heartbleed bug and won’t until they have a breach?
Invest in security in your org and engage the people to have heightened awareness of security risks. Bad things will happen; the bad guys have more money, more resources, and more time than many of us. It is important to know how to reduce exposure and be prepared for the bad events. In many ways, it is like the principles of a High Reliability Organization, ideas promoted by Drs. Weick and Sutcliffe.
- Be preoccupied by failure. Focus on what could go wrong.
- Be reluctant to simplify interpretations. Don’t jump to simple conclusions – try to understand the situation.
- Sensitivity to operations. Respect the folks close to the problem; they may be able to help you detect that something is going wrong.
- Commitment to resilience. Be prepared to bounce back; don’t give up.
- Deference to expertise. Engage the experts.
We have dedicated software, not hardware, for DDoS attacks, but those are pretty obvious when they are happening. Far and away, it is the human factor, phishing, that is the danger, perhaps even more so from the IT department, who consider themselves immune to this type of attack. I bet they are just as gullible as every other user.
Install an IPS. It is amazing to see how many times a day you are scanned and/or attacked. The right technology will allow you to “see” the activity and defend against attack.
Use an outside firm that has expertise in this area to do an annual assessment and also perform white hat hacking. You will be amazed at what is discovered and how this information can help position the organization to be as prepared as reasonably possible against attacks.
I would love to believe that ONE recommendation would address our reality. This space is one of the most underrated in terms of complexity, cost, and risk. We have spent the past 18 months going through an exhaustive planning and education process to thoroughly assess where we are and where we need to be. There are technical parts, for sure, which need to be understood and addressed. These are the easiest to deal with because they are, by definition, known. The issue is, how do you reconcile an organization’s risk tolerance against a growing, uncertain threat? This is not an easy topic to get organization leaders’ heads around. Take the recent situation at Boston Children’s. Did any of us actually believe that we providers would be the victim of an attack from a sympathizer group over a very tragic patient care situation?
We live in a different world at a very different time. We providers are all under a significant amount of pressure as we deal with all of what is happening in our space. I believe most of us have been making “best reasonable efforts” to do the right thing and safeguard the information which we need to be responsible for. We also need to invest in a wide variety of enablers to transform ourselves into what we believe is important. Everyone is becoming more sensitive as most people know that no one is immune to this threat and it’s just a matter of time. Unfortunately, it’s difficult to make the necessary investments to mitigate against most if not all of the threats given the economic pressures that we are all under. Interesting topic in very interesting times.