
HIStalk Interviews Robert Lord, CEO, Protenus

November 16, 2016

Robert Lord is co-founder and CEO of Protenus of Baltimore, MD.


Tell me about yourself and the company.

My co-founder Nick Culbertson and I started Protenus when we were both in medical school. We started in response to a big problem that we saw in healthcare, which was that with the rollout of electronic health records, there has not been an effective attendant rollout of privacy and security measures to protect that data, particularly from an insider threat perspective.

Nick and I had backgrounds before healthcare as well. He was a Special Forces operator for the US Army. I was a quant at a hedge fund. We had seen a very different way of tackling the problem of insider threats, protecting VIPs, co-workers, all of those individuals from abuse of their PHI. We built a platform that could shift the paradigm of how we protect patient privacy.

What insider threats are you seeing and how prevalent are those compared to high-profile cyberhacking incidents such as ransomware and phishing?

From our own tracking and independent research, we see that a pretty consistent 40 percent of incidents are linked to the insider threat versus the external hack. While we think that there’s a lot of good work that’s been done on the external side — you see a lot of development in the space — there’s a lot less thoughtful work that’s been done when it comes to insiders, particularly in a healthcare-focused way. That’s been a big challenge.

Healthcare has a huge number of idiosyncrasies and challenges that are unique to the industry. It requires a deep understanding of the workflows and special challenges that healthcare providers have, like the need for open access to records, the fact that individuals can have irregular workflows and patterns of activity, and the fact that huge amounts of data stream through all of these systems, often in ways that make it difficult to understand how they relate. It takes a different approach, one that can integrate big data techniques and machine learning to get a better handle on this challenge.

Is there a higher likelihood of reputation-damaging behavior from insiders rather than outsiders since the person responsible was given explicit trust as an employee, doctor, or business associate?

Charlie Ornstein of ProPublica did an interesting piece on this. The individual, one-on-one breaches do the most damage because they are more personal, more focused, and more likely to lead to liability and bad blood between the hospital and the affected party.

A big hack from an external actor — whether it’s a foreign government or an individual — or an exposure of a database online can affect a huge number of patients. However, the most acrimonious and reputation-damaging incidents are these insider threats. It’s not just a theoretical exposure, but someone intentionally doing something with patient information, and patients react differently to that. When it’s that close to home, it hurts in a different way.

We know in healthcare that these systems are terrifyingly insecure and vulnerable because of the generally open access architecture, but a lot of patients don’t really appreciate that fact. They’re flabbergasted when they see this type of insider threat from someone in the circle of trust for that hospital.

That’s the big challenge. All of this is a question of trust. If patients start to lose that trust, if we have a crisis of that trust, then what are the implications for the larger system? Hospitals understand that at some level, but we don’t always see the attendant investments and awareness, sometimes at the C-suite level. There’s a lot of reasons for that, but it’s an interesting question that we’re going to have to tackle, both at the individual institution level as well as at the federal government level as they think through mandates of how to improve these systems.

What are some examples of issues your system has detected?

Obviously to protect our clients we can’t go too much into specifics, but the types of things that you see typically in this space include the classic co-worker breach, where individuals look at each other’s records inappropriately. It can be the VIP breach, where you’ve got a big movie star coming into your hospital and suddenly it seems like everyone wants to go check out their face sheet. 

Unfortunately, we’re also seeing the rise of criminal actors and criminal networks acting inside electronic health records, whether that’s directly having someone in there who is stealing records and diverting them to the black market or bribing individuals to divert those records to the black market. That has happened for as little as $150 per record.

Obviously these are some pretty scary vulnerabilities. We’re seeing more and more of it. Then there’s the whole question of what happens to the records afterwards. They can be used for a terrifying array of threats, whether that’s identity theft, medication fraud, Medicare and Medicaid fraud, medical blackmail, or traditional identity theft types of operations.

Does every industry have the same insider threat problem or is it caused specifically in healthcare by insufficiently granular access?

Healthcare unfortunately suffers from a bit of a double whammy. On one side, the information within healthcare is some of the most valuable information that you have. I’m a member of the Institute for Critical Infrastructure Technology and we just released a report on the incredible value of electronic health records on the Dark Web. While there’s a lot of variability, the bottom line is that there are incentives because these are very valuable records.

On the other side, hospitals are pulled in a lot of directions. Those directions don’t necessarily include privacy and security when it comes time to budget. You’ve got so many competing demands for rolling out new electronic health records and associated systems, different informatics programs, and obviously you’ve got the Meaningful Use incentive programs and MACRA. What you’re seeing is hospitals saying, I’ve got to do all of these different things and I’m not really sure where to put privacy and security on the roadmap. But simultaneously, if you don’t put those on the roadmap, in the long run you’re going to degrade the trust that allows those other programs to be successful.

Hospitals are caught in a tough situation right now. Health systems in general are trying to navigate those waters as effectively as they can, but it’s quite difficult. That’s what is leading to these breaches, in addition to those open architectures, the ease with which people can access this data, and the historical lack of technologies in this field to detect and thwart these types of threats.

What kind of normal user behavior does the system learn in being able to identify exceptions?

We take information from the EHR and from the patient record, then weave it together with access logs, metadata, and a lot of other information that allows us to understand the second-by-second pattern of every single user in the electronic health record. By doing this, we can detect threats that go outside the traditional rule-based paradigms.

It’s never just one thing. It’s usually an entire constellation of things. The types of patients they’re treating, the types of actions they’re taking, the manner in which they’re moving through the medical record, and the amount of time they’re spending in it. Everything from the very simple to the extraordinarily complex.

With a big data platform that uses some of the best in machine learning and artificial intelligence and a lot of the advances that have come out there recently, we’ve built this ensemble anomaly detection system that incorporates a lot of different perspectives. Not just a single type of scenario, but a lot of different ones. We’re able to find everything from the simple types of threats, such as co-workers or family members looking at each other, all the way to extremely complex threats that we wouldn’t really have a name for, but as soon as you see it, you realize this is extraordinarily bad. The type of actor who might during the day have appropriate access to a certain department, but in the evening, on a particular workstation, or when looking up a particular subset of patients, their actions are inappropriate. It’s a subtle difference that won’t be caught by more basic analytics.
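To make the idea concrete, here is a deliberately minimal sketch of baseline-versus-deviation scoring over access logs. This is not Protenus's actual system; the event fields, user names, and the simple z-score combination of two signals (time of day, volume of records touched) are all invented for illustration, standing in for the ensemble of detectors described above.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical access-log events: (user, hour_of_day, records_touched)
EVENTS = [
    ("nurse_a", 9, 12), ("nurse_a", 10, 15), ("nurse_a", 11, 14),
    ("nurse_a", 14, 13), ("nurse_a", 22, 95),  # late-night bulk access
    ("clerk_b", 9, 4), ("clerk_b", 10, 5), ("clerk_b", 13, 6),
]

def anomaly_scores(events):
    """Score each event against the user's own historical baseline.

    Sums per-feature deviations (hour of access, record volume) into a
    single score -- a toy stand-in for an ensemble of detectors, each
    contributing one perspective on what "normal" looks like.
    """
    by_user = defaultdict(list)
    for user, hour, count in events:
        by_user[user].append((hour, count))

    scores = {}
    for user, obs in by_user.items():
        hours = [h for h, _ in obs]
        counts = [c for _, c in obs]
        h_mu, h_sd = mean(hours), pstdev(hours) or 1.0
        c_mu, c_sd = mean(counts), pstdev(counts) or 1.0
        for hour, count in obs:
            # Deviation from this user's own norm, not a global rule
            z = abs(hour - h_mu) / h_sd + abs(count - c_mu) / c_sd
            scores[(user, hour, count)] = round(z, 2)
    return scores

scores = anomaly_scores(EVENTS)
flagged = max(scores, key=scores.get)
print(flagged)  # the late-night bulk access by nurse_a scores highest
```

The point of scoring against each user's own baseline is exactly the scenario described above: an actor whose daytime access is appropriate but whose evening activity on the same records is not, which a fixed rule keyed to department or role would never catch.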

What kind of integration is required to put together the package of information that allows you to make that detection?

Our team has a lot of experience in the big data space, data integration, and doing this type of at-scale analysis. We’re investing heavily in our ability to do data integration easily. What we ended up building was a platform that could ingest data from any number of sources and be source agnostic, both in the number of sources as well as the type of source. We then can push everything up to a more universal data schema and analyze from that layer. That way we avoid a lot of the laborious integration that often happens with other systems. There have been a lot of advances in technology that have allowed us to look at the data more from a first-principles standpoint and then figure out exactly the elements that we need on a dynamic basis, instead of a highly manual and specified basis.
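The "push everything up to a universal schema" pattern can be sketched as a thin normalization layer per source. Everything here is hypothetical: the two source formats, their field names, and the shared schema are invented to illustrate the architecture, not taken from any real EHR system.

```python
from datetime import datetime, timezone

# Hypothetical raw events from two differently-shaped source systems
SOURCE_A_RECORD = {
    "USER_ID": "jdoe",
    "PAT_MRN": "555-01",
    "ACCESS_TIME": "2016-11-16 22:14:03",
}
SOURCE_B_RECORD = ["a.smith", "1479334443", "555-02", "view"]

def to_universal(record, source):
    """Map a source-specific record onto one shared schema so the
    analytics layer never needs to know where the data came from."""
    if source == "source_a":
        return {
            "user": record["USER_ID"],
            "patient": record["PAT_MRN"],
            "action": "view",
            "ts": datetime.strptime(record["ACCESS_TIME"], "%Y-%m-%d %H:%M:%S"),
        }
    if source == "source_b":
        user, epoch, patient, action = record
        return {
            "user": user,
            "patient": patient,
            "action": action,
            "ts": datetime.fromtimestamp(int(epoch), tz=timezone.utc),
        }
    raise ValueError(f"unknown source: {source}")

events = [
    to_universal(SOURCE_A_RECORD, "source_a"),
    to_universal(SOURCE_B_RECORD, "source_b"),
]
```

Adding a new feed then means writing one small adapter rather than reworking the analytics, which is what makes the integration source-agnostic.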

Do you have any final thoughts?

Healthcare is fundamentally facing a crisis in trust in our systems. We’re increasing the amount of data we collect. We’re increasing the analytics that we’re performing. We’re increasing interoperability. We need all these things to deliver the promise of better care, better patient satisfaction, and decreased cost. In no way do we want to stand in the way of all of this great data-sharing.

Simultaneously, if we can’t build that trust in the system, if we can’t establish a new paradigm for how we’re going to protect all this data and make sure people are accessing data appropriately, then we’re going to lose all of these benefits in the long run. 

As both privacy and security wonks as well as data scientists, we’re really excited here at Protenus about being able to push forward those advances in data science when it comes to privacy and security, just as they’re being pushed forward in improving patient care. I think that’s a big trend that we’re seeing and something we’re very hopeful about. 

While we think that in the immediate future things are probably going to get a little bit worse, in the long run, we’re going to have a much better system. Maybe even better than those in other industries, because healthcare is going to be tackling the hard problems first.
