Jay Savaiano is director of worldwide healthcare business development for CommVault.
Tell me about yourself and the company.
CommVault is a data and information management company. We move data for data management purposes: backup and recovery, archiving, deduplication, replication, disaster recovery, and continuity planning. At the same time, we build an index of all the associated content within that data so it can be used for e-discovery, information management, legal matters, or any other reason you might want to search across all the unstructured data in your environment.
You hear a lot about big data, but hospitals already create a lot of data they don’t use. What examples have you seen where a hospital found something useful in what they already had?
The big part of it is the unstructured data. It’s easy enough when you have a solution with a database back end and you want to search for “Jay Savaiano.” You can search for “Jay Savaiano” and get all the data from that solution. What you don’t get is the unstructured data that resides in your environment — Excel spreadsheets, Word documents, PDFs, all this content residing out there without an actual database that indexes all the pieces.
That’s the challenge any time we bring up dark data to the healthcare space. We have a ton of it and we don’t know what’s in it. It’s what scares us the most.
We understand exactly what’s in the EMR. We can search that all day and get all the data points we need. It’s all those unknowns that are the scary part, especially all the way out to the edge, all the way out onto a laptop. Not knowing what types of documents any one individual has — whether it’s a doc or whether you’re in a hospice scenario — data resides on those edge devices that can be snagged.
That’s always everyone’s concern, even from a perspective of what’s in email. How many times is a doc or an individual taking something, attaching it to an email, and sending it to their Yahoo account? Those are always the concerns of not knowing if PHI is all of a sudden traversing outside of the network, going to unsecured areas that leave them open to issues.
An example would be broad compliance: tapping into that unstructured content, especially in an email scenario. Take a situation such as an employee who is also a patient. That individual has taken data from the organization and emailed it outside to their Gmail account, and now all of a sudden there’s an issue from an employment perspective with that individual. They come back and say they weren’t treated fairly, or whatever else might be raised. We’ve had organizations go back and search all the email content, as well as the unstructured content that resided on that person’s laptop out on the edge, and been able to understand all the data points where they may have stepped outside of compliance and not been properly managed across the board.
The flow of what we do is taking data, ingesting it, and moving it to support the needs of data management while fully indexing the content so it’s queryable and searchable. That’s one fluid component of a complete data and information management strategy, not just in the data center, but all the way out to those edge devices.
The examples that you’re giving are mostly from a compliance and regulatory retrieval type aspect. Is that the focus of the product or are there other elements such as clinical data searching?
It is more of a compliance-based play. It’s backup and recovery at its core. You have a data management component, but at the same time, organizations have to buy two or three other products to do the indexing of their email so they can turn around and do e-discovery and search. Or they have to buy another product to ingest any of the unstructured data that sits out there in shared directories or in private directories. Then they have to buy another product to support laptop backup, edge-based backup, or even the iPhone for that matter. They end up with a variety of different products.
Our whole strategy is one common architecture. It is just that core product. We’ve expanded the technology to support cross platforms into all those areas. We go into environments and we’ll work with their data from a virtualization perspective as well as from a physical perspective. We’ll support their Epic environment, their McKesson environment, and their Allscripts environment, while minimizing the silos that come with a number of those solutions from a data perspective.
Too many times McKesson or Allscripts has recommended that an organization buy X hardware, install it, and run it on a particular virtualization platform. You get these data silos that become very challenging from an operational aspect in healthcare, which is already challenging when you have hundreds of clinical applications that IT is trying to support. We just minimize that overhead on the back end.
What CIO mindset has to change when they start thinking about using the cloud?
Too many times when organizations are utilizing the cloud, they’re looking at it as just a target for data, whether they’re going to push data to it to support a disaster recovery scenario or they’re looking to utilize cloud-based solutions so edge-based components can connect. A clinician can push data up from their laptop and access it on their iPhone or their iPad and get other content. This is with unstructured data once again. For structured data, everybody has applications; Epic has applications for the phones and all those devices.
Too many times, cloud vendors have to create another silo of data. To get data up into those cloud vendors, you create some sort of replicated copy that pushes the data to the cloud, which doesn’t work fluidly with your existing data policies. Say you’re trying to archive content off. Before you actually archive the content, whatever that content might be, you create another copy of it, and that copy gets pushed out to the cloud, as opposed to simply tiering the content at its root and sending it out. You want to avoid creating multiple sets, multiple copies, one local in the data center and a replicated copy pushed out to the cloud. You want the application to be able to bring it back in fluidly, not just have another copy residing out there.
What will the health system data center of the future look like?
It will have a lot of cloud components. That’s evident from the way a lot of solutions are evolving more and more into SaaS-based models. Software as a service is increasingly consistent in this space, especially within the EMR space. I don’t ever see clinical applications being limited in how they continue to grow; you’ll never end up with just a server shop. In any of the ‘ologies there will always be specialization, and with specialization come specific applications to support the clinical needs of those particular ‘ologies. To this day, new apps are constantly being created that are a little more specific, a little more detailed, that a cardiology department would prefer to have in their environment even though some of the larger entities won’t take them on. It will always be dynamic growth. There will always be multiple applications.
Infrastructure is back on the strategic list for health systems because of big data needs, system breaches, and mobile workforce requirements. How are CIOs responding to those needs?
With a lot of those challenges around the infrastructure, organizations are trying to play catch-up. They are challenged. That’s why simplification of the application set is always a positive piece, and why people are interested in talking with us about what we do: it is simplification. It is not adding multiple layers to do an operational component. They have enough complexity with the clinical applications and the dynamics of what those pieces need. Adding an overly complex infrastructure, with operational tools running on top of it, only exacerbates the problem for the HIT organization that has to manage and operate the solutions supporting the clinical environment.
It’s always top of mind, and that’s why we’re seeing so much momentum and so much growth in the healthcare space, not just in the US, but globally as well. We have HIPAA here, but it’s similar across the board; everyone sees compliance as something they need to manage and maintain in their given country. The EU has its approach, just as there are compliance efforts underway in South America. That has definitely driven organizations to want to minimize operational and infrastructure issues and start to simplify, as opposed to making things more complex the way the clinical applications continue to do.
Do you have any final thoughts?
We’re seeing a lot of discussion around the whole BYOD piece. The bigger concern becomes the BYOC component, the “bring your own cloud.” Everybody can sign up for a free 5 GB from their provider of choice. We’ve had a number of organizations that want to start to collapse that and bring it back inside. It has a lot to do with that compliance component of the unstructured data, because with any of those free 5 GB accounts, it is usually only unstructured data that gets pushed up there: spreadsheets with patient information and PHI residing in them, documents that are really more in the unstructured context. We’ve seen a lot of conversations come up around that.
Another area is retention policies. The challenge with healthcare is that there’s a variety of policies out there depending on the age of the patient and the retention of the data, but because the policies are so tiered and varied, and very specific to a patient, it becomes challenging to do anything when it comes to retention. With that, the retention policy for basically everybody we talk to seems to be forever. They don’t just have retention policies keyed to the age of the patient or to whether the patient has been deceased for a certain amount of time. This just complicates data growth in the data center. That means data is never going to pare down; it’s only going to get bigger and larger. The data centers can only house so much. It comes back to that cloud message of how you drive that.
I work with the ISVs in the clinical app space. I work with the Cerners, the Epics, the Meditechs, all these organizations. I will say that the conversations have picked up around understanding how to support retention and how to pare data off, where in the past, it was really the brute-force approach of, "The data’s going to get bigger, so just throw more storage at it." Now the conversation has shifted to, "How can we truly start to minimize storage costs for our customers?"
We have more and more conversations in business development, at a partnership level, with those ISVs in the clinical app space as well, not only in the EMR space but in the PACS space, to come up with an approach for how you truly start to let them pare data off. How can we have content-aware policies that aren’t just policies set against a date that say, "After three years, we’re going to push it over here"? Specifically: it’s three years old, it’s for a 40-year-old male who tore his ACL, and we haven’t seen that particular patient in 10 years, so now we’re going to move that data.
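As an illustrative sketch only (not CommVault's actual policy engine), a content-aware tiering rule like the one described might combine record age with patient-specific facts such as the patient's age and time since their last visit. All thresholds, field names, and the function itself are hypothetical:

```python
from datetime import date

# Hypothetical content-aware tiering rule: archive a record only when the
# record is old enough AND patient-specific conditions are met. Thresholds
# are illustrative, not any vendor's actual retention policy.
def should_tier_to_archive(record_date: date,
                           patient_birth_date: date,
                           last_visit_date: date,
                           today: date) -> bool:
    record_age_years = (today - record_date).days / 365.25
    patient_age_years = (today - patient_birth_date).days / 365.25
    years_since_last_visit = (today - last_visit_date).days / 365.25

    # A date-only policy would stop here: archive anything over 3 years old.
    if record_age_years < 3:
        return False

    # Content-aware refinements: keep pediatric records on primary storage
    # (retention for minors often runs past the age of majority), and keep
    # records for recently seen patients close at hand.
    if patient_age_years < 21:
        return False
    if years_since_last_visit < 10:
        return False

    return True

# The interview's example: a 3-year-old record for a 40-year-old patient
# not seen in 10 years becomes a candidate for tiering off primary storage.
print(should_tier_to_archive(record_date=date(2011, 1, 1),
                             patient_birth_date=date(1974, 1, 1),
                             last_visit_date=date(2003, 1, 1),
                             today=date(2014, 1, 1)))
```

The point of the sketch is the difference between the two halves of the function: the first check is the blunt date-only rule, while the later checks are the per-patient conditions that make the policy content-aware.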