HIStalk Interviews Kurt Garbe, CEO, IMAT Solutions
Kurt Garbe is CEO of IMAT Solutions of Orem, UT.
Tell me about yourself and the company.
IMAT Solutions solves the core data problems of healthcare companies. We focus on how to improve data quality, data currency, the amount of data, and the type of data that companies can look at.
How do you position the company among competitors?
Many companies look at different parts of data — analysis, cleanup, or integration. We take a more comprehensive approach. This is a data platform. What are the requirements for the different types of data you’re trying to bring in, the comprehensive data? How do you look at cleaning up the data that’s coming in? How do you look at the currency? How do you make sure you can quickly access that data in a comprehensive way? We look at all of those components, not just some individual pieces and parts.
How would you assess healthcare in terms of your C3 framework of data that is clean, comprehensive, and current?
Healthcare is still, unfortunately, at the early stage. We know this from talking to our customers. It’s across the board. Different companies have different strengths and focus on different things, but we haven’t found a lot of evidence that people have taken the full picture and made a lot of progress.
Are healthcare organizations making decisions using data that is either bad or incomplete?
Absolutely. The core question is, what data are we even talking about? The data related to healthcare and the health of an individual includes a lot of free-text data, unstructured data from lab reports, notes, and so forth. When we talk to people through surveys and discussions, 80 percent aren’t looking at that data yet. They don’t apply natural language processing to figure out what insights they could get from that data.
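To make the point concrete, here is a toy sketch of pulling structured values out of a free-text note. Real systems use full NLP pipelines rather than regular expressions, and the note text and field names here are hypothetical, but it illustrates the kind of insight locked inside unstructured data.

```python
import re

NOTE = "Pt presents with HbA1c 8.2%. BP 142/90. Started metformin 500 mg BID."

def extract_findings(note):
    """Pull a few structured values out of a free-text clinical note."""
    findings = {}
    # Lab value: HbA1c percentage
    m = re.search(r"HbA1c\s+(\d+(?:\.\d+)?)%", note)
    if m:
        findings["hba1c_pct"] = float(m.group(1))
    # Vital sign: blood pressure as systolic/diastolic
    m = re.search(r"BP\s+(\d+)/(\d+)", note)
    if m:
        findings["bp_systolic"] = int(m.group(1))
        findings["bp_diastolic"] = int(m.group(2))
    return findings

print(extract_findings(NOTE))
# {'hba1c_pct': 8.2, 'bp_systolic': 142, 'bp_diastolic': 90}
```

Until values like these are extracted, they are invisible to any report, model, or analytics engine that only reads structured fields.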
It’s the old story about the elephant. We look at data as this big elephant. Some people look at data as just the foot or the trunk. They’re only looking at the pieces and parts. They don’t usually say their data is good — they admit it’s a challenge, something they’re looking at, or the subject of some new initiatives. We don’t find a lot of complacency and satisfaction.
It gets more complicated where a health system has several groups. Each says they have clean data, and they probably do to a great extent, but the data is not coordinated. How one group describes their data and how another group describes theirs are not consistent. The combined data is therefore not particularly useful for driving real impact.
What due diligence is required before accepting a new source of data to understand its semantics rather than just finding matching columns that can be joined to create a bigger database?
I wish the industry had identified some rules of the road. This is a major effort and a major problem. Like everyone working with healthcare data, organizations are doing the best they can. Often they're just prioritizing. They are saying, we can't absorb all the data, but can you give us the following type of data so we can work on that first? Let's cut the problem into small pieces.
That’s a practical approach that works, but it takes a long time. They are often disappointed with the impact of those efforts. You get the greatest impact when you’re using the largest amount of data to make decisions.
Will artificial intelligence and machine learning help solve the problem?
We’re in an unfortunate race. People talk a lot about AI and machine learning. But with these systems, as much as they’re making great progress in AI and machine learning, the inputs — unstructured and free-form data — are still weak. An AI engine or machine learning algorithm can’t necessarily turn it into something meaningful and useful.
Years ago, everyone was talking about predictive analytics. We have these great models, but the source data isn't very good. You're trying to do more analytics and use more of these advanced tools on poor data to get to that answer faster, as opposed to getting a better answer. People still have to spend a lot of effort to turn unstructured data into something useful and meaningful that a predictive analytics engine, AI algorithm, or machine learning can do something with.
The challenge, and it’s a big one, is that the unstructured data multiplies the amount of data you have by a factor of five or 10. It’s 10 times more than you used to have, so how do you get meaningful results from it in a meaningful time frame? If it takes a week to process through all that data every time you run a report, create a model, or do some analytics, you’re not going to do it often. That’s why we talk about the currency, meaning how quickly you can get insight out of all of this data that you have.
That’s why we talk about the C3. It’s not just the fact that you have comprehensive data. You’ve got all of your data in an unstructured form, and through an NLP process or even manually, you’ve cleaned it up. It’s consistent, it works well. But now, how do you get results out of that in some meaningful time frame, where you can run reports, look at the reports, and say what works, what doesn’t work, or look at these fields instead? You’re now interacting with the data. That’s where this third C of currency comes in. That’s the only way you get high impact from whatever tools you have, whether it is predictive analytics, AI algorithms, or machine learning.
What lessons did you learn from connecting the aggregated datasets of two HIEs together after Hurricane Florence and validating that the result was accurate at a patient level?
The historical approach to interoperability or interconnecting data is to tell Company A, “Here is how we want you to give us output.” That’s historically a huge problem. Company A doesn’t have the time or they don’t see the value of doing that. Our approach is, just give us what you have. We won’t ask you to change your formats, your fields, or anything else. You give us what you have, this other organization does the same, and we’ll re-index that data and provide one comprehensive view.
The major lesson that we’ve learned in integrating new clinics and new hospital groups into these data pools is that we have to lower the bar of what they have to do. We’re not asking them to change their format, because those IT discussions are often where interoperability gets bogged down, where you ask people to change what they do. We don’t do that. Just provide us what you have and we will make it work for you.
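The "give us what you have" approach described above can be sketched as per-source adapters feeding a common index. This is a minimal illustration, not IMAT's actual implementation; the source formats, field names, and canonical schema are all hypothetical.

```python
# Two sources deliver patient records in their own native formats
# (hypothetical fields); neither is asked to change its output.
source_a = [{"pid": "123", "last": "Smith", "dob": "1970-01-05"}]
source_b = [{"patient_id": "123", "surname": "Smith", "birth_date": "01/05/1970"}]

# Per-source adapters map native fields onto one canonical schema.
def from_a(rec):
    return {"id": rec["pid"], "name": rec["last"], "dob": rec["dob"]}

def from_b(rec):
    month, day, year = rec["birth_date"].split("/")  # normalize MM/DD/YYYY
    return {"id": rec["patient_id"], "name": rec["surname"],
            "dob": f"{year}-{month}-{day}"}

def reindex(*feeds):
    """Merge adapted records from many feeds into one view keyed by patient id."""
    index = {}
    for adapt, records in feeds:
        for rec in records:
            canonical = adapt(rec)
            index.setdefault(canonical["id"], []).append(canonical)
    return index

view = reindex((from_a, source_a), (from_b, source_b))
print(view["123"])
```

The integration cost is absorbed by the adapters on the receiving side, which is why the bar for each contributing clinic or hospital group stays low.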
How do you see the company and the general areas of data interchange, quality, and interoperability changing in the next five years?
Our aspiration, and the hope that we have for healthcare, is that tools such as AI, machine learning, and predictive analytics can help deliver real results now. We need to raise the bar on the baseline of getting comprehensive data, making it current so it can be analyzed in real time, and making sure it's clean, consistent, and makes sense.
If we can get to that baseline, those other tools will get you what you want in healthcare — bending the cost curve, improving outcomes. Without that, we're still in some ways guessing. If we can address the core data issues, those tools, as well as others that we can't envision today, can help us make decisions on what is actually happening instead of guessing, which is what's happening now in healthcare.
Do you have any final thoughts?
The topic of improving healthcare through data is not new. It has been envisioned, talked about, and hoped for for 20-plus years, if not longer. What is exciting now is that the technology, the ability to actually get there, has caught up to that vision. We look forward to helping make this vision come true.