
HIStalk Interviews Charlie Harp, CEO, Clinical Architecture

April 12, 2021

Charlie Harp is CEO of Clinical Architecture of Carmel, IN.


Tell me about yourself and the company.

I have been developing software in healthcare for a little over 30 years. I’ve worked for companies like SmithKline Beecham Clinical Labs, First Databank, Zynx Health, and Covance Central Labs. Back in 2007, I started Clinical Architecture to address what I thought was an unmet need in the healthcare industry: doing a good job of managing how information moves, how we deal with terminology, and how we deal with content. It’s designed to enhance the way we support patients in healthcare and look at information.

What are the challenges of using provider-generated data for operational improvement, benchmarking, analytics, and life sciences research?

There’s a handful of issues with the data that we collect in healthcare. If you talk about just standard structured data — and let’s even include unstructured data — one of the big challenges is that every single application in every single facility tends to be its own little silo of terminology. Code systems that are created in these places by the people who work in those places are usually local. They are not always following the best practices in terms of how they are described.

Public health organizations, large IDNs, or payers that go to collect all that information, even if it’s delivered in a standard container like a CCDA or an HL7 transaction, experience semantic impedance. To be able to utilize all the disparate codes and put them into a common nomenclature or common normative terminology that you can do analytics and BI and all those things on, you’ve got to do work. You’ve got to introduce work to get the data from its original state into something you can use.

The other challenge we have is that if you look at the standards where we ask people to codify things with standard terminologies, not all mappings are created equal. You deal with that “whisper down the lane” effect with structured data, where they might have mapped it to a SNOMED code or an ICD-10 code for delivery through something like a CCDA or FHIR bundle, but there’s a certain amount of uncertainty baked into whether or not they broadened the term, they narrowed the term, or maybe somebody made a mistake and mapped to the wrong term. There is what I call uncalibrated uncertainty when it comes to the structured data.

The other problem we have is that between 60% and 83% of the data we know about a given patient from any place is bound up in unstructured notes. At the end of the day, what the provider relies on is their notes, not necessarily the structured data, because most of them realize that structured data has a lot of uncertainty in it.

What is the role of artificial intelligence in recognizing terminology problems faster and perhaps resolving them faster?

What we do is a form of deterministic artificial intelligence. We’ve trained our product over the last 10 years to understand certain clinical and administrative domains. When it gets a term like “malig neo of the LFT cornea,” our product parses that apart semantically and turns it into an expression — malignant neoplasm of the left cornea. We use that when we are doing things like mapping, so that we can do about 85% of the work.
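The deterministic step Harp describes can be sketched in a few lines. This is an illustrative toy, not Clinical Architecture's engine; the abbreviation table is a hypothetical stand-in for years of expert-curated rules.

```python
# Toy deterministic term normalizer: expand local abbreviations token by
# token into a canonical clinical expression. The table is hypothetical.
ABBREVIATIONS = {
    "malig": "malignant",
    "neo": "neoplasm",
    "lft": "left",
    "rt": "right",
}

def normalize_term(raw: str) -> str:
    """Lowercase, split into tokens, and expand any known abbreviation."""
    tokens = raw.lower().split()
    return " ".join(ABBREVIATIONS.get(t, t) for t in tokens)

# "malig neo of the LFT cornea" -> "malignant neoplasm of the left cornea"
```

A real engine parses semantics (anatomy, morphology, laterality) rather than doing word substitution, but the deterministic principle, rules authored by subject matter experts, is the same.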

If things are really terrible, and I’ve seen some really terrible things come through an interface, then obviously you have to pick up the phone. But in that scenario, what you’re dealing with is deterministic artificial intelligence, where a human being, a subject matter expert, has trained a piece of software to think like they do.

Machine learning models are really pattern recognizers. They don’t set a course; they just observe. I always warn people that there’s a certain lemming effect with machine learning, where people could be doing a lot of wrong things and the machine learning doesn’t know right from wrong. It just knows patterns. When it comes to transforming data, the challenge is filling in the gaps of what’s not there. Most of the time when somebody is struggling with mapping something, whether it’s a drug, lab, or condition, the core of the struggle is that something is missing. There’s not enough information for them to determine where it should land in the target terminology.

Another challenge is that the terminologies that we use for standards are prescriptive. They are pre-coordinated. Somebody sits in a room, and they come up with a term like “Barton’s fracture of the left distal radius.” They say that, and that’s the term. Let’s say that you’re coming from ICD-10, you have Barton’s fracture of the left distal radius, and you’re mapping it to SNOMED. Let’s say that SNOMED doesn’t have laterality for Barton’s fracture. Most systems that we have today can’t handle post-coordination, where they can glue multiple things together and land it in the patient’s instance data. They have no choice but to choose a broader concept, so they choose Barton’s fracture, and the other information is left by the side of the road.
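That dilemma can be made concrete with a small sketch. Assume a target terminology with hypothetical codes, standing in for SNOMED, that lacks the pre-coordinated lateralized concept; a mapper that cannot post-coordinate has no choice but to broaden:

```python
# Hypothetical sketch of broadening during mapping: when the target has
# no pre-coordinated match, drop trailing qualifiers until one matches.
TARGET_CONCEPTS = {
    # hypothetical SNOMED-style entries; note there is no
    # "barton's fracture of the left distal radius"
    "barton's fracture": "C001",
}

def map_term(source_term: str):
    """Exact match if possible; otherwise broaden by stripping qualifiers."""
    term = source_term.lower()
    if term in TARGET_CONCEPTS:
        return TARGET_CONCEPTS[term], term
    words = term.split()
    while words:
        words.pop()  # drop the trailing qualifier and retry
        candidate = " ".join(words)
        if candidate in TARGET_CONCEPTS:
            return TARGET_CONCEPTS[candidate], candidate
    return "", ""  # no match at any breadth

code, matched = map_term("Barton's fracture of the left distal radius")
# matched is "barton's fracture": laterality and location were discarded
```

The call lands on plain Barton’s fracture; the laterality and anatomic detail in the source term are simply lost, which is exactly the information loss the interview describes.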

Even if we had the smartest artificial intelligence platform in the universe, you can’t map to something that doesn’t exist. The way we deal with structured data in terminologies today is that we use these single codes in our standards. If you can’t find an exact match, what do you do?

What are the risks of companies that assume that FHIR solves their interoperability problem only to find that terminology issues are creating incorrect or incomplete information?

FHIR is a great advancement, but it struggles with what a lot of standards struggle with — it’s a snapshot. We are evolving FHIR and we are using FHIR, but if you look at the old ASTM standard, HL7, FHIR, OMOP, or any of these canonical models, it’s good if we can have agreement that these are the elements we are going to share. When you ask me for a lab result, here’s a standard container that I can give to you. It’s less verbose in many ways than some of the things that we did in HL7, especially Version 3, but it does deliver things in a nice package. It’s good for us to have agreement in how we package things up.

The issue with terminology is a lot of these systems that we use in healthcare, in inpatient and in outpatient, have homespun terminologies. There is no way to get around doing this semantic interoperability. For a long time, we didn’t care, because we didn’t try to collect that data and use it in a longitudinal, analytical way.

FHIR is good. I wouldn’t get rid of FHIR. FHIR is a great advancement. It brings us to consensus on how we package things up, what things are important for a particular type of resource. The fact that people are excited about doing it and they are opening up some of these systems to share data in real-time ways that they never did before is pretty cool. But when I get a FHIR resource that describes a lab test, and it’s using the local lab code, problem ID, or drug code, it’s tough to map it to make sense of that data and do something good.

People coming from other industries say, why is it so hard in healthcare? A big part of it is the systems we built and the platforms we are in. That metaphor of fixing a 747 in flight is very true. You can’t go in and just rip the rug out from under a hospital system and expect that everything is going to be OK. It’s an incremental steppingstone of evolution to get where you need to go. People can suggest that we just get away from all these local terminologies, but that’s going to take a decade, easily. If we can get it done, it’s going to take a decade. We just need to have better solutions and better ways of dealing with this interoperability problem.

The other thing, when it comes to semantic interoperability, is that the onus is on the receiver. The people who are pushing data out have already used it. They are pushing it out to someone else because they have to, but they don’t have to suffer the consequences of it not being accurate or complete or not being coded perfectly. At that point, it’s out of their hands. The onus is always on the receiver of the data who wants to use it to make sure that it is usable.

I always request, when I’m doing some kind of a transaction, give me the original data, even if it’s not a standard. The original data is what the provider chose. It’s what the people said. I’m not going through some third party that picked the closest thing they could find in a list of standard terms. You can give me the standard term you think it is. That could help me a lot, because if they are right, I can use it just like that and I’m good to go. Having the original data eliminates some of that hearsay effect.

We have seen this with our product Symedical with data, like say lab data. We saw a code of CA-125 come through Symedical, and people had mapped it to calcium. CA-125 is a cancer antigen test. It has nothing to do with calcium. Because Symedical looks at patterns, it says, “CA-125 isn’t calcium. It’s a cancer antigen test.” We were able to put that in front of a human and say, “It came in as calcium, but this is what we think it is,” and they were able to correct it. Those are the kinds of things we’re going to have to do.
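A crude version of that pattern check is easy to sketch: compare the tokens of the incoming source term against the description of the concept it was mapped to, and flag mappings that share nothing for human review. The function name and tokenization here are illustrative, not Symedical's API.

```python
import re

def flag_suspect_mapping(source_term: str, target_description: str) -> bool:
    """Return True when the source term and its mapped target share no
    alphanumeric tokens, meaning the mapping deserves a human look."""
    tokenize = lambda s: {t for t in re.split(r"[^a-z0-9]+", s.lower()) if t}
    return tokenize(source_term).isdisjoint(tokenize(target_description))

flag_suspect_mapping("CA-125", "Calcium")             # True: review it
flag_suspect_mapping("CA-125", "Cancer antigen 125")  # False: tokens overlap
```

A production system would weigh synonyms and semantics rather than raw token overlap, but even this level of cross-checking surfaces the calcium-versus-cancer-antigen class of error.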

A lot of people think that doing that mapping of data is a project, but in reality, that’s a lifestyle choice. It’s like mowing your lawn. You can’t just do it once and walk away. It requires somebody to be keeping an eye on that all the time, because the other thing that can happen is people can change a code. It doesn’t happen with the standards, typically, but it happens with proprietary code systems.

Our mission at Clinical Architecture is maximizing the effectiveness of healthcare. A lot of what we do when it comes to machine learning is not necessarily say, “This artificial intelligence will come in and replace what you do.” It’s really saying that this thing will do a lot of the heavy lifting. It will eliminate a majority of the work. But we never suggest that we can eliminate humans from the equation when we are talking about doing this semantic interpretation of what Human A created and what Human B created, because I create a code, it’s local, I have another person map it to a standard, and that standard comes into System B. The first thing that has to happen is the person in System B has to map it to their local code if they want to use it. 

That’s just point-to-point exchange. If I’m pulling data into an aggregation environment and trying to do some kind of analytics on it, it’s probably easier, because if I’m smart, I’ve probably chosen a standard and maybe extended that standard a little bit to accommodate the outliers. But it’s just one of those things where when we start utilizing longitudinal data from multiple sources, having mechanisms in place to look for things that are uncertain and allow me to rule them in and rule them out is going to be a pretty big deal. Also, looking at unstructured data for high-value information that I can use to improve that picture.

The other thing is using things like inferencing logic, where I can take the things that I know about the medical world and look for data that can’t be true and call it into question. I’m not a clinical person, so bear with me, but if I have a patient who has cardiac hypertrophy and a procedure that says they have an ejection fraction of 25%, that can’t be true. There are situations where it just can’t be true. If I have a patient who is on insulin and has a hemoglobin A1C of 7%, but there’s no mention in their structured medical data that they are diabetic, it might be in the note, but it’s not in the structured data.
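Rules like that are straightforward to express once the data is normalized. A minimal sketch, assuming an illustrative record shape and the commonly cited 6.5% A1C diagnostic threshold:

```python
# Minimal inferencing sketch: flag records whose parts contradict each
# other or imply a missing diagnosis. Record shape is an assumption.
def find_inconsistencies(record: dict) -> list[str]:
    flags = []
    meds = {m.lower() for m in record.get("medications", [])}
    problems = {p.lower() for p in record.get("problems", [])}
    a1c = record.get("labs", {}).get("hemoglobin_a1c")
    # On insulin with an elevated A1C, but no diabetes on the problem list:
    # the diagnosis may be buried in a note rather than the structured data.
    if "insulin" in meds and a1c is not None and a1c >= 6.5:
        if "diabetes" not in problems:
            flags.append("insulin + elevated A1C but no diabetes on problem list")
    return flags

record = {"medications": ["Insulin"], "problems": [], "labs": {"hemoglobin_a1c": 7.0}}
find_inconsistencies(record)  # one flag raised for review
```

Each rule encodes a piece of medical-world knowledge; a data steward reviews the flags rather than hunting through raw records.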

We are trying to do these things as we enter this value-based, population health, analytics world. Look at the public health emergency we just dealt with in 2020. Being able to leverage that data in a meaningful, competent way is going to be critical as we continue to move healthcare forward.

Do you have concerns about drug companies aggregating de-identified EHR data from hundreds or thousands of hospitals and then making significant clinical or commercial decisions based on what they see?

Whether it’s the CDC looking at COVID or pharma looking at a particular situation or looking for cohorts to enter into a clinical trial, the first step is getting the structured data, taking whatever the original people entered into the system, and doing a good job of finding the best possible target. 

The other challenge you have is that because mapping is difficult, people don’t want to do it. Or they say, I’m only going to map the top 50, or I’m going to only map these three things I care about. You can’t really think about it that way, because the things that you are not mapping are a mystery to you. You have to try to map everything, even if you only care about 10 things. Mapping everything makes sure that those 10 things aren’t missing, because they could be if you don’t map everything. If you map everything, then at least you’ve got a picture of the data. 

If you have what originally came from the site, then you eliminate that third party that may have mapped it to a standard incorrectly. It’s good to have that data because it gives you hints at what they thought, but having the original data lets you analyze what the original thing said. Take my earlier example where you have Barton’s fracture of the left distal radius. I convert it to SNOMED, it’s Barton’s fracture, and I’m going to land that in my data repository as Barton’s fracture. If I have the original term, and let’s say the terminology on my side has laterality and anatomic location, I can say, they said Barton’s fracture in SNOMED, but when I look at the semantic payload and the words that are in the original term, I’ve got the exact same thing in my database here as a term. It has a different code, but it says exactly the same thing. I can make sure that I’m not losing information in that transaction. Always try to get the original data, because otherwise you run the risk of terminological hearsay.
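That loss check can be sketched as a token comparison: whatever words appear in the original term but not in the mapped term are detail at risk of being dropped. A toy illustration, not any particular product's logic:

```python
import re

def lost_detail(original: str, mapped: str) -> set[str]:
    """Words present in the original source term but absent from the
    mapped target term, i.e. semantic payload at risk of being lost."""
    tok = lambda s: {t for t in re.split(r"[^a-z0-9']+", s.lower()) if t}
    return tok(original) - tok(mapped)

lost_detail("Barton's fracture of the left distal radius", "Barton's fracture")
# leftover tokens include "left", "distal", "radius": laterality and
# anatomic location were dropped by the broader SNOMED mapping
```

Keeping the original text alongside the standard code is what makes this comparison possible at all; with only the mapped code, the loss is invisible.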

A benefit for people who are aggregating data, as opposed to the old episodic way we dealt with healthcare, is that you get a probabilistic cloud of information about John Doe. When you get all that information, you could use machine learning or AI to help essentially reinforce things. It’s kind of like diagnosing a patient, I imagine. I’ve never done it, but you are looking at all this information and you are looking for things that corroborate or things that indicate that maybe this isn’t true. A lot of the time we just pull everything together and slam it into a list of problems and medications. We are still wrapping our heads around this whole notion of time in healthcare data. Healthcare comes from a very episodic place. We have never really sat down and looked at how we should handle longitudinal information when it comes to diseases, drugs, and labs, so that we can look for the flow of evidence that tells us what’s going on. When you start aggregating, it creates opportunities to do that.

We need to make sure that we are thinking about these problems of how we normalize information, how we look for information that’s missing, how we take information — not necessarily the big word salad output of NLP, but how we mine unstructured data — for things we really care about and make sure we’re integrating them into our information that we’re collecting for patients.

We didn’t have the idea of a data steward position in healthcare, but it will evolve as we enter the post-COVID era. We didn’t have a great handle on why and what was happening. The job of a data steward is to use software that periodically tells them “this data doesn’t look right,” so that we are constantly curating and improving the patient data, ideally involving the patient in that process, so we can have more confidence in that data.

I don’t know if people will say this out loud, but we don’t have a huge amount of confidence in our data, in part because of all that uncertainty. Most people, whether they realize it deliberately or whether it’s just an itch in the back of their brain, wonder if this data is good. Having a data steward function and mechanisms that are constantly measuring and monitoring the quality of that data can dramatically improve our ability to have data we can rely on to make better decisions.

Do you have any final thoughts?

This last year has shined a light on how important information is in what we do in healthcare. It’s not more important than taking care of patients, but we can create high-quality, actionable data as a by-product of taking care of patients. We can feed a cycle that allows the software to do a better job of helping providers, public health experts, and researchers be more effective and yield better results. I’m optimistic that we are on a trajectory to get to that place.
