HIStalk Interviews Micky Tripathi, National Coordinator for Health IT (Part 2)
Micky Tripathi, PhD, MPP is National Coordinator for Health Information Technology.
FTC recently warned companies and developers about using AI algorithms that are biased, intentionally or not. What government involvement do you expect, if any?
We actually just had a discussion about this yesterday within ONC, starting to talk about that, among a set of issues that are related to health equity. That is certainly a part of it.
I don’t have a great answer right now. We are just at the beginning of it, just starting to think about what the issues are and which federal agencies have involvement in this. You named a couple in FDA and FTC. I’m sure there are others that aren’t necessarily involved from a regulatory perspective, but could be involved from a use perspective. If you think about CMS using algorithms, or VA, DoD, and IHS, it certainly could be all over the place, with different federal agencies that are involved in healthcare in one way, shape, or form.
Next is the question of how we think about bias. There is certainly a piece that is related to health disparities for minoritized, marginalized, underserved communities. That’s a huge piece, one of the things that I was addressing. There are also more general questions of bias. If you think about bias from a statistician’s perspective, it is anything that would bias an inference that one is making using a set of tools. You can imagine, for example, general questions about algorithms that are trained within certain environments. What applicability do they have to other environments, and what inherent biases are involved in that? How do we measure those, or parametrize the learning foundation that a set of algorithms was developed on, and how applicable are they in other circumstances? How do you set some parameters around that to give some assurance that you are addressing as many of those sources of bias as possible, recognizing that there could be a whole bunch of others that are harder to detect?
For example, suppose we all wanted to move to a world of quality measurement that relies less on structured data elements – which impose a certain burden on providers and provider organizations to standardize that data and to supply that data – and toward a world where that can be complemented by, and perhaps eventually substituted with, a more algorithmic approach, applying natural language processing and other computable techniques. That raises the question: if an algorithm has been trained to do certain types of detection – let’s say for safety, or to do performance measurement in certain ways – in an environment like the Mayo Clinic or a large set of academic medical centers, is it applicable in other hospital settings? How would one know that it is applicable? If you are going to start paying people based on the results, we are going to have to develop a set of answers to those kinds of questions.
What is ONC’s role in reducing clinician EHR burden?
We have a clinical team that is working closely with CMS on clinician burden. We co-wrote a report that was released at the end of last year. We spend a good amount of time thinking about that with respect to everything that we do, especially as we hear about all of the concerns that people have about health information technology and burdens that have been imposed.
Part of the adoption trajectory is that no technologies are perfect, and the only way to make technologies better is for users to use them. A system designed purely by a set of software engineers, without a good base of users banging away at it and providing ongoing feedback, simply doesn’t match the systems that we think of as being the most highly usable. All of those are improved, sometimes dramatically, with the input and the feedback they get from thousands and millions of users. That is true in health IT as well.
So part of that is growing pains, and part of that is things that are imposed on the technologies from the outside. The EHR gets blamed for things that it’s really just the vehicle for, like prior authorization requirements and more documentation requirements. There’s a sense that it’s easy because it’s in the system and is automated, so I have more of it required now than I did in a paper-based world. Users sometimes blame those things on the EHR, when in fact they are being imposed through that vehicle and then pushed through that vehicle separate from the question of the burden imposed by the technology itself.
At the end of the day, it doesn’t matter what the source is. That’s why we spend a fair amount of time worrying about both the technology and its usability. What are we asking to be forced through that system, and what are we asking users to be able to do?
What will ONC’s priorities be over the next two or three years?
One is certainly coming out of the pandemic and helping the CDC and other federal partner organizations. Working a lot with the CDC on establishing the public health infrastructure of the future and how we think about that as more of a public health ecosystem. Thinking about EHR systems as being sources of information, with a variety of other sources of information, that can be brought together on demand in a more dynamic internet sort of way to be able to respond to crises as part of an ecosystem rather than being siloed systems. That’s a lot of work.
There’s a lot of investment going into these systems right now because of the pandemic, and we are working hard to ask how those investments can address the current need as well as what the future needs are going to be. We have under-invested in public health infrastructure for too long, which is partly why we are where we are, so that will certainly be a focus area.
Now that the applicability date for information blocking has passed, working with industry to iron out the wrinkles. Compliance is obviously hugely important and there are penalties and real rules, but I really want and hope and expect that we are going to be able to move beyond that to say, I’m not doing it because I have to do it — which means that people will meet the letter of it and perhaps not go further — but I’m doing it because there’s an opportunity here, a new paradigm for the way we think about healthcare. There’s a new paradigm for the way we think about engaging patients. There’s a new paradigm for the opportunities that sharing information presents back to me. Yes, I have to make more information available, but that also means that other organizations have to make more information available to me. I have the opportunity to be able to demand that more of that information be made available to me than I did in the past, and I should be thinking about that.
There are a lot of wrinkles that we have to iron out for sure. We are trying to do that with FAQs, and with something as complicated as healthcare, you put out a regulation and a million questions start coming, all of them legitimate. There’s that twist on it, and, oh, here’s a circumstance that we didn’t think thoroughly about and now we have to give an interpretation of that. There’s certainly a whole bunch of that that we need to get past, and that’s all understandable. But I want to be able to help the industry get to that next level as quickly as possible.
We are paying a lot of attention to structured data right now, which is the USCDI, the United States Core Data for Interoperability, and those elements that are required to be made available for the first 18 months through APIs. But we should also not lose sight of where the puck is headed here, and that is toward that more general construct of EHI, which is electronic health information. That is the electronic representation of the designated record set, which is in theory — I’m putting air quotes around this – “all of the patient’s data.”
We know that all is a very slippery term because there’s a lot of information contained in a hospital system, especially for a complex patient. Defining “all” could be very tricky and may not be what someone wants. But going back to the earlier part of our conversation when we were talking about algorithms, when you start to think about all of that information being made available now, it’s the information beyond what is structured. The idea is that we shouldn’t be waiting for data to be standardized and structured before we say that it should be generally available, in part because if that is rate-limiting, it’s going to take us a long time to get there.
The standards work slowly and methodically. That is saying that that information just needs to be made available in whatever form it exists, then let the users figure out what they’re going to do with it. But the obligation to make it available is preeminent. That speaks to algorithms and what we’re going to be able to do with that data. Who is going to be ahead in making sense of that data once it’s available and being able to do high-value things with that information?
I’ve been trying to talk to as many people as I can about remembering that is coming. How are you going to position yourself for that? What are the tools that you are going to bring to bear? How do we start to develop those tools and those capabilities to be able to take advantage of that?
Equity is a huge priority. Thinking about that from a design perspective, meaning all the way down at the core, so that disparities are not an afterthought or a hope for output of the system, but something that is baked more into the fundamentals of the way data is collected and the way data is aggregated and analyzed. Some of that relates to the bias questions that we were talking about before, and ultimately, what actions we want that information to be able to inform. Because there’s no data collection for the sake of data collection — data collection has got to be geared toward a specific set of decisions that you’re going to make and a specific set of actions that you want to take one way or the other. We haven’t had enough of that. We need to think about health equity and the data that we want to be able to get to help inform health equity.
The last thing is interoperability as it relates to networks. TEFCA — the Trusted Exchange Framework and Common Agreement — is a really important part of thinking about that as we enable these networks to finally be able to rationalize interoperability across the network, so that as a user, that is all relegated to the background. When I’m on my AT&T phone, I don’t think for one second about how it magically connects me to a Verizon user or an Orange user in Europe. But right now, unfortunately, providers do have to think about that. I’m hoping that we can get TEFCA to a place where it pushes all of that to the background so that we no longer need to think about it, and we have interoperability for users that just happens in the background, with no one needing to worry about the engineering piece on the front end.