HIStalk Interviews Thomas Thatapudi, CIO, AGS Health
Thomas Thatapudi, MBA, is CIO of AGS Health.
Tell me about yourself and the company.
We are primarily a revenue cycle management company. We work with pretty large enterprises, such as Mayo Clinic, Cleveland Clinic, and Baylor Scott & White Health. We offer services on the front end, which is usually scheduling and patient access-related functions; in the mid-cycle, coding; and on the back end, AR and denials. We are about 15,000 people. We are what I would call a tech-first services company.
My career over the past 20-odd years has been primarily in technology. I’ve been focused on building data-intensive apps. In the last two or three years, I’ve been pretty intrigued by AI and its applications.
What are the biggest pain points in RCM that technology may help solve?
I’ve been working on the provider side only since I joined AGS Health four years ago, but before that I saw a fair bit of insurance. When I say insurance, that’s auto, home, and health insurance. I’ve worked a lot with payers, and I’ve worked with credit card companies.
My take on providers is that healthcare has always been a laggard in adopting technology, and more recently AI, and providers even more so. Even within providers, revenue cycle management is probably at the bottom of the totem pole when it comes to the infusion of technology, or of the capital that technology requires.
RCM is primarily a labor-intensive enterprise. Because providers don’t have unlimited resources, we need another toolset, or at least part of the toolset has to be technology or AI, to address some of these issues.
For example, denial rates have only been going up in the last two to four years. The payers are making denials more complex. There is no way that providers can throw unlimited resources at it, and neither can RCM providers like AGS Health. Therefore, in each and every portion of the RCM life cycle, from when the patient has completed his or her interaction at the point of care to when the account is closed, whether it’s collected, denied, or partially collected, it is important that there is some tech infusion happening. Otherwise, some of these things will fall through the cracks, because there are only a limited number of humans that you can throw at some of these problems.
Are providers thinking about technology and AI for immediate cost reduction or revenue enhancement, are they looking at it strategically, or both?
I see a combination of both, at least in the last 12 months that I have been talking to customers. Last week I was with a chief revenue cycle officer who was progressive and wanted to get ahead of the curve in terms of AI adoption. The reason is that the CFO comes back and says, can you squeeze more dollars from this? Instead of spending 7 cents to collect a dollar, can you do it with 4.5 cents? The bottom line always has to be, can I collect the dollars faster and more economically?
Others don’t want to miss the AI boom, so they make all the right noises, but actually don’t know how to wrangle with AI. You see both ends of the spectrum here.
How do RCM and consumerism intersect from a technology standpoint?
I’ll take something very simple. A patient needs to get a scan, and the prior auth has been denied. All it requires is informing the patient that the procedure has been denied and that they need to go back to the clinician for an alternative clinical pathway. The question is, how exactly do you reach the patient to inform them?
One of our customers has 50-odd people sitting in some town in Wisconsin making these calls. But half the time, nobody’s picking up those calls, because they don’t recognize the number. You cannot even inform them that their procedure has been denied. If you leave them a voicemail or a message, it almost always triggers a call back into the contact center saying, “You left me a message. I have no clue. What am I supposed to do?”
These are patients who most probably have been waiting for that particular procedure for a long time. How do you actually reach out to the patient and make sure that their whole interaction with the healthcare system — getting the procedure done, making sure that they know how much they’re paying, making sure that their schedule is on time, and getting the right approvals from the payers — how do you make that interaction more seamless without making it burdensome? It’s a gnarly problem even now.
With mobile applications around since 2010, and with people on social media and attuned to how it works, we would have assumed that by 2025 some of these problems would have been solved more elegantly, but that doesn’t seem to be the case. This is an ongoing problem, so there are a lot more opportunities than it might seem.
How will healthcare use agentic AI? Is it too early to ask people if they are seeing results?
There’s been a lot of buzz about agentic AI, especially because of OpenAI and others. The VC-funded firms have been hyping up that word quite a bit. My own hypothesis is that it won’t solve world hunger, where all the humans disappear and there are just AI agents doing everything. But it also doesn’t mean that the world will remain what it is. There will be some changes on that front.
With payers, when there is pressure in terms of claim loss and medical loss ratios going up, the first thing that they always go after is the provider contact center. One of the largest payers that I worked for had about 12,000 people in the contact center, with 7,000 of them addressing members and 5,000 working in the provider contact center. If the claim loss ratios are going up, the first thing that the CFO does is cut the number of people handling the provider contact center, because as you can imagine, they’re not dying to answer questions like, where is my bill, or is my prior auth approved?
As I’ve talked to CTOs and CIOs on the payer side, they would like to deploy agentic AI to answer some of these provider questions. If it’s not already there, we should expect in the next 12 to 24 months that payers will start fielding agentic AI to answer these questions, if not for members, at least for the provider community.
My own interaction with AI agents has been interesting. I suffered a home claim loss. I had to call on a Saturday, because that’s when it happened. The insurance carrier’s office was closed, so they had a TPA taking that first notice of loss. It was an unpleasant interaction. It was almost as if the lady was saying, “How dare you have a claim loss on a Saturday?” I got the claim number, so the first thing that I did on Monday morning was call them back to make sure that it was logged correctly.
For the first six or seven minutes, it was a very pleasant interaction. The other person was empathetic, saying all the right words, making sure that we were doing well, blah, blah. It took me a good eight or nine minutes to figure out that I was talking to an AI agent. Lo and behold, it was a good interaction. I got my details. I knew who I had to call as my next steps. I knew what to expect.
My assumption is that as AI agents cross the uncanny valley and become completely unrecognizable as AI agents, patients and even payer contact centers might actually be comfortable talking to them. Going back to my example of calling the patient to tell them that their prior auth has been denied and they need to go back to the clinician, in my mind, there is no reason to do this using a human. We are piloting an AI agent to make these calls as we speak.
We will start scratching the surface in terms of how many of these interactions can be done by AI agents versus humans. It’s a matter of when it will happen, not whether it will happen.
Will companies treat AI agents as a feature, not something to hide, because many people would rather not talk to an actual human?
I was reading an article saying that Gen Z’ers apparently don’t like calling at all. If they know they aren’t calling a human, they will be more open to calling.
I was at AWS last year, and the CTO of Rocket Mortgage was presenting. He made an interesting observation that their mortgage conversion ratios are 3x when a person who might take a loan talks to an AI agent rather than a human. There’s more empathy and understanding.
It will be an interesting phenomenon. My own assumption is that we as humans will most probably get attuned to it. When we are booking travel or ordering food on Uber Eats, many of our interactions will most probably be with AI agents. These AI agents in healthcare may not be such a curveball to patients or members. They might actually welcome it versus talking to a human.
How do you program AI to use the human knowledge, judgment, and intuition that a good employee develops and then teach it to apply it in a human-like fashion?
I simply don’t believe that all the human interactions will disappear and it will all be AI. Work will get delivered as a combination of humans and AI, sometimes with AI work being audited by humans, and vice versa. Humans and AI are constantly interacting with each other in a seamless workflow. They are correcting each other, learning from each other, and auditing each other. They are passing work back and forth seamlessly.
We’re building a denial workflow as we speak. Right now, the way that we do it is brute force. For the denial reason that is being presented back to the payer, we’re going to use AI to draft the denial letter, and we’re going to use AI to do the doc prep that supports that denial letter. Then it goes to the doctor in Mexico, who says, I agree or disagree with it, and this is how I would audit it or edit it. That is then sent to the payer, but it is also presented back to the AI.
They are learning from each other. The human could learn from the AI: oops, I didn’t think that this was a credible reason, or I didn’t think of this combination of CPT and ICD codes. That’s a really good reason. AI and humans will constantly reinforce each other and learn from each other, and in my mind, work will get delivered as a combination of humans and AI.
If you think about autonomous coding in radiology, it could very well be that AI handles 85 or 90% of the work. But if it’s a complex denial of more than $100,000, AI could handle just 20% or 25%. The ratio could differ, but the work will always get delivered as a combination of human plus AI.
How do companies decide when to make a big AI bet, and if they are wrong, are switching costs so low that they will just take a different direction?
One of our customers told me that they need a full-time person to just monitor all the AI inquiries or propositions that they’re getting from startups. Everybody’s trying to solve for everything.
AGS Health was acquired by Blackstone just a couple of weeks ago. The whole investment hypothesis was, what do you think the scope is for AI? The way that I am approaching it within AGS Health is that we’re taking some very clear-cut bets on four to five product lines. I’m looking at denial management. I’m looking at contact centers being up for disruption. I’m looking at how we can work more denials through AI, and obviously autonomous coding.
The question is, can we limit ourselves, fence ourselves, to four to five product lines, or four to five problem statements, and double down and triple down on them and make sure that we are working through them? It’s easy to look at 20 different problems. Each of them looks amenable to AI. The burnout ratio could be high if you end up chasing 20 of them.
The way that I’ve presented it to Blackstone is that I’m picking five bets. Be ready for the fact that only three may work out and two may fail. But when the three work out, we will take a larger-than-reasonable market share. Therefore, we will be well off in the future.
It’s a little bit of change management, whether with customers or with my own investors, to tell them not to assume that every AI bet will pass the test, and to be ready for a 30% to 50% failure rate. But let’s take limited bets and see which ones pay off.
How will technology fit into the company’s strategy over the next few years?
The way that I always think about it, and the way that I talk to my own product and technology teams, is that it doesn’t actually matter how fancy the tech is. It could be the fanciest mousetrap in the world, but if it doesn’t solve the customer’s problem … can I collect the dollars faster and much more economically? Can I keep up with the denial claims ratios? Can I keep up with all the regulatory issues? Can I keep up with the payer whims and fancies? If I don’t solve for any of those, then it doesn’t actually matter.
Let’s take autonomous coding as an example. Whatever tech I put in place, if I cannot beat the offshore coder rate, then it doesn’t matter. Am I solving the customer’s problems and am I solving them at an economical rate? If I have those two questions answered every time I build a mousetrap — whether it’s tech, AI, or a combination of tech, AI, and humans — then we have a winner on our hands.
