Michael Abramoff, MD, PhD is president, founder, and director of IDx of Coralville, IA and professor of ophthalmology, electrical engineering, computer engineering, and biomedical engineering at University of Iowa Hospitals and Clinics.
Tell me about yourself and the company.
I’m an ophthalmologist specializing in retinal surgery. I also studied computer science, did a master’s, and then did a PhD in image analysis years ago. I worked for years in France in the software industry. I worked on neural networks 30 years ago. I’ve been trying to combine IT and medicine for the last 25 years. People have always said it’s a great combination, but it turns out that it’s pretty hard to do. Right now, I’m excited because we are very successful and it’s going somewhere.
The company was founded in 2010. I had been working on algorithms to diagnose disease before then. As you can hear from my accent, I came from Amsterdam in the Netherlands to Iowa 15 years ago now. I had been doing research on these AI algorithms and was getting good results. By the time I founded IDx, I realized that productivity and loss of productivity in healthcare is key if we want to do something about the cost of healthcare.
If you want to make physicians more productive, AI needs to be autonomous, meaning it makes a clinical decision or a therapeutic decision by itself rather than assisting a clinician. If it only assists, you don't really do anything about physician productivity. That's the key.
Since then, we have been working on a number of products, but primarily on diabetic retinopathy, mostly because it’s the most important cause of blindness. It’s very obvious. We know exactly what to do with these patients if we catch them early. But they are not caught early. The patients are in primary care, but historically they needed to be referred to an ophthalmologist like me, an optometrist, or a retinal specialist to examine the retina for signs of disease. Then you can still prevent vision loss and blindness. But that’s not happening.
It’s the lower-hanging fruit: a well-defined task in analyzing these images, and a well-defined task in terms of what happens to the patients, what the diagnosis should be, and where it should happen. You take the diagnostic capability that is in me, as a retinal specialist, into primary care, where I’m clearly not. That’s what we set out to do with the clinical trial of the product.
It took seven years of conversations with the FDA to make sure they’re comfortable about how to validate autonomous AI, which makes a clinical decision without physician oversight. Make sure it’s safe — that’s primary. Make sure it’s efficient. That’s what we did with the clinical trial that led to approval last month.
Who pays for your product and who bills for the testing?
It’s moving a specialist’s high-quality diagnosis into primary care, so primary care is billing for it and we get a part of that.
Many companies are suddenly proclaiming that their product uses AI. How would you evaluate their claims?
Artificial intelligence is the frontier of what we do with computer algorithms. Even databases and SQL were called AI 30 years ago. That term is shifting. Right now, it means analyzing clinical data to help make a decision or to actually make a decision.
Instead of saying AI, I’d rather say “autonomous AI.” You have something called “assistive AI,” which is using computer algorithms to assist the physician or specialist who is making a clinical decision or therapeutic decision, or even helping them do surgery. Autonomous AI makes the decision instead of the physician doing it.
It’s a more interesting distinction to say autonomous versus assistive rather than saying, “This is AI and this is not,” because that’s very much a gray zone right now. Like I said, historically, many things have been called AI that no one in their right mind would call AI as of today. I bet you that five years or 10 years from now, people will look at things like we’re doing and say, “That’s not AI. That’s not the leading edge.” Whatever we’re doing then — we’re thinking about therapeutic applications — will be the leading edge, and that will be called AI then.
But the autonomous versus assistive distinction is very important. You see the same with self-driving cars. Assistive means it parks for you and it has lane protection, but it doesn’t drive for you. An autonomous car drives for you. Similarly, there’s a difference between autonomous and assistive AI in diagnostics in healthcare.
You have pipeline projects for analyzing blood vessels to predict MI, stroke, and other cardiovascular issues. How could that change healthcare?
First, about that pipeline. We have a number of products right now. We’re furthest along on a glaucoma early detection product that will probably go into clinical trials later this year. Like you said, there are a number of other products, including some outside of the eye, like for the skin or the ear. We’re working on “the AV product,” as we call it, which relates to analysis of the arteries and the veins in the retina. It essentially tells you how the arteries and veins in the brain look. The retina is part of the brain. It’s just easier to look at it than to get a scan or angiography of the brain. It tells you about the micro-circulation in the brain.
We know from many studies done by many other groups — including my group as a research project — that it tells you about the risk of getting a stroke or other cardiovascular events. It is not a certainty. It is not a diagnosis. It just tells you about the risk. We see this product as a risk analysis, like when the patient comes into primary care and blood pressure is measured. That’s just a risk factor. High blood pressure is a risk factor, and so are abnormal retinal arteries and veins. It tells the provider that there’s something really wrong with the vessels in the eye and therefore in the brain, and therefore this patient should be analyzed further.
That is how we see that product developing. But right now, it’s not a product. We’re not ready to put it into a clinical trial yet, unlike glaucoma and some other products that we’re very near to, hopefully, getting FDA approval for soon.
Google is doing similar work in analyzing the eye to detect broad risk factors. Are many groups using AI in this way?
Google did very good research of the kind that other groups, including my group, have been doing for years: looking at retinal images and seeing what associations with other diseases you can find. They’re able to do it on a large scale.
It’s very exciting, but I want to stress that scientific research involves looking for associations that we didn’t know existed. The big step is going from having an interesting association — between something I can measure and something that is happening to the patient — to actually making a diagnostic or therapeutic decision from that. It’s a very different environment. It needs to be safe. You need to be absolutely sure you can explain how it works and why it works. The FDA has a big say in that. So you move from scientific projects, which is really exciting (I’m a physician-scientist myself with a big research group), to making a product out of it and putting it through a clinical trial.
What is the potential of using AI in the overall spectrum of image analysis and how might it fit into the workflow of a physician?
I’m an immigrant, so I can say that US healthcare is in many cases the best in the world. But it’s extremely expensive. The challenge is making it more affordable.
That’s why I think that autonomous AI is so very, very important. With assistive AI, you can make a physician better, a specialist better. That’s not always the case — you need very good studies to figure out whether it’s true — but at least you have the potential to make them better. But it’s at least as important to also make healthcare more affordable. That’s where autonomous AI comes in, and for the near future, at least, that’s definitely where you’ll see more applications.
There are many things right now that AI cannot do and should not be doing. That may change in the future. With an IT background, you know that the more well-defined the requirements are, the easier it is to automate. The more ill-defined and vaguely defined they are, the harder it is to automate. But there are many things that we have protocols for, very good standards for, and physicians know pretty well why they’re doing what they’re doing. There’s a lot of research at the basis of that. Those are the fields where you’ll first see additional autonomous AI. Both in the retina and in other organ systems, you will see the use of autonomous AI for therapeutic decisions.
For robotic surgery, many groups and companies are doing assistive AI surgery, but autonomous surgery is a little bit farther away. You’ll see this incremental autonomous AI developing, just like with self-driving cars: you’ll see the steps being made now that may lead, sooner or later, to fully self-driving cars.
It’s so crucial that autonomous AI is happening. There is a role for assistive AI to help clinicians like me make better diagnoses, but I see the field going to autonomous AI. I also see the biggest return on investment going there.
Are you getting a lot of interest from investors, potential acquirers, or partners, given that your only funding round was several years ago?
It’s so much we can hardly keep up. From big names to smaller funds, growth equity funds, VCs, investment banks. Big names that you would recognize, but that I don’t want to disclose here. We’re looking at doing a round this year, and we have been thinking and talking about an initial public offering. We are prepared for that. The question is, when is the timing right? We’re still mulling it over and seeing when exactly it would happen. But there are definitely several opportunities for investment in the near future.
Where do you see the company going in the next several years?
The main thing now is rollout. Getting this into every primary care clinic and every retail clinic in the country is what we’re focusing on right now. We have this product. We have this FDA approval. Now we need to show that it actually benefits patients. We need to reach the maximum number of patients. That’s why I did this. I want to make it better for people with diabetes. That’s what we’re finally able to do now, because the FDA said, this is safe. This is a responsible use of AI. Let’s do it.
Once you are in the primary care clinics, it’s relatively easy — I’m not saying it’s really easy, but relatively easy — to put a different AI product on top of that. It’s attractive, once you have that imaging platform, to build additional diagnostics on top of it without any additional effort for either the clinic or the patient. That’s what you will see coming out of us in the next few years: mostly presence everywhere and additional products, first in the eye, like glaucoma, and then later also in other organ systems.
It’s going to be a very exciting time for the next few years. We’re the first. We intend to stay ahead. There are big, very big names following us. That’s exciting and daunting. But we are a very good team and a very good company. I think we’ll be successful.