Don Woodlock is head of global healthcare solutions at InterSystems.
Tell me about yourself and the company.
I’ve been in our industry for my whole career – 14 years at IDX, 15 years at GE Healthcare, and eight years here at InterSystems, where I run our healthcare solutions. These are the applications that we sell to the healthcare industry, which includes an EMR that we sell outside the US and an interoperability data product line called HealthShare that we sell around the world.
Otherwise, InterSystems is also a data platform company. Most famously, Epic is built on our technology, but about 1,000 other systems have been built on our technology as well. We’ve been in the industry a long time, specializing in data and interoperability, with special skills in healthcare.
How central is healthcare to the overall mission of the company, which has $1 billion in annual revenue and 2,000 employees?
We are primarily in healthcare, where we have a deep focus and experience base. It has been our longest business. Our technology is applicable to a few other industries, so we have built up financial services and supply chain product lines and teams, but our heart remains in healthcare.
We have been foundational, so not everybody necessarily knows our name. We are behind other vendors as the interoperability or data platform for many customers. We’re not always out there with the front-end face of healthcare software, but we’re certainly in there doing the heavy lifting on the performance and scalability of the healthcare data side.
What are the latest developments in the company’s technology?
We are not unusual in that we’ve been focused a lot on how Gen AI can make a big difference to our products and to our customers’ workflow. Each of our products has had exciting innovations, with new features and modules that are enabled by Gen AI. We are certainly in that AI era, a good AI era, and we’re a couple of years into it. We have a lot more years of innovation ahead, and hopefully of making healthcare a lot better with this technology.
What parts of healthcare do you expect to see profoundly affected by AI, especially in quality, cost, and access?
For at least the first few years, we’re focused on the user experience and on making technology like ours, such as the EMR, a whole lot more fun, delightful, natural, and human to use. We moved to graphical user interfaces 20 or 25 years ago. We thought that was a big shift, but the software is still hard to use. It’s still a lot of pointing and clicking, dropdowns, and tabs. It’s not a very natural experience.
With GenAI and approaches like ambient or natural chat user experiences, we will be able to create software that’s a whole lot easier to use, to get information out of, and have it pay attention to our instructions and do useful things for us. Historically, software has been kind of dumb. It follows the instructions of the user, stores the information that I type in, and then shows it back to me a few days later when I ask for it again. AI can allow us to build a lot smarter software that will be more helpful to us as users and hopefully will help transform the industry.
We first focus on the user experience. Down the road, we’ll start to move into other areas around clinical decision-making, workflow optimization, and a better patient experience. There are a lot of places we could go with this technology.
You just announced IntelliCare, a next-generation, AI-centered EHR that is available only outside the US. What were the lessons learned in developing it?
We took a bet that worked out, which is that AI should be natively built in the EHR versus just a partnership with somebody else. That’s really working out. It’s enabling us to do closer integration of the technology into the workflows of the user instead of having it be an arm’s length relationship. That’s been good.
It takes more R&D to do that. We’ve had seven teams working on this across our EHR development teams. With enterprise EHRs versus best-of-breed departmental systems, enterprise has won out as the right strategy. I don’t think that AI is that different. You want to embed it into that enterprise feel versus having it be a best-of-breed type system. We made that decision early on that we would do this natively. That cost us more, but I think it’s going to pay off, and it is already paying off with some of our early adopter sites.
Another lesson learned with AI is that it’s important to work closely with our customers. There are a lot of trust issues with AI. There are a lot of education issues in terms of how these systems work, how we test and validate them, and how we get comfortable with the way our data is handled by a cloud AI provider.
There’s a lot of new ground on the InterSystems side, but also on the customer side in terms of governance, legal, safety, and a comfort level with AI overall. We’ve had to spend more time than I would have guessed on the customer side, educating them and getting them comfortable with what we’re doing. Maybe part of my education push on AI was observing how much the market needed to learn about AI in order to adopt it well. We’ve just encountered a lot of that with our early sites.
How does traditional software development, maintenance, and support change when you add an AI component?
The good news is that all of these large language model vendors basically use the same APIs and the same way to call them. There’s not a lot of technical investment in one vendor’s path that isn’t useful for another’s if they continue to leapfrog each other and things change.
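The shared API shape he is describing can be sketched as follows. Many LLM providers expose OpenAI-compatible "chat completions" endpoints, so switching vendors often amounts to changing a base URL and model name. This is a hypothetical, dependency-free illustration that only builds the request payload (the URL and model name are made up; no network call is made):

```python
import json

def build_chat_request(base_url: str, model: str, user_text: str) -> tuple[str, bytes]:
    """Build an OpenAI-style chat completions request.

    The payload shape (model + messages list of role/content dicts)
    is the common denominator across many vendors' compatible APIs.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }
    return f"{base_url}/chat/completions", json.dumps(payload).encode("utf-8")

# Swapping providers changes only the first two arguments:
url, body = build_chat_request(
    "https://api.example.com/v1",   # hypothetical base URL
    "some-model",                   # hypothetical model name
    "Has this patient been seen for hypertension before?",
)
print(url)
```

Because the request shape is the same, the provider-specific details can live in configuration rather than code.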
What is really tricky is the testing and validation process, because when you are dealing with generative AI and you ask the same question multiple times, you’re not going to get the same answer back. There’s a non-deterministic aspect to the way large language models work, even on the inbound side. If somebody’s asking a question about a patient chart, or whether the patient has been seen for this condition before, there might be multiple ways that that clinician might ask essentially the same question. There are non-deterministic aspects on the user side and then certainly non-deterministic aspects on the answer side.
We had to invent our automated testing process and our validation process from scratch. That is much different than our traditional process, where we want them to fill out these four dropdowns and get the answer “32” in the end. For this non-deterministic process, we’ve had to build up a completely different automated testing infrastructure and validation infrastructure. We have a lot more human validation with real physicians and nurses in the process. Testing, measuring accuracy, and then maintaining that accuracy as the model providers come up with new versions is a whole different design and architecture that we needed to build around this.
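The non-deterministic testing problem he describes can be sketched in miniature: instead of asserting an exact string like “32,” a test scores each sampled answer against a reference and requires a minimum similarity across runs. This is a hypothetical sketch, not InterSystems’ actual pipeline; the reference answer is invented, and real setups would use embedding similarity or an LLM-as-judge scorer rather than the word-overlap stand-in used here to stay dependency-free:

```python
import re

# Hypothetical clinician-approved reference answer.
REFERENCE = "The patient was last seen for hypertension in March 2023."

def tokens(text: str) -> set[str]:
    # Normalize to lowercase word/number tokens, ignoring punctuation.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def similarity(answer: str, reference: str = REFERENCE) -> float:
    # Jaccard overlap of token sets: a crude stand-in for semantic similarity.
    a, r = tokens(answer), tokens(reference)
    return len(a & r) / len(a | r) if (a | r) else 1.0

def passes(samples: list[str], threshold: float = 0.8) -> bool:
    # Every sampled answer must stay close to the reference,
    # even though the exact wording varies run to run.
    return all(similarity(s) >= threshold for s in samples)

# Two differently worded answers from repeated runs of the same prompt.
samples = [
    "The patient was last seen for hypertension in March 2023.",
    "Last seen for hypertension in March 2023, the patient...",
]
print(passes(samples))  # → True
```

The key design shift is from exact-match assertions to thresholded scoring over many samples, which also makes it possible to track accuracy drift when model providers ship new versions.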
How are you using AI tools personally?
We provide our employees with OpenAI licenses with an enterprise agreement, where they can use it for company confidential stuff. We’re enabling our employees, myself included, to use and take advantage of the technology.
For me, I use it most for coding side projects. I do a lot of AI side projects just to keep current with the technology. These large language models are excellent at writing code, answering technology questions, debugging, and stuff like that. It’s remarkable how well these technologies work as maybe junior programmers or code developers along with you.
One way to view these AI technologies, at least for the next couple of years, is as giving every employee a co-marketer, co-developer, or co-implementation person who can help them be better at their job, be more productive, debug problems faster, and that kind of thing.
The industry could use basic AI education to navigate the opportunities and risks of AI effectively. I’ve always enjoyed teaching, so I am doing five- or 10-minute videos called “Code to Care” to explain AI concepts. I always have enough content because buzzwords are being thrown around that people don’t understand or that vendors overuse. I am enjoying putting together that AI education. It’s important. HIMSS, HLTH, and ViVE have a lot of sessions where the educators don’t get into enough depth, or maybe don’t know the material in enough depth, to help you understand some of the newer topics and approaches.
I don’t know if it’s to the company’s benefit or not, but I certainly enjoy doing it. I enjoy hearing from people across the industry who have known me over the years who like my video content. It’s important that we navigate this AI wave effectively.
What has been the impact of moving to the cloud?
I’m finding that our customers are struggling with anything on-prem these days. Maintaining a data center and keeping hardware and storage current, updated, and patched for security vulnerabilities is a growing challenge. More and more of our existing customers are asking us to host their platforms or offer the same functionality as some kind of service or equivalent.
For our net new business, we do almost everything as a service. People within health systems and payers don’t want to be doing this anymore. It just doesn’t make sense economically. It’s the predominant model that we use to make software and technology available to customers. We do the heavy lifting, such as maintaining the staff and buying hardware if we’re doing it ourselves, or procuring it from one of our cloud partners. The industry is just about done with on-prem software and is instead relying on its software vendors to manage it as a hosted or software-as-a-service platform.
Is interoperability a solved problem?
[Laughs]. No, no, no, no. I definitely think that the ball has moved, which is great. When I started in interoperability, the use case was a provider seeing a patient, let’s say in the ED, and wanting to know what happened with this patient outside of my health system. That is getting solved. National networks like CommonWell or vendor networks like Epic’s Care Everywhere have done a fabulous job with that use case, and the ball has moved.
But we’re trying to do new things. We are working hard on the payer-provider interaction, like electronic prior authorization, clinical data exchange, payer data exchange, and patient and member access to their information. Those are new exciting use cases that we’re working on as an industry.
The industry still struggles. With our technology and services, we are in the middle of mapping data from one format and making it consumable and useful in another. So it’s definitely not a solved problem. We are seeing great growth of FHIR as an approach and a set of standards, and that is helping with all of these new use cases.
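The format-mapping work he mentions can be illustrated with a small sketch: flattening a FHIR R4 Patient resource into a simpler internal record. The FHIR field paths (`identifier`, `name`, `birthDate`) follow the R4 Patient structure, but the sample values and the target keys (`mrn`, `full_name`, `dob`) are made-up internal names, not any real product’s schema:

```python
# A minimal FHIR R4 Patient resource with invented sample data.
fhir_patient = {
    "resourceType": "Patient",
    "identifier": [{"system": "urn:example:mrn", "value": "12345"}],
    "name": [{"family": "Smith", "given": ["Jane", "Q"]}],
    "birthDate": "1980-04-02",
}

def to_internal(resource: dict) -> dict:
    """Map a FHIR Patient into a flat internal record.

    FHIR allows repeating elements (multiple names, identifiers);
    this sketch just takes the first of each for simplicity.
    """
    name = resource.get("name", [{}])[0]
    ident = resource.get("identifier", [{}])[0]
    return {
        "mrn": ident.get("value"),
        "full_name": " ".join(name.get("given", []) + [name.get("family", "")]).strip(),
        "dob": resource.get("birthDate"),
    }

print(to_internal(fhir_patient))
```

Real-world mapping is harder in the other direction, where source data arrives in older or proprietary formats and must be lifted into FHIR, but the core task is the same: reshaping one structure into another while keeping the meaning intact.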
Things are getting better. We’re moving on to slightly more advanced problems from an interoperability point of view, but it’s certainly not a solved problem at all.
What near-term trends will influence the industry and the company?
InterSystems has been around for 47 years. We have a slide that we talk about, which covers the advent of microcomputing, PCs, the Internet, cloud computing, and now Gen AI. Each of these is maybe a 10-year-long transformation that has allowed us to do great new things. All of them significantly advanced the impact that computers and software have had in healthcare. Gen AI is going to be either no different from, or even better than, some of those prior transformations. That’s a terrific trend.
I also think that cooperation among payers, providers, public health, Medicare plans, and others within a community is getting stronger. It will make it easier as a patient and as a caregiver for your family to navigate the healthcare system. I hope that technology, interoperability and cooperation across communities will continue to improve. I certainly see it improving with customers that we work with.