Navigating the Early Days of Healthcare AI Integration
By Michael Burke
Michael Burke, MBA is founder and CEO of Copient Health of Atlanta, GA.
Have you tried using any of the AI tools that have taken the world by storm recently? This article will probably be more helpful if you have some knowledge or experience with ChatGPT, Google Bard, Anthropic Claude, or any other LLM/chat model tool.
If you haven’t already, try asking one of these tools a specific question or give it an assignment to produce a specific document and see where it leads. You may be surprised at just how useful the results can be.
If you’ve used these tools to answer questions or generate content (e.g., a legal document, a policy document, an email, or an article like this one), you have some sense of their potential. Imagine what could be done with a tool that leveraged an LLM like ChatGPT on your hospital’s data. The software vendors you use are all either investigating or actively releasing tools powered by LLMs to leverage your data. At Copient Health, we are, too.
It’s my belief that these tools will fundamentally change the way you interact with those vendor systems and, ultimately, both the way you do your work and the results you get.
A comprehensive list of use cases is impossible because we’re so early in the process, but here are a few obvious, low-hanging-fruit applications relevant to software vendors:
- LLMs are already powering chart notes that are built in real time from patient conversations.
- Dashboards and reports will become unnecessary, because you will always have the specific data or chart that you need just a query away. The LLM can even proactively push the appropriate information in the appropriate format for the appropriate context.
- You can forget about manuals, indexed help systems, and frustrating first-generation chatbots that perform poorly. LLM-powered solutions are better at finding what you’re looking for, using a similarity search over a vector database.
- You might even abandon memorizing complex commands or menu hierarchies and ask the LLM to accomplish the task instead.
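To make the vector-database idea above concrete, here is a minimal sketch of similarity search in plain Python. The document titles and vectors are invented for illustration; in a real system the vectors would come from an embedding model rather than being hand-written, and a purpose-built vector store would handle the search at scale.

```python
import math

# Toy "embeddings": hand-made 3-dimensional vectors standing in for the
# high-dimensional vectors an embedding model would produce. The titles
# are hypothetical help-system articles.
DOCS = {
    "How to reschedule a surgical case": [0.9, 0.1, 0.0],
    "Configuring user permissions":      [0.1, 0.8, 0.2],
    "Exporting utilization reports":     [0.2, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, k=1):
    """Return the k document titles most similar to the query vector."""
    ranked = sorted(DOCS.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [title for title, _ in ranked[:k]]

# A query vector close to the "reschedule" document:
print(search([0.85, 0.15, 0.05]))  # ['How to reschedule a surgical case']
```

The point is that the user's question never has to match a keyword index: the question and the documents live in the same vector space, and "closest vector" stands in for "most relevant answer."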
But ChatGPT and other public-facing LLMs were trained on public data. How can they be leveraged for use cases that require knowledge of private data?
Answering that question used to require a lot of time, money, and a team of data scientists to train your own LLM, or at least fine-tune an existing open-source model. That has changed dramatically, mostly in the last 6-8 months, thanks primarily to a technique you may have heard of, “prompt engineering,” and one you probably haven’t: “in-context learning.” Here’s a quick summary:
LLMs are text-in, text-out black boxes. But the text-in doesn’t have to be limited to a simple question. It can include prompts of background information, examples of questions and answers from similar scenarios, chunks of data, or simply a direction to the LLM to “think step-by-step.”
These are all basic forms of prompt engineering. The LLM temporarily “learns” from this prompt data, at least for the duration of your current conversation. An LLM can even be used as an inwardly-directed service that decides which data source or tool to use based on the prompts it receives. For the smaller data sets we’re talking about, this design pattern has demonstrated better results than the more cumbersome fine-tuning approach.
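The prompt-assembly step described above can be sketched in a few lines of Python. Everything here is hypothetical, including the scheduling scenario and the data, and no model call is shown; the point is simply how background, worked examples, retrieved private data, and a “think step-by-step” nudge are stitched into a single prompt at query time instead of being trained into the model.

```python
# Sketch of in-context learning via prompt assembly. All data below is
# invented for illustration; a real system would retrieve the data chunk
# from a vector database and send the finished prompt to an LLM API.

BACKGROUND = "You are an assistant for operating-room scheduling staff."

# Few-shot examples the model can imitate (a basic prompt-engineering move).
FEW_SHOT = [
    ("Which block had the lowest utilization last week?",
     "Dr. Smith's Tuesday block, at 41% utilization."),
]

# Private data injected at query time -- the model was never trained on it.
RETRIEVED_DATA = "Block utilization, week of 6/5: Smith Tue 41%, Jones Wed 78%."

def build_prompt(question):
    parts = [BACKGROUND]
    for q, a in FEW_SHOT:
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Relevant data:\n{RETRIEVED_DATA}")
    parts.append("Think step-by-step.")  # simple reasoning nudge
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

print(build_prompt("How did Dr. Jones's block perform?"))
```

The model “learns” Dr. Jones’s numbers only for this one exchange, which is why no retraining or fine-tuning is needed when the underlying data changes.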
An entire ecosystem of software tools has emerged to support the use of these pre-trained LLMs on private data. These tools convert the challenge from what was once an arcane AI data science problem to a data engineering problem, primarily built around prompt engineering and in-context learning.
Here’s an illustration of how quickly these tools have evolved and been adopted. One of the most widely used tools in the ecosystem, LangChain, was first introduced in October last year as an open-source project from two college students. In a few months, its use expanded globally. The founders incorporated and raised $20 million in venture funding from Sequoia Capital. Since last October, the project has garnered 60,000 GitHub stars, a measure of its popularity among software developers. For context, Python, the language LangChain is written in, accumulated fewer stars over a significantly longer period: 51,500 over six years. ChatGPT itself captured one million users in just five days.
This head-spinning rate of change gives startups an advantage, given their rapid iteration and integration of new tools and ideas. Some large healthcare software vendors that are infamous for relying almost exclusively on internally developed tools find themselves in a challenging situation. It’s impractical for them to build their own LLMs: they would likely never rival the performance of commercially available options, and the effort would take years. And since they are not used to relying on third-party software as part of their solutions, they aren’t prepared for the rate at which these tools are evolving.
For instance, just yesterday, LangChain had 18 separate commits (i.e., changes) to its codebase. That’s fast! Adapting to such rapid change requires a new level of agility.
We’ve recently heard announcements of partnerships between big tech and big healthcare IT. It will be interesting to see whether these announcements produce real value in the near term, or whether they are just a way for vendors to buy time while they figure out this rapidly evolving space.