I was invited to yet another retirement party this week. It was again for a primary care physician who is leaving medicine at an age that is decidedly less than the oft-discussed 65.
Burnout played a role in every retirement gathering I’ve attended over the last couple of years. It’s sad to see so much knowledge and experience leaving the field. More than half of these physicians would have been interested in continuing to practice part time, but it sounds like their corporate employers weren’t terribly interested in trying to make that happen. The practices they left continue to be slammed and have wait lists for new patient appointments that are several months long.
Most of my local primary care physician peers are a number of years away from being able to retire. I struggle to think of one colleague who isn’t suffering from some degree of burnout.
When asked what might help the dumpster fire that is healthcare in the US, quite a few cite artificial intelligence as the answer. Just use ChatGPT to write your prior auth letters! Or insurance appeals! Or letters for emotional support animals, educational modifications for school-aged patients, Family and Medical Leave Act documents, and more! The enthusiasm people voice about these solutions seems to be contagious, but those using them rarely understand the full risks of feeding protected health information into the various tools, or realize that they can be liable if they allow staff to use AI for patient management without reviewing 100% of the output.
With this in mind, I was excited to read a special communication in last week’s Journal of the American Medical Association titled “Will Generative Artificial Intelligence Deliver on Its Promise in Health Care?” If the title alone wasn’t enough to catch my attention, seeing Dr. Robert Wachter listed as the first author definitely helped.
Wachter and his co-author Erik Brynjolfsson note that new technologies have historically taken many years to deliver their promised benefits, and that the complexity of the healthcare environment can make incorporating them even more challenging. They go on to say that generative AI is different, though, with “unique properties that may shorten the usual lag between implementation and productivity and/or quality gains in health care.” They also note that not only are health organizations more receptive to the technology, but that many “are poised to implement the complementary innovations in culture, leadership, workforce, and workflow often needed for digital innovations to flourish.”
The latter is an interesting point, especially since I often work with organizations that struggle to implement “innovations” that are more than a decade old. These solutions may not be heavy on technology, but they are often fairly straightforward people and process adjustments that can improve patient care, reduce staff and clinician frustration, and create more efficient interactions across the healthcare system. They are often relatively inexpensive to implement, but they require the sometimes elusive stakeholder alignment to come to fruition.
Given all the buzz around AI-related solutions, I’m starting to wonder whether we can slap a label on them that says “AI-driven” and use that as a way to convince people to take some steps towards making their organizations run more efficiently.
Turning back to the JAMA article, some interesting facts jump out. First, nearly one-third of the $4.3 trillion that is spent on healthcare in the US each year adds little to no value. I’ve seen that firsthand in the urgent care trenches, where patient demand for testing and imaging studies often overshadows the physician’s judgment, particularly when an organization places a high value on patient satisfaction scores. Clinicians are trained to use a variety of clinical decision support rules to determine whether someone needs an x-ray after injuring an ankle, or whether a child needs an imaging study after falling off the bed. However, insistent patients or parents may push or escalate, resulting in thousands of dollars in healthcare spending that could have been avoided.
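For those who haven’t stood in those trenches, the logic behind one of those rules (the Ottawa Ankle Rules) is simple enough to sketch in a few lines of Python. This is an illustrative sketch only, not validated clinical software, and the function and parameter names are my own:

```python
# Illustrative sketch of the Ottawa Ankle Rules. Not validated clinical
# software; the function and parameter names are invented for this example.
# Per the published rule, an ankle x-ray series is indicated only when there
# is pain in the malleolar zone PLUS at least one of the findings below.

def ankle_xray_indicated(
    malleolar_zone_pain: bool,
    lateral_malleolus_tenderness: bool,  # bone tenderness at posterior edge or tip
    medial_malleolus_tenderness: bool,   # bone tenderness at posterior edge or tip
    unable_to_bear_weight: bool,         # both immediately and for four steps in the ED
) -> bool:
    """Return True if the Ottawa Ankle Rules suggest an ankle x-ray."""
    if not malleolar_zone_pain:
        return False
    return (
        lateral_malleolus_tenderness
        or medial_malleolus_tenderness
        or unable_to_bear_weight
    )

# A patient with malleolar pain, no bony tenderness, who can bear weight:
print(ankle_xray_indicated(True, False, False, False))  # False -- no x-ray indicated
```

The rule itself is the easy part. Getting it to hold up against an insistent patient at the point of care is another matter entirely.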
It feels like we’re rarely able to make clinical diagnoses anymore by relying on the history, the exam, and our education and training. Instead, we sometimes have to order laboratory tests to prove ourselves, even when the answer is very straightforward. One organization I worked at pushed clinicians to order unneeded medications that could even be harmful, under the guise of “patient satisfaction.” Needless to say, I frequently wound up on the wrong side of that organization’s quality reports, but at least I had my integrity.
Second, preventable harms are still a major problem in the US, with tens of thousands of deaths each year from situations that could have been mitigated. These range from simple medical errors that might be prevented with the application of basic technology (such as allergy warnings that appear when medications are prescribed) to complex errors that result from multiple failures along the way. The latter can be particularly hard to work through as a clinician, since there are often many steps where the problem could have been prevented, but the system failed regardless. Electronic health records were initially seen as a solution to these difficult situations, but some days it feels like they have created two new problems for every one that they solved.
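The allergy warning example is worth pausing on, because the underlying check is conceptually simple. A minimal sketch in Python, using hypothetical data structures rather than any particular EHR’s API, might look like this:

```python
# Minimal sketch of an allergy check at prescribing time. The allergy list
# and matching logic are hypothetical and deliberately simplified; a real
# system would match ingredients and drug classes against a drug knowledge
# base, not exact names.

def allergy_warnings(patient_allergies: set[str], prescribed_drug: str) -> list[str]:
    """Return alert messages if the prescribed drug matches a documented allergy."""
    documented = {allergy.lower() for allergy in patient_allergies}
    if prescribed_drug.lower() in documented:
        return [f"ALERT: patient has a documented allergy to {prescribed_drug}."]
    return []

# Example: prescribing penicillin to a penicillin-allergic patient
print(allergy_warnings({"Penicillin", "Sulfa"}, "penicillin"))
```

Even a check this basic catches the simple errors. The complex, multi-step failures are the ones that no single alert can prevent.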
The article describes in depth “the productivity paradox of information technology,” in which technologies fail to deliver their expected value. One main reason is the flawed nature of many early versions of technologies and the need for multiple iterations before a successful tool is achieved. The second reason, which the authors view as more important, revolves around “the processes, structure, and culture of the workplace.” I felt validated when reading that sentence, since I’ve lived it so often while trying to help organizations with their clinical transformation initiatives. The authors note that multiple complementary innovations are often needed to overcome the productivity paradox. It’s another way of saying that no silver bullet exists for solving a difficult problem.
They go on to explain some of the “particular challenges” of implementing technology in healthcare. These are the factors that so many companies fail to understand as they promise to fix healthcare or revolutionize the patient experience. These challenges include the highly regulated nature of healthcare, differing opinions on data ownership, the need to protect patient privacy, and the fact that all these factors at times interfere with each other.
They go on to list other challenges, including the fact that the EHR market is highly concentrated, with only a few major players left. In contrast, other parts of healthcare have a plethora of players, including clinicians, care delivery organizations, payers, employers, pharma, device vendors, government, and more. As such, new technologies are most likely to progress when they can deliver improvements for multiple stakeholders rather than for just one subset of players in the industry.
They also note that healthcare data can be messy depending on its source (billing, clinical documentation, compliance) and that healthcare itself is constantly evolving through research and changes in practice. As such, AI tools that are built on historical patient data may not be applicable in the present and in fact might be dangerous.
Last, they note that healthcare is high stakes, with the very real impact on patients making it potentially harmful to take the “fail fast and iterate” approach that is common in other technology environments. We’ve all seen innovations that harm patients, whether it’s an inadequately studied drug, a faulty medical device, or an improperly implemented clinical decision support tool.
Despite the fact that previous AI technologies haven’t delivered (IBM Watson, anyone?), the authors see several factors that may lead to improved solutions this time around. They cite the relative ease of use of generative AI as a positive, along with the fact that the technology can be delivered to users through devices they’re already using. The ability to interact with new solutions via application programming interfaces (APIs) is also a plus, as is the speed at which the generative AI solutions themselves are evolving.
The authors believe that healthcare leaders are better prepared to consider workflow redesign than their predecessors, in part due to the presence of clinical informaticists (yay!) and those with experience in user-centered design. They feel that leaders have learned from past failures as well. They mention the irony that many of the problems created by prior digital innovations, such as documentation burden and the EHR inbox, may be addressed by new generative AI-powered tools, which would be a lovely thing for all of us.
It will be interesting to revisit the premises of this article after we’re six months or a year down the road. Maybe by the time generative AI reaches its second birthday, we’ll be living in a world of smoother patient care, streamlined communications, and improved clinical quality, all thanks to the wonders of artificial intelligence. It’s more likely, though, that major improvements will still take years, but at least that will be faster than the decades of inertia we’ve all been living in.
The authors call on AI developers to address elements such as bias, safety, cost, and hallucinations. They note that regulators need to develop standards that promote innovation as well as safety. They state that most important is for healthcare leaders to “prioritize the areas where genAI can create the greatest benefits for their organizations, paying close attention to those complementary innovations that remain necessary and striving to mitigate the known problems with genAI and any unanticipated consequences that emerge.”
What do you think about the role of generative AI in coming years? Are we on the cusp of greatness, or heading down the road to ruin? Leave a comment or email me.
Email Dr. Jayne.