
Book Review: Deep Medicine

March 27, 2019


Eric Topol is one of the highest-value accounts among the few people I follow on Twitter. He consumes information voraciously and summarizes it well without talking down to his audience. He loves technology, hates EHRs, and weighs in on the practice of medicine even though I suspect his practice isn’t very much like that of the typical doctor or even the typical cardiologist. He is quick to point out those seemingly great ideas that have had zero real-world validation in a healthcare setting. He also holds researchers accountable for proving improvement in outcomes – just making a lab value move in a seemingly good way doesn’t cut it with ET.

I have mixed feelings about this book. Topol provides an exhaustive (and sometimes exhausting) review of all the work that’s being done with artificial intelligence in healthcare. Trust me, it’s a lot. The downside is that this book was nearly obsolete the moment the first copies rolled off the presses, meaning I had better get return on investment for my $20.69 quickly.

“Deep Medicine” is a firehose of who’s doing what with AI. By nature, a lot of that work is early-stage, experimental, and unlikely to see front-line use for a long time. Most of all, we have no idea of how it will integrate with the US healthcare industry (and make no mistake, it’s an industry). We’re not really that much different than other industries no matter what we would like to believe. 

I found the book somewhat of a chore to read. It has some personal stories, a bit about the history of medicine, background about companies, and of course who’s working on healthcare AI. I didn’t find it conclusive, but then again it really can’t be so early on.

Will AI Really Make Healthcare Human Again?

The subtitle “How artificial intelligence can make healthcare human again” sounds good and probably draws readers who are less interested in the nuts and bolts of AI. But to me, the book fails to deliver a convincing argument for why Topol thinks that will actually happen.

I didn’t gain any confidence that healthcare will be even a little bit more human just because AI might save clinician time. If “making healthcare human again” was a business priority, we would have done it already, AI or not.

Topol expresses hope that doctors who are “given the gift of time” will be allowed to use that time to practice medicine the way they really want, to get personal with patients and to focus on their stories. That ignores the fact that most doctors these days are assembly line workers paid to treat ‘em and street ‘em in whatever way maximizes billing. It’s questionable whether the gift of time also offers the gift of higher income, and safe bets are always to assume that people do whatever it is that rewards them financially.

The other issue is that given Topol’s rigor in demanding that outcomes be proven, we don’t know that spending more time with patients in “deep medicine” actually improves outcomes. We don’t even know that patients want such attention. They seem happy with the urgent care model of dropping by with a problem and leaving with a prescription. AI could amplify the impersonal nature of those interactions, pushing patients to be triaged by chatbots or kiosk-based questionnaires. We don’t know whether that would make overall outcomes and quality of life issues better or worse.

I don’t recall any industry where the goal of automating the factories was to make workers happier or more self-actualized. Mostly it’s a reason to hire fewer of them or to restructure their work into something else that’s profitable. Assuming that healthcare is different is dangerously naive.

AI Hasn’t Been Tested on Humans

This is the most important reminder of the book. The AI work being done is interesting, but unproven. What works in a lab doesn’t necessarily work in an exam room. What works in analyzing heaps of data doesn’t necessarily translate well to the frailties and idiosyncrasies of humans in their time of medical need.

It’s an easy leap to become overly exuberant when reading articles claiming that AI reads images better than radiologists, that somebody’s AI system passed a medical board exam, or that IBM Watson Health is smarter than an individual clinician. None of this has been studied and proven effective in the real world. Maybe it could improve outcomes or reduce cost, but that’s just conjecture. A lot of those systems were rigged to do one thing, like Watson winning at “Jeopardy” only because it memorized Wikipedia, which is where the show’s staffers get most of the questions.

AI Is Good at Recognizing Patterns

Topol says properly trained AI can recognize patterns better than humans. Medical work that involves pattern recognition – diagnostic radiology and some aspects of dermatology and pathology – could perhaps be performed better by machines, leaving those doctors with time to perform other value-added services (if they can find them and if someone is willing to pay for them).

How Doctors Think

Topol has interesting thoughts on the Choosing Wisely initiative to get doctors to make better-informed decisions. He says it was a noble effort to get medical societies to define low-value tests and procedures, yet they are being ordered just as often even now. He gives these reasons:

  • Doctors overestimate the benefit of what they do
  • No mechanism exists to educate them
  • Compliance can’t be measured
  • No reward is offered for complying
  • Doctors think it’s OK to perform questionably useful surgeries as long as they aren’t likely to be harmful

The book says that doctors are burned out by EHRs that contain inaccurate information and don’t share information. He gives those doctors a pass in simply blaming EHR vendors rather than those who select, implement, and use EHRs, often with the specific goal of not sharing information and not being willing to correct mistakes, especially those the patient could easily identify.

I’ll be honest in saying that I don’t trust the EHR commentary offered by authors like Topol and Bob Wachter, MD. They are often impatient in demanding an easy answer, like making EHRs as easy to use as Facebook, ignoring the fact that EHRs are designed to meet the requirements of our screwed-up health system.

I do like this idea from Topol – get the patient’s consent to make an audio recording of their visit, have it transcribed, and then turn that into an office note that doctor and patient review together. Key point – auto-delete the recording in 24 hours to minimize malpractice concerns.

Topol says doctors diagnose by reacting to a few patient descriptions and use internalized rules and experience to arrive at a conclusion. Their diagnostic accuracy rate is nearly perfect if they figure it out within five minutes, but it drops to 25 percent if they have to think longer. Topol also makes this point, which seems to conflict with the theme of the book – diagnostic accuracy doesn’t improve when doctors slow down and think more deeply. Clinicians who were “completely certain” about their diagnosis were wrong 40 percent of the time, based on an autopsy’s cause of death.

The #1 reason a diagnosis results in a malpractice lawsuit is that the doctor didn’t consider the diagnosis that was eventually found to be correct. Doctors say they could improve given better chart documentation.

The challenge for doctors is that they see a small number of patients, often of specific demographic composition. Personal experience can’t stack up to analyzing large patient data sets. This is an important point. Doctors don’t consistently incorporate evidence into their practice. They also can’t see their own deficiencies.

The assumption made here is that lack of accurate diagnosis is a big problem and AI can improve it. I’m not so sure from a public health perspective that it’s the most important problem to solve, although if AI can plow through the patient’s record, the literature, and data about similar patients to improve diagnostic accuracy under the doctor’s supervision, then that’s certainly a win.

Medicine versus Self-Driving Cars

I liked Topol’s comparison of self-driving cars to medicine. The steps are:

  • Level 1 – driver assist, such as warnings to stay in the lane
  • Level 2 – partial automation, such as automatic speed and steering control
  • Level 3 – conditional automation, where the car drives itself but with human backup
  • Level 4 – high automation, where human backup is not required, but it works only in limited circumstances
  • Level 5 – full automation, where the car drives itself in all circumstances with no human involvement

Topol doesn’t expect medicine to get past Level 3. The clinician will always be personally involved to some degree.

Where AI Could Change Physician Roles

  • To initially read radiology images and classify them as normal or abnormal, which given the large number of imaging studies, would save time and allow radiologists to change their role from being “the reader of scans.”
  • To analyze surgical, cryopathology, and possibly dermatology images, where conformity across pathologists is lacking and error rates are high. The demand for “microscopists” should decrease.

In this regard, Topol suggests combining radiology and pathology into a single discipline of “information specialists” instead of “pattern recognizers.” That’s an interesting thought, although again tinkering with the lucrative incomes of doctors who are backed by politically astute societies usually doesn’t work.

The Economic Disparity Question

My overriding feeling in reading this book is that, like much of healthcare, the benefits of AI won’t be spread evenly. There’s the challenge of making sure that AI is trained on a broad set of demographics to avoid bias based on location, race, economic status, and so on, but those people are already underrepresented in the healthcare system. AI can’t fix that.

AI could also be like self-monitoring tools such as the iPhone’s arrhythmia detection. Not everyone can afford an iPhone, is motivated to use it for self-monitoring, or has a clinician on standby to respond to the hypervigilant monitoring of the economically well off. On the other hand, we don’t have the research to know if those tools have any effect on outcomes or cost anyway. They sound inherently good, but so does robotic surgery, which Topol notes has done nothing to improve key outcomes.

My Conclusions

This is a fairly interesting book, assuming you like deep literature and news searches summarized loosely into a sometimes unconvincing narrative about AI in healthcare. Topol doesn’t follow the Silicon Valley mantra that AI will eliminate jobs, but instead lays out ways it could help rather than replace clinicians. That’s a compelling but simplistic view of how our healthcare system works.

The underlying assumptions are far from certain. We’re a profit-driven healthcare system, and attempts to wrest that profit back in the form of reduced costs rarely work. We also don’t know what patients want or what really moves the outcomes needle, so just throwing AI at interesting healthcare problems isn’t necessarily a huge step forward.

There’s also the question of who’s willing to pay for all this technology, which is being developed by startups and tech giants that expect hockey stick growth and endless profits. What they want may be directly at odds with what patients want.

Also in play is whether Eric Topol the exuberant futurist can represent the average frontline clinician whose day looks a lot different than Topol’s. It’s nice that he has the time and resources to write a book about AI and paint a picture of medicine that incorporates it, but I’m not so sure his worldview is accurate for the industry, especially the business aspects of it. He’s made himself an expert in this narrow AI niche that may or may not make him the best person to assess its use. People with hammers are always looking for nails.

We already have a lot of problems to fix. We’re probably not choosing medical school classes optimally or training doctors the right way. We are certainly not compensating them for doing the right things, and a fee-for-service system encourages practicing medicine that is clinically unsound but financially desirable. We don’t really know what patients want, or how they see the role of a PCP (if at all). We have ample evidence already and much of it isn’t being used on the front lines to make clinical decisions.

In short, while judiciously applied AI might provide some modest diagnostic and efficiency gains, I remain unconvinced that it will transform a healthcare system that desperately needs transforming.




Currently there are 5 comments on this article:

  1. My Whitecoat hypertension just turned into Watson Probe hypertension.

    Excellent review, good thoughts. I often speak to my Silicon Valley friends who crow about Google/Apple/IBM/AI moving into the healthcare space as a silver bullet to all the issues. After listening to them talk about the efficiencies and accuracy that would surely follow, I remind them that healthcare isn’t just an actuary table of outcomes, these are “frail and idiosyncratic humans in their time of medical need.”

    We quickly devolve into the Star Trek-esque talk of “what makes a person special to deserve x% more attention/resources”, which is another point of AI that will need to be carefully watched. You beautifully call out that underrepresented populations could skew the AI’s machine learning as it caters to the more affluent. Imagine adding a propensity-to-pay component to its decision making…

    Overall, I agree with your conclusion, and second that the underlying FFS needs to be removed and replaced with VBC/HMOv2 before we can implement these things ethically.

  2. Thank you for the very insightful and unfeigned review of Deep Medicine. The comment that stuck out most to me was:
    “I don’t recall any industry where the goal of automating the factories was to make workers happier or more self-actualized. Mostly it’s a reason to hire fewer of them or to restructure their work into something else that’s profitable. Assuming that healthcare is different is dangerously naive.”

    That statement alone speaks volumes to the inherent conflict of motivations between payers, providers, and suppliers. I suspect that most of us do not care to be compared to historical images of workers in the industrial revolution age churning out products from raw materials, only to be replaced over time with some form of automation. But that analogy has played out over and over again, regardless of the industry, as technology advances are introduced.

    I choose to focus on the young and their desires, wants, needs, and motivations, with a healthy knowledge of my past mistakes and lessons learned to guide them. Transformation in our society begins and gains momentum with them; not with those who have allowed themselves to become comfortable, who choose to protect and not take risks, and who find no motivation to adapt or improve, as is the case for many of the experienced leaders in our industry. It is much easier to complain than it is to collaborate and solve one problem at a time in a continual state of seeking to do something even better than you did the day before.

  3. Good review; I agree with much of it. As I have posted many times on this blog, at most medicine is 50 percent science, 50 percent art. More so for cognitive medicine. So AI has a VERY long way to go.

    That being said…my prediction is around 2028 get ready for the MUHAIA (MooHi) …Meaningful Use Healthcare AI Act.

  4. “I do like this idea from Topol – get the patient’s consent to make an audio recording of their visit, have it transcribed, and then turn that into an office note that doctor and patient review together. Key point – auto-delete the recording in 24 hours to minimize malpractice concerns. ”

    In particularly challenging scenarios, use the recording to get a second opinion from a doctor from a different demographic. Maybe doctors could have randomly assigned buddies from a pool they all participate in.

    “To initially read radiology images and classify them as normal or abnormal, which given the large number of imaging studies, would save time and allow radiologists to change their role from being “the reader of scans.”

    Initially, or always for a percentage of tests, it might be a better idea to only give the AI verdict *after* the radiologist has given their opinion. You don’t want the radiologist to start being lazy/biased and lose their diagnostic chops either.
