
HIStalk Interviews David L. Meyers, MD, Emergency Physician Leader

August 28, 2019 Interviews

David L. Meyers, MD is retired from a long career in clinical medicine. He continues to consult, serves as a board member of the Society to Improve Diagnosis in Medicine, and is pursuing a master’s degree in bioethics at the Johns Hopkins Bloomberg School of Public Health.


Tell me about yourself.

I’m an emergency physician. I trained at Cook County Hospital in internal medicine, before there was a board exam. Emergency medicine was emerging as a specialty. I stayed in Chicago and went right into emergency medicine practice instead of doing internal medicine. I dabbled a little bit in internal medicine at Northwestern and did some research, but basically I’ve been an ER doc all my life.

I ran an ER in Chicago for about 20 years and then came to Baltimore to run an ER here at Sinai Hospital. After a few years, we brought in EmCare, a private medical management company, to staff the place and hire the docs. I went to work for them and did a bunch of executive-type things over the next 10 years, including running a malpractice insurance company operation, handling their risk management and claims management. It was a publicly traded company at the time and still is.

I continued to practice clinically once a week, commuting to Dallas for five years and coming back here after my Friday night in the ER so I could keep my hands in the nitty gritty of what’s really going on in the field. 

I retired a few years ago and decided I wanted to pursue medical ethics in more depth. I had been on ethics committees all my clinical career and found it really interesting and challenging with what is going on in healthcare. I’m not sure what I’m going to do with it. I have some ideas about the discrepancy between business ethics and bioethics. There may be some opportunity to blend those kinds of things to have a more humane and better healthcare system.

How extensive is misdiagnosis and how do you assess the market for artificial intelligence to improve it?

Huge and huge. Misdiagnosis or diagnostic errors make up at least 50% of all harm-related medical errors. Most of the reliable information is based on claims data from medical malpractice, which is not a great marker for total number of diagnostic errors. But the ones that people are really concerned about are those that cause harm – significant disability, loss of limb, loss of the ability to work, and even death. Diagnostic errors are the most frequent cause of those high-harm results.

A recent study by Hopkins’s David Newman-Toker and his associates looked at what turned out to be the Big Three conditions. They went to a big insurance database called CRICO, which insures about 400 hospitals and healthcare systems around the country, including Harvard and Hopkins and a bunch of other very prestigious academic medical centers. They looked at the claims data from this database to identify those conditions that were most often associated with high harm, that is, these disabilities and death. The categories turned out to be infections, of which sepsis, certain paraspinal abscesses, and four or five other things were very prominent; vascular conditions, mostly around strokes and heart attacks and similar kinds of conditions; and cancer. They called these the Big Three that are responsible for most of the significant harm-related categories.

This study is one of the best to flesh out how big of a problem this is. The total number of serious harm-related incidents ranges from 40,000 up to 1 million, depending on how the analysis is done and what the source database is. It comes down to this: a diagnostic error is associated with 5-7% of all patient encounters. There are hundreds of millions of diagnostic encounters every year, so you’re talking about a large number of errors and a correspondingly large number of serious errors resulting in harm.

Is that misdiagnosis or failure to diagnose?

It’s a combination. It uses a definition of diagnostic error that came out of the Institute of Medicine, now called the National Academy of Medicine, which published a big monograph study on diagnostic errors in 2015. Their “To Err is Human” in 1999 said that the biggest problem was medication errors. That created the illusion of what was significant. While there were lots and lots of medication errors, they weren’t so much the cause of significant, harmful outcomes. Only in the last five or six years, after this study was published, has there been an acknowledgement that the biggest harm-related cause was on the diagnostic side of things.

Is medical imaging analysis the most potentially useful deployment of AI in the care setting?

It is possible for an intelligent machine to look at millions and even billions of images in a very short period of time and then learn, through these neural networks and other mechanisms, how to recognize what’s a man, what’s a woman, what’s a cat. Companies have produced X-ray assistive artificial intelligence devices that can look at millions of images and be more accurate than radiologists. Sinai just got one of these artificial intelligence image analysis tools, Aidoc, for looking at brain scans for hemorrhages. The studies show that it performs better than a panel of radiologists.
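To make the idea concrete, here is a minimal sketch of the kind of convolutional neural network such image tools are built on. It is not Aidoc’s architecture or any vendor’s actual model; the layer sizes and the assumed 256x256 single-channel CT slice are chosen arbitrarily for illustration.

```python
# Illustrative sketch only -- not Aidoc's or any vendor's actual model.
# Maps a single-channel CT slice (assumed 256x256) to hemorrhage / no-hemorrhage logits.
import torch
import torch.nn as nn

class HemorrhageClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Two 2x poolings turn a 256x256 slice into 32 feature maps of 64x64.
        self.classifier = nn.Linear(32 * 64 * 64, 2)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: score a batch containing one random "slice."
model = HemorrhageClassifier()
logits = model(torch.randn(1, 1, 256, 256))  # [no hemorrhage, hemorrhage]
```

In practice such a model would be trained on a large set of labeled scans, which is the “look at millions of images and learn” step described above.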

That’s not just in radiology, but in dermatology and other kinds of image recognition things. That’s where the first successes have been shown to be pretty good and where the greatest potential is right now. Then it could be expanded to other areas where the appearance of something tells you what’s going on, such as diagnosing depression by looking at facial images.

In the study of diagnosis, most errors occurred in the realm of cognition and cognitive errors — not considering a condition as the cause of the symptoms, not ordering the appropriate tests, or making decisions along the way that weren’t so obviously putting together a whole lot of data and saying, here’s the diagnosis.

At some point, I suppose we’ll have a Tricorder where we just put a bunch of information in and pass the patient through a CT scan type thing and it will come out with the diagnosis. But that is pretty far in the future. The thing now is, how are we going to help doctors be smarter cognitive players in the diagnostic process and assist them? 

Consider prompts and reminders. Can Epic, Cerner, or some of these other EHRs develop ways that the electronic record can say, “This is a middle-aged male with back pain who’s got hypertension and had pain radiating to his leg”? Then set up a tool that says, “This could be a patient with a significant risk, maybe 5% or more, of a leaking aortic aneurysm.” Put that prompt on the screen to the doc to say, “Have you considered an AAA rupture or leakage in this patient?”
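Behind the scenes, such a prompt could be nothing more than a rule over structured chart data. Here is a minimal, hypothetical sketch; the field names, thresholds, and the AAA pattern itself are invented for this example and do not come from Epic, Cerner, or any validated decision rule.

```python
# Hypothetical EHR diagnostic prompt -- not Epic's or Cerner's actual rules engine.
# All field names and thresholds below are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Encounter:
    age: int
    sex: str                    # "M" or "F"
    chief_complaint: str
    has_hypertension: bool
    pain_radiates_to_leg: bool

def aaa_prompt(enc: Encounter) -> Optional[str]:
    """Return a reminder when the chart matches a pattern that could suggest
    a leaking abdominal aortic aneurysm (AAA)."""
    if (
        enc.age >= 50
        and enc.sex == "M"
        and "back pain" in enc.chief_complaint.lower()
        and enc.has_hypertension
        and enc.pain_radiates_to_leg
    ):
        return "Have you considered AAA rupture or leakage in this patient?"
    return None

# Example: the middle-aged hypertensive man with back pain described above.
print(aaa_prompt(Encounter(58, "M", "Back pain", True, True)))
```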

We’re not there yet. They’re apparently not able to do that, although it seems that the technology is there. There’s a diagnosis tool called Isabel. It’s free on the Internet. You put in your symptoms and it will generate a differential diagnosis list, the things that ought to be considered as possible causes of the symptoms you’re having. 

The potential is there, but so far it hasn’t really been adequately exploited. Most of the effort seems to be looking at these deep learning things, where neural networks are used to teach machines how to recognize a mass on an x-ray or depression in a face or something like that.

Some of that is available now in the form of evidence-based clinical decision support, but doctors don’t always embrace it. What dynamic will need to be overcome to get doctors to see AI as a partner rather than a threat?

There’s still a lot of resistance. Physicians may be skeptical about how big of a problem diagnostic errors are. A lot of studies have shown that doctors are confident about their diagnoses even when they’re wrong. There’s this attitude that, “Maybe there’s a big problem, but I am not one of those problematic people. I’m above average.” Everybody thinks they’re above average in their diagnostic capabilities. The literature is telling us that it ain’t so, but getting doctors to believe it is another whole thing.

Then there’s the cost of all these AI-type things. EHRs themselves, as bad as they are, are a huge expense for hospitals. They’re already struggling to make them cost-effective. Adding additional bells and whistles that the doctors may not even accept is a risky kind of proposition.

What about the ethical issues of AI in healthcare that have received widespread coverage lately?

Artificial intelligence tools are created by humans who have their own biases. There is recognition that those biases can be built into the tools of artificial intelligence. They aren’t yet totally objective. Health equity issues that plague humans and our biases may be built into those systems. Not consciously, but because it comes from human creation, it’s automatically saddled with human biases, even though they can be minimized. We haven’t figured out how to eliminate them yet.

What technologies hold the most promise for improving outcomes or cost?

In the long run, artificial intelligence is probably the key to better care and lower costs. But with regard to timeframe, I’m skeptical about whether we’ll be doing this on earth or doing it on Mars. It will be decades in the making for this to come to a point where it’s having such an impact, although imaging analysis has a very reasonable timeframe in the near future to make a difference. If we can have better imaging analysis and diagnosis, that will contribute to a significant reduction in harm and lower the cost of care.

There are predictive analytics systems that look at masses of records, collecting them and putting them into categories for making predictions. The Rothman Index, which I think is mostly done manually by nurses entering information into the patient record multiple times per day, looks at those inputs and recognizes patients who are potentially at risk. It gives an early warning to the staff using those 20 or 30 parameters from the nursing notes, vital signs, and other electronically collected stuff. It says, “This patient is going to need a rapid response intervention in the near future unless you intervene with some technique now.”
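The Rothman Index itself is proprietary, so the following is only a toy sketch of the general idea: combine a handful of routinely charted inputs into a single score and raise an alarm when it crosses a threshold. The inputs, weights, and threshold here are made up for illustration and are not the actual index.

```python
# Toy early-warning score -- NOT the actual Rothman Index, whose inputs and
# weights are proprietary. All parameters below are invented for illustration.

def early_warning_score(obs: dict) -> float:
    """Combine a few charted observations into a single risk score."""
    score = 0.0
    if obs.get("heart_rate", 0) > 110:
        score += 2
    if obs.get("respiratory_rate", 0) > 24:
        score += 2
    if obs.get("systolic_bp", 120) < 90:
        score += 3
    if obs.get("temp_c", 37.0) > 38.5:
        score += 1
    if obs.get("nursing_concern", False):   # e.g., a flagged nursing assessment
        score += 2
    return score

def needs_rapid_response(obs: dict, threshold: float = 5.0) -> bool:
    """Flag a patient whose score crosses an (arbitrary) alarm threshold."""
    return early_warning_score(obs) >= threshold

# Example: a deteriorating patient trips the alarm.
patient = {"heart_rate": 118, "respiratory_rate": 26, "systolic_bp": 85}
print(needs_rapid_response(patient))  # True
```

A production system would recompute this continuously as new vitals and nursing notes arrive, which is the “collecting data in the background” idea described below.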

By aggregating millions of patient records, I think we’ll be able to predict who isn’t taking their medicines, using an Apple Watch type thing or something like that. We could say, “The patient isn’t taking their medicines. The patient gained weight. We have to send somebody out there to intervene. Maybe their heart failure is getting worse.”

That is where the potential for improving the care and reducing the cost is going to be. These predictive analytic tools, collecting data in the background and telling the providers, “Pay more attention to this guy. He seems to be on the verge of deteriorating.”



Currently there are 4 comments on this article:

  1. Here is a question that I wish I could have asked Dr. Meyers: “How much of misdiagnosis is due to the provider not having the right information vs how much is due to a Dr. not following clinical guidelines?” A bonus question would be, “How much of that right information is something that the best possible EHR could surface vs how much would require something from the patient, like a test or even an exploratory surgery?”

    • How much of “missed diagnosis” is a result of physicians not believing patients when they report their symptoms?

      • It’s probably a decent amount if the patient is black or female, but technology can’t do anything about that.






