
Curbside Consult with Dr. Jayne 6/5/23

June 5, 2023 Dr. Jayne

I spent the weekend largely unplugged, catching up on some household projects and indulging my need for quality time in the kitchen. My last-minute run for pickling and baking supplies created an interesting assortment of items at the grocery checkout, but when you like to do things old school, sometimes you really do need three kinds of vinegar and a jar of bay leaves.

My pickling efforts were slightly more successful than my baking ones, which resulted in the first time I’ve ever had to admit that it’s possible to have too much sugar in a cake. When I finally reconnected this weekend, my inbox seemed to be forming a theme around the topic of healthcare IT gone bad.

First, there was the story of the National Eating Disorders Association chatbot being decommissioned after it recommended harmful behaviors, including dieting and calorie restriction. The organization at least owned the problem, stating that the advice being given was “against our policies and core beliefs.” Apparently the chatbot, called Tessa, was built around proven cognitive behavioral therapy tools that have been shown to reduce eating disorder symptoms. However, it appears that programmers may have tried to make it behave more like ChatGPT and ended up running it off the rails. The original tool used pre-programmed responses and was not intended to be adaptive or to use AI features.

It’s been interesting to watch chatbots evolve over the last couple of years. Quite a few vendors claim to have created AI-enabled chatbots, but when you look behind the scenes, they turn out to be sophisticated (or sometimes not so sophisticated) decision trees. I’ve seen some alleged healthcare chatbots built by teams that don’t even include clinicians, which is truly worrisome. It’s always surprising to see the logos of organizations that have bought into the hype and probably never asked to speak to the clinical person behind the proverbial curtain.
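For readers who haven’t looked behind the scenes, here is a minimal sketch of what one of those “AI” chatbots often amounts to. Everything in it is hypothetical and in Python for illustration; it is not any real vendor’s code.

```python
# A minimal sketch of a scripted "decision tree" chatbot -- the kind of
# thing that sometimes gets marketed as AI. All node names and prompts
# here are hypothetical, invented for illustration only.

DECISION_TREE = {
    "start": {
        "prompt": "Are you asking about symptoms or scheduling?",
        "branches": {"symptoms": "triage", "scheduling": "schedule"},
    },
    "triage": {
        "prompt": "Is this an emergency? (yes/no)",
        "branches": {"yes": "emergency", "no": "advice"},
    },
    "schedule": {
        "prompt": "Please call the office to book an appointment.",
        "branches": {},
    },
    "emergency": {
        "prompt": "Call 911 or go to the nearest emergency department.",
        "branches": {},
    },
    "advice": {
        "prompt": "Here is pre-approved, clinician-reviewed guidance.",
        "branches": {},
    },
}


def run_chatbot() -> None:
    """Walk the scripted tree. Every response is pre-programmed."""
    node = "start"
    while True:
        step = DECISION_TREE[node]
        print(step["prompt"])
        if not step["branches"]:  # leaf node: the conversation ends here
            return
        reply = input("> ").strip().lower()
        # No learning and no text generation -- just a keyword lookup,
        # with a re-prompt if the answer isn't recognized.
        node = step["branches"].get(reply, node)


if __name__ == "__main__":
    run_chatbot()
```

Every response in a tool like this is fixed in advance, which is why a scripted tree can’t wander into harmful territory on its own, and why bolting a generative model onto one, as appears to have happened with Tessa, changes the risk profile entirely.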

When ChatGPT came to the forefront in recent months, I saw several companies try to leapfrog good design and development principles in a rush to say that their product was using the technology. I’ve worked with enough technology organizations and on enough different projects to know that cutting steps out of the software development lifecycle is never a good idea.

The steps that organizations typically try to cut are the ones that are the most critical in my book: planning, analysis, and testing. They forget that the whole point of the process is to be efficient from both time and cost perspectives. When you rush to market, you usually end up paying for it on the back end with broken functionality and unhappy users. What people seem to forget, though, is that in healthcare IT, that can translate to patient harm. Developers always need to remember that regardless of whether you call them users, consumers, or patients, the person on the other side of the code is someone’s parent, child, friend, or loved one.

The next story wasn’t about AI run amok, but about more than 400 Grail patients who received notices saying that they may have cancer. The company immediately pointed fingers at its third-party telemedicine vendor, PWNHealth. Digging into the details reveals that more than half of those who received the erroneous letters hadn’t even had their blood drawn.

The test in question is Galleri, which screens for 50 kinds of cancer through a single blood draw. Large healthcare organizations like Mercy have jumped on board, offering the test on a cash-pay basis even though it isn’t part of guidelines-based recommendations. The test costs $950, and if I had paid that kind of money, I would be doubly aggravated to receive an erroneous letter before my sample was even collected. I had heard of the test when Mercy first started advertising it, but didn’t realize until I read the articles this weekend that it has not completed human clinical trials. There’s a study in the UK that’s at the halfway point, though. Despite that, more than 85,000 patients have paid to have the test performed, with only a handful of insurers providing coverage.

I’ve been on the other side of an erroneous medical test result, and it’s a horrific experience, leaving you to wonder whether even your corrected result is valid. In my case, I had my pathology slides re-read by an outside pathologist because I didn’t know which reading to trust. Not every patient has the knowledge to ask for that or the resources to pay for it. Also in my case, the test orders were placed by a local physician who knew me well and with whom I had a relationship, which was a great support as we worked through the issue. Grail, whose owner is DNA-sequencing equipment maker Illumina, is already under fire from regulators in both the US and Europe due to monopoly concerns. It will be interesting to see how this unfolds.

The third story wasn’t about healthcare IT so much as about AI in general, looking specifically at how AI compares to humans in judging whether rules have been broken. A study from the Massachusetts Institute of Technology examined how AI would handle such questions as whether a post violates a site’s rules or a dog violates apartment rules. Researchers concluded that because AI can be trained on data sets that don’t include human validation of the judgment itself, its results may skew more harshly than human ones. A researcher in the field, Professor Marzyeh Ghassemi, is quoted as saying, “Humans would label the features of images and text differently if they knew those features would be used for a judgment. This has huge ramifications for machine learning systems in human processes.” Definitely something to think about when it feels like everyone is clamoring for more AI.


I would be remiss if I didn’t say happy birthday to the HIStalk team as the site celebrates its 20th anniversary. One of my vendor executive friends recommended it to me when I first started my healthcare IT journey, and I never dreamed I would be part of the team. It’s been quite a ride with a lot of ups and downs in the industry, and I still remember sending my application to join the team by way of my trusty BlackBerry. Looking through old posts and revisiting what we thought was wild and crazy at the time, I see that some of those news items pale in comparison to the issues of today. Here’s to the future of HIStalk as it continues to chronicle our topsy-turvy industry and to be everyone’s favorite source of healthcare news, opinion, rumors, and gossip.

Email Dr. Jayne.


