
Curbside Consult with Dr. Jayne 6/12/23

June 12, 2023 · Dr. Jayne

I was intrigued by a recent study published in JNCI Cancer Spectrum that looked at how capable ChatGPT is when asked questions about common cancer myths and misconceptions. The study, performed with the Huntsman Cancer Institute, compared ChatGPT output against information circulating on social media.

I understand the premise. Many patients are getting their information from social media, friends, family, and people they know on a personal basis rather than being able to learn key information from their care teams. Sometimes this happens because people may be receiving cancer diagnoses via their patient portal accounts due to immediate-release results policies stemming from governmental regulations. Other times it happens because patients are unable to reach their care teams when they have questions or they don’t feel that their team is listening to them.

Cancer is also a condition that leads people to leave no stone unturned as far as investigating treatment options and potential outcomes. It’s one of the scariest diagnoses you can receive, and even as healthcare professionals, we are rarely prepared for it. There can be a lot of uncertainty about treatment plans and sometimes even about diagnoses, and patients often seek additional information or alternative treatments as a way of trying to maintain control of their own health and lives.

Generally, physicians appreciate working with engaged and involved patients who want to understand the pros and cons of various treatment options. But a number of industry forces put pressure on that scenario, including time constraints, insurance limitations on treatment options, and the availability of those options in a particular geographic area.

In fairness, the study was performed shortly after ChatGPT became widely available, so it may not be entirely applicable today. Researchers used a list of frequently asked questions that they sourced from the National Cancer Institute’s “Common Cancer Myths and Misconceptions” website. They found that 97% of the time, the answers from ChatGPT were in agreement with the responses provided by the National Cancer Institute.

In contrast, approximately one-third of social media articles contain misinformation. Distrust of medical institutions and medical professionals has grown exponentially since the beginning of the COVID pandemic, and patients may decide not to pursue standard treatments based on information they’ve heard from friends or family or might have found online. This can lead to negative outcomes for patients, who may expose themselves to increased mortality rates when selecting unproven treatments.

Even when considering medical professionals as a source of information, I’ve seen instances where misinformation can be spread. Sometimes patients consult neighbors or friends who might be physicians, but who practice in specialties nowhere near the patient’s area of need. I’m not even an old-timer, but I know that the treatments for various cancers have progressed dramatically since I last cared for patients with those diagnoses. I’m always careful to refer patients back to their oncologists, hematologists, or surgeons, but not everyone does that. I’m part of several Facebook groups with physician-only membership, but we still see bad answers circulating when physicians who are patients themselves pose certain questions.

For physicians who are actively caring for cancer patients, knowing that patients might receive medical misinformation can add to the burden they feel in delivering care. One of my colleagues feels she can never disconnect from managing patient portal messages, because if she doesn’t answer a patient’s questions promptly, the patient will be more likely to go down the proverbial internet rabbit hole, leading to greater stress for patients and their families. When we discussed creating boundaries around these kinds of interactions so that she could have a break, she said she has thought about it, but feels that correcting misinformation later actually requires more work and emotional effort than just being continuously available to field questions. It’s a difficult spot for clinicians to be in when they feel called to serve their patients so broadly.

The study involved a blinded review of the answers to the questions, grading them not only for accuracy, but also for word count and Flesch-Kincaid readability grade level (a quick sketch of that readability calculation follows the question list below). Answers from both sources were less readable than recommended by most health literacy advocates, but the responses from ChatGPT tended to use more words that created a perception of hedging or uncertainty. The questions evaluated were striking, and included items such as:

  • Is cancer a death sentence?
  • Will eating sugar make my cancer worse?
  • Do artificial sweeteners cause cancer?
  • Is cancer contagious?
  • Do cell phones cause cancer?
  • Do power lines cause cancer?
  • Are there herbal products that can cure cancer?
  • Do antiperspirants or deodorants cause breast cancer?
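
For readers who haven’t run into it, the Flesch-Kincaid grade level is just arithmetic over sentence, word, and syllable counts: 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59. Here’s a minimal Python sketch of the idea; the syllable counter is a crude vowel-group heuristic I’m using for illustration, not whatever tooling the researchers actually used.

  import re

  def count_syllables(word: str) -> int:
      # Rough approximation: count runs of consecutive vowels.
      # Real readability tools use dictionaries or smarter heuristics.
      return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

  def flesch_kincaid_grade(text: str) -> float:
      # Naive sentence and word splitting for illustration only.
      sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
      words = re.findall(r"[A-Za-z']+", text)
      syllables = sum(count_syllables(w) for w in words)
      # Standard Flesch-Kincaid grade level formula.
      return (0.39 * (len(words) / len(sentences))
              + 11.8 * (syllables / len(words))
              - 15.59)

  # Example usage: short, plain sentences score at a low grade level.
  print(round(flesch_kincaid_grade(
      "Cancer is not a death sentence. Many cancers can be treated."), 1))

Most health literacy advocates recommend writing patient materials at roughly a sixth-grade level, which is the benchmark both the ChatGPT and National Cancer Institute answers missed.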

I have to say that I have heard at least four of these questions from friends and family members in the last month or so, and I am not surprised that they made the question set. The issue of antiperspirants and breast cancer risk comes up often in some of my social media channels, as do the questions about eating sugar and using herbal remedies.

Full documentation of both the National Cancer Institute answers and the ChatGPT answers is included in the article, in case you’re curious.

In addition to the question of accuracy, there’s also the question of specificity. Researchers noted that while the ChatGPT answers were accurate, they were also more vague than the comparison answers from the National Cancer Institute. This could lead patients to doubt that the answers were valid, or prompt them to ask additional clarifying questions.

There was also concern about patients who might ask ChatGPT less commonly asked questions about cancer, for which there might not be a large body of knowledge in the training data. The study was also limited to English, which affects its applicability to broad swaths of the US and the world. As a patient, knowing what I know about ChatGPT’s propensity to hallucinate, I don’t think I’d want to go there for my medical information.

Given the newness of the technology when the study was performed, it would be interesting to see how newer versions would perform in the same circumstances. There are a couple of possibilities: it could become more accurate, or it could go completely off the rails, as we’ve seen it do with some queries. Additionally, the training data for these models typically runs only through 2021, so access to more current information might change the results. I hope researchers continue to look at how ChatGPT might be useful as a healthcare adjunct, and where it might serve patients best.

What do you think about ChatGPT as a debunker of medical misinformation? Will it tell patients to inject bleach to kill viruses, or declare it to be an insane strategy? Leave a comment or email me.

Email Dr. Jayne.




There are 3 comments on this article:

  1. These are great questions. I’d add more complexity, alas: what of conflicting advice from REASONABLE sources? That is, NOT from the internet laetrile folk, but from two different docs, each with legit standing. Say… to start chemo first, or do surgery first?

    And who should the pt listen to: her PCP, oncologist 1, or oncologist 2?

    And let’s say the PCP is a newbie, or not well informed on this XYZ form of cancer?

    • Well, we disagree, then. I am startled at the naiveté of the question list, to which the answer to all is an easy No!

      You can accept that an open environment for questions is a good thing. You can promote patients’ engagement with their own care and with that of dependent family members. We want patients to tell us what is on their minds.

      However, one has to ask: why are all these questions from the “out there” sector of information? Where are the questions that have some truth and reality as part of their makeup?

      BTW, there is a tiny, limited exception to #4 (“Is cancer contagious?”). There is a highly unusual cancer of Tasmanian devils that is transmissible (but only to other Tasmanian devils). Now, unless the patient base includes some Tasmanian devils… it’s still a hard No.

      I take my hat off to the patient educators who can field such questions without bursting into gales of laughter.

  2. I serve on a national organ transplant board and I am a heart transplant recipient myself. Although educated by my disease, I am not a clinician. I subscribe to a number of groups and pages and I am astounded (but not really shocked) by the significant level of misinformation exchanged on these sites. Wherever applicable, I always steer the questioner back to their care team. As AI proliferates throughout our healthcare delivery system, we must find ways to “validate” the information provided to patients, who are often in the most vulnerable moments of their lives. Until we have the systems in place to do this, I would suggest that all unvetted AI-generated information be required to identify itself as “AI Generated” so patients understand the source of their information. I fully expect that the marketplace will start to generate malpractice-type litigation against these digital tool providers as patients unwittingly consume and act on inaccurate and/or dangerous AI-generated insights. I know that the future healthcare system will have to depend on these tools to succeed and believe AI will improve over time, but for now we definitely need some “truth in labeling” while we continue with this transition. It really could be a matter of life or death if we don’t.
