Curbside Consult with Dr. Jayne 10/7/24
I had several friends attending the Becker’s Health IT + Digital Health + Revenue Cycle conference last week in Chicago. In sending me some impressions and notes, they all mentioned conversations around the topic of whether health systems should be able to monetize patient data.
Key areas where organizations might use de-identified patient data include generative AI, genomics and precision medicine, and pharmaceutical drug discovery. When I’ve discussed this with fellow physicians in the past, the question of ethics was usually at the top of the list. In recent months, however, the focus seems to have shifted to whether organizations can truly protect patient data, and whether or not there is a risk of it becoming re-identified by someone intent on doing harm.
Putting on my patient hat, the first issue I have with using patient data beyond actual patient care is consent. Organizations may claim that they received patient consent, but did they really? Most large health systems give patients multi-page documents to read when they arrive. Those documents cover a multitude of topics, from data sharing to billing assignment to consent for treatment. I’d be hard pressed to find 10 patients in the lobby at my local academic medical center who know definitively whether they have signed a consent for dissemination of their de-identified data.
At my last mammogram visit, the facility didn’t even offer me copies of the Notice of Privacy Practices or Consent for Treatment documents to review. The registration clerk simply pointed to a signature pad and said, “Now you’re going to sign indicating you received this and agree to it” and became irritated when I asked for copies of the documents prior to signing.
In my book, consent obtained in that manner isn’t valid consent of any kind. Not to mention that in many cases, the patient has no real choice but to consent. With insurance companies building narrow networks, patients may not have a choice about where they receive treatment and end up agreeing to whatever is put in front of them because they need care. One never hears the word “coercion” uttered when patients are at the check-in desk, but that’s essentially what is happening.
Another major issue for patients is the general lack of understanding of what HIPAA does and doesn’t cover. Most people don’t realize that the clinical information collected by consumer devices isn’t protected at all, and that companies can largely do whatever they want with it. There have been recent concerns, following shakeups at genetic testing company 23andMe, about what will happen to customers’ data in the event of an acquisition or a change in leadership. With changes in state abortion laws, there are increasing worries about period tracker apps, fertility tracking apps, and other ways of capturing reproductive data. Between those two catalysts, I’m hoping that patients become more aware of the fact that their information is just out there. We all know that no one reads the terms and conditions when they sign up to use a new app.
Changing to my clinical informatics hat, I absolutely agree with the concerns about organizations’ inability to protect patient data. Recent cybersecurity events have shown that they struggle to protect the fully identifiable data used for direct patient care, so what makes us think they’re applying an equivalent level of rigor to de-identified data? There are plenty of articles out there that describe how easy it is to re-identify patient data, going back as far as 1997 from what I could find with some quick searching, and there is no shortage of data-rich public sources to link against, such as voter registration lists.
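To make that risk concrete, here is a minimal sketch of the kind of linkage attack those articles describe: a “de-identified” clinical extract often still carries quasi-identifiers (ZIP code, birth date, sex) that can be matched against a public source such as a voter roll. Every name, ZIP code, and diagnosis code below is fabricated for illustration.

```python
# Toy linkage ("re-identification") attack: join a de-identified clinical
# extract to a public voter roll on shared quasi-identifiers.
# All data is fabricated for illustration only.
from collections import defaultdict

deidentified_records = [
    {"zip": "63101", "dob": "1975-03-12", "sex": "F", "diagnosis": "E11.9"},
    {"zip": "63101", "dob": "1980-07-04", "sex": "M", "diagnosis": "I10"},
]

voter_roll = [
    {"name": "Jane Q. Public", "zip": "63101", "dob": "1975-03-12", "sex": "F"},
    {"name": "John R. Voter",  "zip": "63101", "dob": "1980-07-04", "sex": "M"},
    {"name": "Ann S. Citizen", "zip": "63110", "dob": "1975-03-12", "sex": "F"},
]

# Index the public data by the quasi-identifier tuple.
by_quasi_id = defaultdict(list)
for person in voter_roll:
    by_quasi_id[(person["zip"], person["dob"], person["sex"])].append(person)

# Any "de-identified" record whose quasi-identifiers match exactly one
# public record is effectively re-identified.
for record in deidentified_records:
    matches = by_quasi_id[(record["zip"], record["dob"], record["sex"])]
    if len(matches) == 1:
        print(f"{matches[0]['name']} -> diagnosis {record['diagnosis']}")
```

The point isn’t the code, which is trivial, but that the attacker needs nothing more sophisticated than a join on a handful of fields that most “de-identified” datasets still contain.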
Several colleagues posted to a local physician forum after they received data breach notifications stemming from the Change Healthcare hack. The words used by these physicians, who were impacted as patients, caught my attention. They were “crushed” and “stunned” that their information could have been impacted. It was an eye-opener for them, I guess.
We have all worked for, and been patients at, the same healthcare system over the last couple of decades. I know that my data has been impacted at least a half dozen times, including when a research coordinator had an unencrypted laptop in the trunk of her car and it disappeared. It made me think that our organization probably does a bad job of making physicians aware of these incidents, when in reality, we are going to be a point of contact for concerned patients whether we like it or not. I’m trying to give them the benefit of the doubt, but at this point in the game, we all need to assume that none of our healthcare data is private or truly protected.
Speaking of privacy and confidentiality, fall is when many organizations require completion of annual compliance training updates. Although I’ve been through HIPAA and other compliance training in several dozen organizations over the last decade, I have yet to see one that addresses the fact that ease of access with phones and tablets has led to physicians accessing patient information in all kinds of places and with minimal privacy protections. I was sitting in a restaurant booth with a colleague a couple of weeks ago. She was waiting for some lab results on a patient and kept pulling out her phone to check and see if they were back. She generally has a hard time disconnecting from the office due to her specialty, and in a matter of minutes, I saw her entire patient schedule, several other patients’ labs, and some imaging go by.
I made sure I paid detailed attention to my salad while this was going on, but was flabbergasted that she thought it was OK to do this. Maybe she felt safe because I am a fellow physician, but given her overall track record of how she uses her phone, I would guess that she does this in other environments.
I mentioned it to her and she shrugged it off, saying she was sure “no one is looking at my phone” and justifying her behavior by being in a high-stakes surgical subspecialty and needing to check in on patients. But it’s a snapshot of the cavalier attitude that many in healthcare have around protecting patient information. I’m sure she once watched a video that said “screens should be pointed away from prying eyes,” but calling out the specific environments where clinicians access patient data on their phones might be more impactful.
It will be interesting to see how patient privacy, consent, and monetization of patient information play out over the coming years. In the meantime, think twice before hitting the EHR during your kids’ soccer game.
What do you think about the monetization of patient data? Does your organization have a stance? Leave a comment or email me.
Email Dr. Jayne.
If I could give a thumbs-up to every individual paragraph here, I would.
(Twice on the reminder that “intrepid” data miners could get literal bounties for reporting abortions in Texas – though I guess, with the acknowledgement that your data is just not safe and not going to be, I should just accept that you just can’t tell your doctor and that’s that.)
Thank you for calling out the implied coercion of electronically signing a pad to give consent to pages of legalese. I have been in those awkward moments myself of asking for copies of the documents to which I am agreeing, which only intensifies when I have drawn through objectionable items and initialed them, only to find a flabbergasted staff that can only say, “well, then we cannot see you today,” abrogating their responsibility to an established patient.
But in fact, this is only one example of the same “click here” culture for Terms and Conditions that no one reads before installing a new app…
For one of my colonoscopies several years ago, I was given the consent form to sign while I was on the table waiting for the procedure to begin. What choice did I have? Could I have refused consent? Given that the procedure was only a minute or two away, there was no time to read, understand or ask questions about the consent form and the procedure.
After my experience, a relative called her doctor to see if she could get the consent form sent to her before her colonoscopy; she was told they could not do that. And how clear is anyone’s thinking the morning of a colonoscopy, given the preparation you have to go through?
This stuff drives me bonkers. It’s one thing to electronically (or physically) sign a document I skim, but being presented with a signature pad with absolutely no knowledge whatsoever of where my signature is being applied is infuriating. Even if the person at the desk tells me I’m signing the privacy policy, I have no way of knowing that’s what I’m actually signing. I’ve never heard of patients’ signatures being used improperly before, though, so I just sign and move on.
Yup, I recently received a stapled packet that included the signature page of the privacy policy, but not the policy itself. I had to ask for it and was provided a separate sheet. It was so odd that they hadn’t just included the policy with the six other pages.
There is a principle in common law that certain kinds of contracts are not enforceable. If particular conditions are not met, the entire contract is declared null and void.
I’d be interested in some legal interventions in these coercive agreements. It would be difficult to set up in a way that was administratively supportable. But based upon the comments and the original article? There are a lot of situations that ought not be tolerated.
1) You don’t get a copy of the agreement, to read and understand at your leisure? Then it isn’t valid.
2) The agreement is sprung on the patient at the last second? Then it isn’t valid.
3) You literally cannot see the agreement, only the signature line? Then it isn’t valid.
There are probably lots of other transgressions too, that’s just a start.