
EPtalk by Dr. Jayne 9/17/20

September 17, 2020 Dr. Jayne


The Office of the National Coordinator for Health Information Technology, in partnership with the Office for Civil Rights, released an update on Wednesday to the HHS Security Risk Assessment (SRA) Tool. Performing the SRA is required under HIPAA, and in my experience, many small and medium-sized healthcare organizations struggle with it. The revised tool includes some user interface and navigation tweaks, as well as options to export reports. There was a corresponding webinar to educate users on the tool, but since I received less than 48 hours’ notice, I couldn’t make it work with my schedule.

I’ve not been a fan of the tool in the past since it is really just an electronic way to store a lot of manual work. People like it because it’s free, although I’ve found you get what you pay for. ONC’s SRA Tool stores data locally, which creates problems when the person responsible for your SRA goes out on medical leave or is otherwise unavailable (ask me how I know). Commercial solutions store data with the SRA vendor or in the cloud, making it easier to maintain continuity from year to year and to recover if something unforeseen happens partway through the SRA process. My favorite commercial solution is the one from HIPAA One, which is kind of like TurboTax for the SRA.

For those of you in the value-based care trenches, the Core Quality Measures Collaborative has released four updated core measure sets. The updates are the product of collaboration among more than 70 members of the group. The updated sets cover pediatrics; obstetrics / gynecology; gastroenterology; and HIV / hepatitis C. Core measure sets are used to help align various payer and governmental programs, which theoretically should help healthcare delivery organizations meet goals consistently rather than performing separate data gathering and manipulation for similar but subtly different measure sets. An additional four core measure sets will be updated in the coming months: medical oncology; orthopedics; cardiology; and one addressing primary care / patient-centered medical homes / accountable care organizations. There are also plans to release two new core measure sets covering behavioral health and neurology.

A recent Viewpoint piece in the Journal of the American Medical Association looks at the idea of “Algorithmic Stewardship” for artificial intelligence and machine learning technologies. At least 50 AI/ML algorithms have been reviewed and approved by the US Food and Drug Administration for various medical use cases. They can also be used to predict patient behavior or identify risks for increased morbidity and/or mortality. The authors propose that in addition to the FDA’s oversight process, health systems should also “develop oversight frameworks to ensure that algorithms are used safely, effectively, and fairly.”

The stewards would be charged with ensuring predictive algorithms are used fairly and should receive input from informaticists, patients, bioethicists, scientists, and safety / regulatory personnel. They would also be tasked with monitoring the ongoing clinical use and performance of predictive algorithms. I’d be curious to hear which organizations at the forefront of AI and machine learning have begun to incorporate such a stewardship model.

I’ve seen more than my share of poorly maintained patient problem lists over the years. One of the goals of electronic health records was that problem lists would be more accurate and complete, and we just haven’t arrived yet. An article published in the Journal of the American Medical Informatics Association this summer looks further at “Characterizing outpatient problem list completeness and duplications in the electronic health record.” The authors looked at records from Partners HealthCare and identified patients with eight common chronic diseases, then reviewed those problem lists. They found wide variation in both completeness and duplication. Better completeness seemed to correlate with disease severity. The authors conclude that “further studies are needed to investigate the effect of individual user behaviors and organizational policies on problem list utilization, which will aid the development of interventions that improve the utility of problem lists.”

My very first EHR consulting project, somewhere in the early 2000s, revolved around a problem list. The organization had initially deployed EHR only to primary care physicians, and when subspecialists were brought on board, some of them “cleaned up” patient problem lists by removing entries that they felt were “primary care stuff” that cluttered up their idea of the problem list. Due to poor training (or lack of listening), they didn’t understand the concept of a shared problem list. I had the pleasure of going through thousands of charts and trying to rectify the mess, returning those pesky primary care problems to life. Nearly two decades later, the issues I see are still rooted in governance (or lack thereof). We should know better by now, folks.


I spent some quality time with a new optometrist this week and was blown away by the new contact lenses she suggested. Fortunately, the dramatic change in my vision was due to being a year older rather than anything COVID-related, which made me happy. I was not, however, blown away by the text I received later in the day pre-booking me for an appointment next year, at an inconvenient time on an inconvenient day. There was no way to respond or reschedule via text, which forced me to call, wait in the office phone tree for more than five minutes, then reschedule. This is a perfect example of a good idea that was poorly executed. I know the importance of patient retention and continuity and would have been happy to schedule an annual follow-up before I left, but their approach was inconvenient. I wonder how many patients just no-show the following year?

I also had a dental checkup, and while I was impressed with their in-office screening protocols, I was not impressed by their phone screener. When I truthfully answered “yes” to the “have you had contact in the last 14 days with anyone who has COVID” and noted that I’m a physician and have been wearing personal protective equipment during the contacts, he somehow assumed that I had tested positive for COVID in the past. I was recently flagged in Epic by another physician office as a “high risk contact” and it took a lot of explaining to get it handled. There really needs to be an accommodation for healthcare workers who have positive contacts but are wearing PPE. It’s no fun having your friends treat you like you’re Typhoid Mary, and other healthcare institutions should have a better understanding of and appreciation for our collective efforts.

Have you been denied service or treated differently during the pandemic because you work in healthcare? Leave a comment or email me.

Email Dr. Jayne.




Currently there are 2 comments on this article:

  1. I hate automatically scheduled appointments like that too, although I’m more intrigued by how you knew that an appointment a year from now was at an inconvenient time and day. I’m not sure what I’m doing next week 🙂






