
Curbside Consult with Dr. Jayne 7/15/24

July 15, 2024 Dr. Jayne


I was talking with some clinical informatics folks this week about how we try to keep up on industry happenings. Most of us read a combination of different newsletters and, of course, HIStalk. Newsletters can be challenging, though, since many of them are either pay-to-play or heavily influenced by submissions from public relations folks. It takes time to learn to read between the lines to figure out what the purpose of a given “news item” might be, and it takes experience to gauge how helpful a given solution or technology might be to a particular organization.

A recent write-up in Becker’s Health IT mentioned an Epic app called AutoDx that was created by UChicago Medicine. AutoDx stands for “automated diagnosis,” and according to the write-up, the app identifies patient-specific diagnoses and risk factors and automatically adds them to the visit note template.

The system’s CMIO was interviewed for the article and said that “providers have the option to delete them if they disagree,” but my initial reaction to the tool is that it’s a lot like copy and paste, where there is a fair likelihood that users will just leave these items in the note whether or not they addressed them. The CMIO went on to say that the risk factors brought forward by the tool “are crucial for coding and billing, external rankings, quality reporting, and other statistics that many institutions, including ours, care about.”

That statement certainly gives some insight into why the tool was created. Patient care wasn’t even on that list, nor was any mention made of helping physicians better document the care they’re already giving. In my book, those two reasons should be at the top of the list, not compliance with regulatory requirements or trying to play the billing and coding game.

In the past, physicians — especially those in primary care specialties — were known to document fewer problems than they actually managed on a given visit. I think the number was something along the lines of managing five or six issues per visit, but only documenting 3.5. The arrival of the EHR was touted as a way to fix that problem and allow physicians to actually code and bill for the work they were already doing, which makes sense.

Unfortunately, everyone started playing the same game, and the perceived “upcoding” didn’t have as much value as initially thought because payer pressures led to downward rate adjustments, putting people back at square one (or square negative if we’re talking about Medicare reimbursement rates). We’ve seen plenty of examples where organizations are working hard to elevate the documented complexity of the patients for whom they are caring so that they can get more money. I recently saw an organization recruiting unsuspecting physician “chart reviewers” who were expected to review charts and document conditions that the patient may or may not actually have, but that might have been mentioned at some point in the patient’s chart.

I dug a little deeper into this particular solution, noting that the creators of the tool recently published a paper in Applied Clinical Informatics. The paper positions the tool as an alternative to the coding queries that providers often receive, where certified professional coders and others review patient charts and ask whether providers can document additional factors in the patients’ charts. These queries happen after the fact and create a disjointed workflow in which physicians and other providers are asked to update notes sometimes weeks after the visit.

The tool was initially developed to address three diagnoses: electrolyte deficiencies, obesity, and malnutrition in hospitalized patients. It was piloted by hospitalists and then expanded to the neuro intensive care unit after more diagnoses were added, at least according to the Becker’s article. When I pulled the actual paper, however, a section header mentioned the neonatal intensive care unit, which is a drastically different environment than a neuro ICU. I guess good editors are hard to find.

The pilot showed a 57% decrease in coding queries around the targeted diagnoses compared to a 6% decrease across other high-volume conditions. The authors also noted an increase in the case mix index, which is a marker of complexity and severity of cases within a hospital.
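
For readers who don’t live in the revenue cycle world, case mix index is essentially the average of the DRG relative weights across a hospital’s discharges, so documenting complications that move patients into higher-weighted DRGs raises the average. A quick back-of-the-envelope sketch in Python, using made-up weights rather than actual CMS values, shows how just a couple of re-coded cases move the number:

```python
# Case mix index (CMI) is the average DRG relative weight across discharges.
# The weights below are illustrative only, not actual CMS values.

def case_mix_index(drg_weights):
    """Average relative weight across a list of discharges."""
    return sum(drg_weights) / len(drg_weights)

before = [0.8, 1.0, 1.2, 0.9]  # discharges coded without complications
after = [0.8, 1.4, 1.2, 1.3]   # same discharges after two gain documented CCs/MCCs

print(case_mix_index(before))  # 0.975
print(case_mix_index(after))   # 1.175
```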

Theoretically, not only should the tool fix the disjointed workflow, but it should also prompt providers to address conditions at the point of care that they might not otherwise have addressed. Hospitalized patients are often complicated, and hospitalists are expected to manage ever-growing patient rosters. The initial release of the tool created message alerts in the patient note that prompted the provider to select a diagnosis and required that all alerts be addressed before the note could be completed. That certainly sounds a lot more patient-focused than talking about how much it impacts billing and metrics.
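
The paper doesn’t publish implementation details, and a production Epic app would presumably be built with Epic’s own tooling (rules, alerts, and note templates) rather than standalone code, but the pattern described above is simple enough to sketch. Here’s a hypothetical Python illustration of the two behaviors: flag candidate diagnoses from structured data, and block note signing until every flag is accepted or dismissed. The thresholds, field names, and structure are my own assumptions, not UChicago’s implementation:

```python
# Hypothetical sketch of AutoDx-style logic. Thresholds, field names, and
# rule structure are assumptions for illustration, not the published tool.

PILOT_RULES = [
    # (suggested diagnosis, predicate over structured patient data)
    ("Hypokalemia", lambda p: p.get("potassium_mmol_l", 99) < 3.5),
    ("Obesity", lambda p: p.get("bmi", 0) >= 30),
    ("Malnutrition", lambda p: p.get("albumin_g_dl", 99) < 3.0
                               and p.get("weight_loss_pct", 0) >= 5),
]

def generate_alerts(patient):
    """Return suggested diagnoses whose rule fires for this patient."""
    return [dx for dx, rule in PILOT_RULES if rule(patient)]

def can_sign_note(alerts, responses):
    """Hard stop: every alert must be accepted or dismissed before signing."""
    return all(responses.get(dx) in ("accept", "dismiss") for dx in alerts)

patient = {"potassium_mmol_l": 3.1, "bmi": 33.2, "albumin_g_dl": 3.8}
alerts = generate_alerts(patient)                        # ['Hypokalemia', 'Obesity']
print(can_sign_note(alerts, {"Hypokalemia": "accept"}))  # False: Obesity unaddressed
```

The hard stop is the interesting design choice. It trades a small documentation burden at the point of care for fewer retrospective queries, which is exactly the tradeoff the pilot numbers suggest.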

Interestingly, the pilot began in mid-February 2020, right before COVID-19 was about to rock all of our worlds. Post-implementation data was gathered for the full month of March of that year and compared to the full month of January as the pre-implementation baseline. The expansion to the NICU didn’t occur until May 2022. The paper mentions both neuro and neonatal in multiple places, although I suspect neuro is what was intended, based on context clues such as the list of included diagnoses and references to “patients transferred from other services,” which don’t necessarily apply to the neonatal ICU. The NICU is usually where critically ill neonates start their hospital stays and remain until they can move to a lower level of care.

Overall, it sounds like the tool can positively impact patient care and reduce burdensome post-encounter queries that are sent to clinicians. Alternatively, it could be a way to enable “autopilot” behaviors where clinicians acknowledge and add things to visit notes without thoughtful consideration. I would have liked to see post-intervention surveys of the users about how the intervention impacted care. For example, did it truly identify things that they were addressing but not documenting, or did it provide a safety check to make sure that they were addressing conditions that they may have overlooked? Those are the kinds of benefits that can really drive patient outcomes. I would encourage those who are creating tools like this to include that kind of data gathering and analysis in their research.

I’d love to hear from Chicago readers who may have personal knowledge of the tool or its implementation, or from readers in other places who have used similar tools. What other feedback did you get from clinicians and from coding staff? Leave a comment or email me.

Email Dr. Jayne.




Currently there are 2 comments on this article:

  1. Not about the app but your first statement on “how we try to keep up on industry happenings.” Wondering if we could convince AMIA to create a living systematic review using machine learning for members, like SCCM (and others) did on COVID.

  2. Patient care wasn’t on the list???

    Well maybe not specifically but as a former CFO I can easily add a few – assuming the app is accurate and not just ‘up-coding’:

    1 – It should reduce the time clinical staff spend answering documentation and code-related questions generated by finance and medical records. The less time they have to spend justifying diagnoses, the more they can spend ‘at the bedside’.

    2 – Improving or maintaining reimbursement – more financial resources can be spent on better medical devices, staff, etc. As a CEO at a Catholic hospital once said to me many decades ago, “No Margin – No Mission.”

    Not everything can or needs to be directly related to patient care, but everything should be at least indirectly.
