
Curbside Consult with Dr. Jayne 11/28/22

November 28, 2022 Dr. Jayne 4 Comments

I mentioned last week that I was getting ready for an outpatient procedure, and I'm happy to report it went without a hitch. I was impressed by the professionalism and efficiency of the surgery center staff.

One of the nice touches was a card that apparently traveled with my patient folder. Each staff member signed it and noted the role they played in the procedure, and the card was included in my discharge packet.

I was looking forward to recognizing some of them individually via the patient experience survey that was almost certain to follow. Unfortunately, the link that was texted to me later in the day didn't work, and the review site's support functions were of little use, which was disappointing. Knowing that physicians are often graded on patient reviews, I felt bad about not being able to contribute in a positive way.

Mr. H mentioned this JAMA opinion piece last week, which questions whether the focus on patient satisfaction measurements might be harming both patients and physicians. The authors note that “patient satisfaction is an integral element of care, and scholars have argued that positive patient experience represents an important quality dimension not captured in other metrics.” However, they note that many survey instruments were created nearly two decades ago, and “Measures can lose value as they age, and just like the Google search algorithm, patient satisfaction measurement strategies need to be updated to remain useful.”

Unfortunately, many organizations don't seem too interested in updating their surveys. I've experienced this with clients who can't seem to make survey updates a budgetary priority. I've also experienced it as a patient, when I was asked how the office performed on aspects that weren't relevant to the visit. For example, I was asked about COVID precautions following a telehealth visit and about procedural elements that weren't part of a given office visit.

My biggest pet peeve about patient experience surveys is when they don't offer an answer choice for "not applicable," "did not experience," or something similar. Not all clinical encounters contain the same elements, and if you don't allow me to opt out of a question or respond that it wasn't applicable, the data you collect will be skewed. When confronted with an item they didn't experience, patients might rate it low, high, or neutral depending on how they interpret the prompt.
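
Here is a minimal simulation of that skew, under assumed numbers of my own choosing (70 patients who experienced the item and were satisfied, 30 who didn't and are forced to guess):

```python
import random

random.seed(42)  # reproducible illustration

true_scores = [5] * 70  # patients who experienced the item: all satisfied

# Patients who never experienced the item but have no N/A option:
# they answer arbitrarily (low, neutral, or high).
forced_guesses = [random.choice([1, 3, 5]) for _ in range(30)]

with_na = sum(true_scores) / len(true_scores)  # N/A answers simply excluded
without_na = (sum(true_scores) + sum(forced_guesses)) / 100

print(f"With an N/A option:    {with_na:.2f}")     # 5.00
print(f"Without an N/A option: {without_na:.2f}")  # pulled down by guesses
```

The office did nothing differently in either case; only the survey design changed.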

Another pet peeve about such surveys is how certain organizations use the data. At one of my previous clinical employers, anything less than an overall four-star review generated a "service recovery" call from administration. Since our surveys were constructed so that a score of three meant expectations were met, this created a lot of focus on visits that were generally acceptable from the patient's point of view but didn't meet the criteria of being exceptional.

When a patient responded with a low score, such as a 2, administration immediately assumed that the physician had done something wrong, even if the low score resulted from the provider giving good care, such as declining to prescribe an unnecessary antibiotic or being unwilling to provide controlled substances without a clear medical need. Administrators always called the patient first, which often led to an accusatory call to the physician, who was then on the hot seat to explain the situation.

Having practiced in urgent care and the emergency department for 15 years, I have a pretty good sense of when a patient is dissatisfied with a visit. I make sure to put plenty of detail into the chart note about the visit, what was discussed, the patient's response to the care plan, and more. It's easy to read between the lines and see that I had already sensed there was going to be a problem and took proactive steps to address it. Still, it felt like our leadership never even looked at the chart, and we were always put on the defensive, which isn't ideal.

Patient satisfaction surveys aren't inherently bad. Studies have shown that high satisfaction is associated with lower readmission rates and lower mortality, although an association doesn't mean something is causal, a fact that is often missed by healthcare administrators. The authors also mention a well-known study, "The Cost of Satisfaction," which demonstrated that patients who gave the highest ratings often had higher costs and mortality rates.

One of the specific data elements mentioned in the opinion piece was advanced imaging for acute low back pain. Although such services drive higher costs of care and have little clinical benefit, to the point of being featured on several prominent lists of things that physicians shouldn't order, they also yield higher mean patient satisfaction scores.

The authors also mention that many of the survey tools in use were designed to measure aggregate performance and weren't intended to evaluate individual physicians or care teams. They go on to explain that some instruments in standard use produce skewed data, where a physician can score highly but, because of the distribution of responses, still be considered to be in the bottom 50% of performers. When everyone is high performing but some will be penalized regardless, it creates a continuum of responses with complete withdrawal on one end and something akin to "The Hunger Games" on the other.
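
To make the distribution problem concrete, here is a minimal sketch (my own illustrative numbers, not data from the JAMA piece) of how percentile ranking behaves when nearly everyone scores near the ceiling of a 5-point scale:

```python
import statistics

# Hypothetical peer group: 100 physicians whose mean scores are packed
# between 4.5 and 5.0 on a 5-point scale, a common ceiling effect.
scores = [4.5 + 0.005 * i for i in range(100)]

physician_score = 4.70  # objectively high in absolute terms

# Percentile rank: fraction of peers scoring at or below this physician.
rank = sum(s <= physician_score for s in scores) / len(scores)

print(f"Median peer score: {statistics.median(scores):.2f}")  # ~4.75
print(f"Physician score:   {physician_score:.2f}")
print(f"Percentile rank:   {rank:.0%}")  # ~40%, i.e., "bottom 50%"
```

A difference of a few hundredths of a point separates the "top" half from the "bottom" half, which is exactly why rank-based penalties on ceiling-heavy data feel arbitrary to the people being ranked.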

The piece also notes that small patient populations or low response rates can have a disproportionate impact on a physician's scores. In my past life, when I transitioned from full-time to part-time practice, this became readily apparent as I spent more time working in clinical informatics and less in the primary care office. Patients were disappointed that I wasn't as accessible as before, and this showed in my satisfaction scores regardless of the quality of care they received. It was certainly a contributing factor in my decision to leave primary care for the emergency department, since I didn't want to spend half of every visit explaining why I was only in the office one day a week to patients who refused to see my partners.
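
The small-numbers effect is simple arithmetic, but it's worth seeing. A rough sketch with made-up response counts (assumptions mine, not from the article):

```python
def mean_score(fives: int, ones: int) -> float:
    """Average of a panel with `fives` 5-star and `ones` 1-star responses."""
    responses = [5] * fives + [1] * ones
    return sum(responses) / len(responses)

# One unhappy patient out of ten responses (a part-time panel)...
print(f"Part-time clinician: {mean_score(9, 1):.2f}")   # 4.60

# ...versus one unhappy patient out of a hundred (a full-time panel).
print(f"Full-time clinician: {mean_score(99, 1):.2f}")  # 4.96
```

The identical single complaint costs the part-time clinician ten times as much off the average, before anyone asks whether the complaint reflected care quality at all.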

While the authors note that patient satisfaction scores are an important component of quality, their use in a “high-stakes” environment “renders them at best meaningless and at worst responsible for physician burnout, bad medical care, and the defrauding of health insurers by driving up use.” They call on payers to reconsider their use in determining quality and payment factors. The authors ask the Medicare Payment Advisory Commission to annually evaluate measures currently in use to make sure they are still fit for purpose.

Although I agree, I know that it's always easier to keep the status quo, so I'm not hopeful for significant changes. There have also been a number of studies looking at elements of bias in patient satisfaction surveys and how physicians of certain demographics score lower than others regardless of outcomes. Until those issues are addressed, patient satisfaction scores will remain controversial.

What do you think about the incorporation of patient satisfaction scores in the determination of quality bonuses and payments? Is there room for meaningful transformation? Leave a comment or email me.

Email Dr. Jayne.




There are 4 comments on this article:

  1. Industry will have to rethink surveys designed for aggregate scoring if they’re intended for individual compensation reviews.

  2. Unfortunately, the great care you provide may be overlooked after an 8-12 hour wait in an understaffed emergency room. It certainly is not fair for you to be penalized for this, yet the patient needs to speak out about the overall situation, and it is not surprising that Administration might try to deflect responsibility onto the physician. Sorry.

  3. The discussion of satisfaction surveys and job rewards parallels the issues in academic settings, where there is debate about the extent to which student ratings should drive faculty incentives or promotions. The most vituperative evaluations I've seen have actually come not from patients, but from students. There are definitely patients who are distressed about the care they received, but there's usually an understandable reason they're upset (at least that was the case in the pre-ivermectin era).

     The response rates in student evaluations tend to be reasonable, so if you eliminate outlier data points and ignore the most abusive comments, you can still get potentially useful input. However, the response rates in patient satisfaction surveys are usually too low to be meaningful. In addition, there are multiple events during a patient's experience that cause positive or negative halo effects, particularly during a hospital stay or a lengthy emergency visit. It may also be more challenging to develop general survey questions that are applicable to most clinical situations. Our education-related questionnaires for students typically include general questions to allow comparisons across courses as well as questions specific to each course. These may be less optimal from a survey design standpoint, but they are aimed at giving instructors input that is specific enough to be useful. Outside of clinical research, procedure- or treatment-specific satisfaction surveys seem rarer in clinical contexts, though they are potentially more helpful.

     Regardless, in both the clinical and educational realms, the reliability, replicability, and validity of the data seem insufficient to inform bonuses, performance ratings, or promotion criteria for individual providers or faculty. Instead, using such data to develop rewards seems destined to promote unintended consequences, ranging from grade inflation for students to inappropriate ordering of tests or medications for patients.

  4. As a patient, satisfaction surveys give one the false impression that someone is reading the comments and changing operations to fix the issues. When that doesn't happen, patients stop filling out surveys. When the survey is positive, there is also no response. My doctors were excellent, but operations were problematic, so using those surveys to judge a doctor is wrong.






