As a patient, I find that satisfaction surveys give one the false impression that someone is reading the comments and changing operations to…
I mentioned last week that I was getting ready for an outpatient procedure, and I’m happy to report it went without a hitch. I was impressed by the professionalism of the surgery center staff as well as their efficiency.
One of the nice touches was a card that was apparently with my patient folder. Each staff member signed the card and indicated the role that they played in the procedure. The card was included in my discharge packet.
I was looking forward to recognizing some of them individually via the patient experience survey that was almost certain to follow. Unfortunately, the link that was texted to me later in the day didn't work, and the review site's help functions were of little help. Knowing that physicians are often graded on patient reviews, I felt bad about not being able to contribute in a positive way.
Mr. H mentioned this JAMA opinion piece last week, which questions whether the focus on patient satisfaction measurements might be harming both patients and physicians. The authors note that “patient satisfaction is an integral element of care, and scholars have argued that positive patient experience represents an important quality dimension not captured in other metrics.” However, they note that many survey instruments were created nearly two decades ago, and “Measures can lose value as they age, and just like the Google search algorithm, patient satisfaction measurement strategies need to be updated to remain useful.”
Unfortunately, many organizations don’t seem too interested in updating their surveys. I’ve experienced this with clients who can’t seem to make updating their surveys a budgetary priority. I’ve also experienced it as a patient, when I was asked how the office performed on aspects that weren’t relevant to the visit, such as COVID precautions following a telehealth visit, or procedural elements that weren’t part of a given office visit.
My biggest pet peeve about patient experience surveys is when they don’t offer an answer choice for “not applicable,” “did not experience,” or something similar. Not all clinical encounters contain the same elements, and if you don’t allow me to opt out of a question or respond that it wasn’t applicable, then the data you’re going to get is skewed. When confronted with something they didn’t experience, patients might rate it low, high, or neutral depending on how they interpret the prompt.
Another pet peeve about such surveys is how certain organizations use the data. At one of my previous clinical employers, anything that was less than an overall four-star review generated a “service recovery” call from administration. Since our surveys were constructed in a way that a score of three meant expectations were met, this created a lot of focus on visits that were generally acceptable in the patient’s point of view but didn’t meet the criteria of being exceptional.
In the event that a patient responded with a low score, such as a 2, the immediate assumption by administration was that the physician had done something wrong, even if the low score was a result of the provider giving good care, such as declining to prescribe an unnecessary antibiotic or being unwilling to provide controlled substances without a clear medical need. Administrators always called the patient first, which often led to an accusatory call to the physician, who was on the hot seat to explain the situation.
Having practiced in urgent care and the emergency department for 15 years, I have a pretty good sense of when a patient is dissatisfied with a visit. I make sure to put a lot of detail into the chart note about the visit, what was discussed, the patient’s response to the care plan, and more. It’s easy to read between the lines and see that I already sensed there was going to be a problem and took proactive steps to address it. Still, it felt like our leadership never even looked at the chart and we were always put in a situation where we were on the defensive, which isn’t ideal.
Patient satisfaction surveys aren’t inherently bad. Studies have shown that high satisfaction is associated with lower readmission rates and lower mortality. It should be noted that association doesn’t imply causation, a fact often missed by healthcare administrators. The authors also mention a well-known study, “The Cost of Satisfaction,” which demonstrated that patients who gave the highest ratings often had higher costs and mortality rates.
One of the specific data elements mentioned in the opinion piece was advanced imaging for acute low back pain. Although such services drive higher costs of care and have little clinical benefit — to the point of being featured on several prominent lists as things that physicians shouldn’t order — they also yield higher mean patient satisfaction scores.
The authors also mention that many of the survey tools in use were designed to measure aggregate performance and weren’t intended to evaluate individual physicians or care teams. They go on to explain that some instruments in standard use produce skewed data, where a physician can score highly but, because of the distribution of responses, still be considered to be in the bottom 50% of performers. When everyone is high performing but some will be penalized regardless, it creates a continuum of responses with complete withdrawal on one end and something akin to “The Hunger Games” on the other.
The piece also notes that small patient populations or low response rates can have a disproportionate impact on a physician. In my past life, when I transitioned from full-time to part-time practice, this became readily apparent as I spent more time working in clinical informatics and less in the primary care office. Patients were disappointed that I wasn’t as accessible as before, and this showed in satisfaction scores, regardless of the quality of care that patients received. It certainly was a contributing factor in my decision to leave primary care and transition to the emergency department, since I didn’t want to spend half of every visit explaining why I was only in the office one day a week to patients who refused to see my partners.
While the authors note that patient satisfaction scores are an important component of quality, their use in a “high-stakes” environment “renders them at best meaningless and at worst responsible for physician burnout, bad medical care, and the defrauding of health insurers by driving up use.” They call on payers to reconsider their use in determining quality and payment factors. The authors ask the Medicare Payment Advisory Commission to annually evaluate measures currently in use to make sure they are still fit for purpose.
Although I agree, I know that it’s always easier to keep the status quo, so I’m not hopeful for significant changes. There have also been a number of studies examining bias in patient satisfaction surveys, showing that physicians of certain demographics receive lower scores than others regardless of outcomes. Until those issues are addressed, patient satisfaction scores will continue to be controversial.
What do you think about the incorporation of patient satisfaction scores in the determination of quality bonuses and payments? Is there room for meaningful transformation? Leave a comment or email me.
Email Dr. Jayne.