
Curbside Consult with Dr. Jayne 8/20/18

August 20, 2018 · Dr. Jayne

Now that we’re in the bottom half of 2018, CMS has published the 2016 Physician Quality Reporting System (PQRS) Experience Report. The report summarizes the reporting experience of eligible professionals and group practices, including historical trending data from 2007 to 2016 covering eligibility, participation, incentives, adjustments, and more. I was curious to get a look at the data because it is broken down both by specialty and by state. Here are some of the highlights:

  • Participation in the program was 69 percent in 2015 and 72 percent in 2016
  • Of the providers eligible in 2016, 31 percent were flagged for a payment adjustment in 2018. This represents over 435,000 providers

Of those receiving a penalty (I’ll call that payment adjustment what it is), almost 85 percent didn’t participate in the program. They literally did not submit any data. That means that 370,000 providers essentially said, “no thank you” and walked away from the program. My practice falls into that cohort, and I don’t think our CEO was that polite in deciding to walk away from PQRS. Other tidbits:

  • Being a provider in a small practice was a marker for receiving the penalty, with 71 percent of “adjustments” being levied on practices with fewer than 25 providers
  • Having a low volume of Medicare patients was associated with the penalty – 69 percent of those providers saw 100 or fewer Medicare patients

Having worked with dozens of practices trying to make sense of the value-based payment scheme, I can say those numbers validate what we already knew: to be successful, you need dedicated resources to help you (which small practices typically don’t have), and it’s not worth the effort if the penalty is going to be relatively small due to your patient mix. Of course, 2016 was the last year for PQRS, which transitioned to the Merit-based Incentive Payment System (MIPS), which of course has now transitioned yet again. Since it’s been a couple of years since some of us have handled PQRS data (and many of us have blocked out those painful memories), remember that it may use claims data, so it may not match your EHR data if you’re trying to look through the retrospectoscope.

CMS has also put together a document called the Value-Based Payment Modifier Program Experience Report, which looks at program results from 2015 to 2018 and includes the upward, downward, and neutral adjustments. In looking at the section on clinical performance rates, CMS admits that there have been numerous reporting mechanisms over the years and that it created a hierarchy that would be applied if the provider participated through multiple means so that only one performance rate for each provider would appear in the results. It’s a rigid hierarchy, so if a provider performed better through a mechanism that is lower in the list, they would retain the lower performance rate.
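That rigid hierarchy can be sketched in a few lines. This is a minimal illustration, not CMS’s actual logic: the mechanism names and their priority order here are made up for the example.

```python
# Hypothetical priority order for reporting mechanisms (illustrative only;
# the real CMS hierarchy and mechanism names differ).
HIERARCHY = ["ehr", "registry", "claims"]

def select_performance_rate(rates_by_mechanism):
    """Return (mechanism, rate) from the highest-priority mechanism present,
    even if a lower-priority mechanism reported a better rate."""
    for mechanism in HIERARCHY:
        if mechanism in rates_by_mechanism:
            return mechanism, rates_by_mechanism[mechanism]
    return None, None

# A provider who scored 92 percent via registry but 75 percent via EHR
# keeps the 75 percent, because EHR sits higher in this (hypothetical) list:
mech, rate = select_performance_rate({"ehr": 75.0, "registry": 92.0})
```

The point of the sketch is that the selection never looks at the rates themselves, only at the list position, which is exactly why a better score through a lower-ranked mechanism gets discarded.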

The report also notes that there have been numerous changes to the PQRS program over the years, with individual measures being added, removed, and redefined. Additionally, providers who shifted from individual to group reporting may be impacted by data artifacts, resulting in the ultimate caveat: “It is unclear the extent to which any observed changes in measure performance were artifacts of the aforementioned changes or trends in provided care.” It goes on in true governmental fashion: “Nonetheless, this section of the report aims to describe clinical performance rates and trends.”

I have to admit I looked at the report pretty quickly; it’s 96 pages long and there are a lot of tables. I would love to talk to someone knowledgeable to dig into why some of the measures that seem easily attained have declined so much over time. For example, measure 317 is screening for high blood pressure and documented follow-up. It dropped from 91.5 percent in 2013 to 62.9 percent in 2016. There were 4,200 providers reporting that measure across the timeframe, which seems like a reasonable sample. On the other hand, measure 310 for chlamydia screening dropped from 100 percent to 83.3 percent, but only 10 providers were reporting across the timeframe, so a change there could be due to sample size.

On the positive side, cervical cancer screening rose from 41.3 percent to 79.8 percent, but only 103 providers reported that measure. As a primary care provider, I think that’s a sad commentary on the state of preventive care in the US today. The clinical data starts on page 51, if you’re interested in taking a peek.

If you’re not on the clinical or operational side of the house, you may not have seen the decision-making process that practices go through when they try to decide what clinical measures to report. It used to be a little more straightforward, with practices wanting to report the measures where they do the best. Everyone likes to earn an A, so being able to show that you were doing something 95 percent of the time is a feel-good move.

Now that we’ve moved into an “adjustment” phase where there are winners and there are losers and the penalties essentially pay for the bonuses, it’s a different game. Providers are incented to report not on measures where they do the best, but where they do better than the next guy. If you’re doing something 50 percent of the time (which feels like a failing grade) but the rest of the population is only doing it 35 percent of the time, you win! It makes the analysis of measures much more challenging, because providers have to analyze their own performance against the performance of their peers, using a multitude of reports and benchmark data sets.

Smaller organizations may not be savvy enough to figure that out and may end up reporting on the “wrong” measures if they don’t understand how the game is played. I’ve seen a couple of EHR vendors that offer education around this, but the larger vendors seem to think their clients understand it or have enough staff to do that analysis. Even where education is offered, it’s not clear that practices are absorbing the information or that they feel they have the tools needed to make good decisions about quality reporting. Some specialties don’t have options for measures that are truly applicable to them, which puts them in the quandary of choosing measures that don’t make clinical sense just so they can get good numbers.

It might feel easier to just opt out rather than doing something that they know is just “checking the box.” I’ve worked with a couple of clients who have trouble getting the data they need to make good decisions – maybe they don’t have ready access to reporting modules in the EHR, or maybe the reports aren’t run on a frequency that allows the practice to drive change. Usually there is concern about the accuracy of the reports, with organizations having different interpretations of some of the measures than what the EHR might be pulling. That results in an unpleasant back-and-forth with the vendor, where it rarely feels like anyone wins.

I certainly don’t have the answers to this one, but would be interested to hear from readers on how their organizations are coping and whether they’re using any of the recently released data. What do you think of the new CMS reports? Leave a comment or email me.

Email Dr. Jayne.

Currently there is 1 comment on this article:

  1. Gaming the system is an accurate description. It seems silly that providers’ performance scores are being calculated by IT vendors.

    Would a CMS-funded medical council not be in a better position to provide individual measure scores to providers?

    The existing medical societies are doing a great disservice by reaching out to cost-effective IT vendors. Can’t blame the medical societies, as they saw this as an opportunity to make some money.

    The IT vendors game the system, and with these scores submitted by profit-driven IT vendors, CMS seems to come up with comparative ratings. Hoping some sensible person can establish a true and accurate performance evaluation system. I could not find any mention of the rate of Medicare spending over the years. I wonder if all this has actually contained the rate of Medicare spending.

    Let’s hope that with time this current nonsense either fades away or matures into a true performance evaluation system.
