I was having a pretty pleasant week until one of my group’s more challenging physicians walked into my office with a copy (printed, of course) of an article entitled, “Physicians report losing 48 minutes a day to EHR processing.” Once again, Medical Economics uses an eye-catching headline to remind us why EHRs are evil.
When looking at patient care, my colleagues will sit in Journal Club and rip scholarly articles to shreds, dissecting them and discussing why they do or do not apply to our patient population and care paradigm. They’ll argue about the composition of the study population as well as the methodology. Only when they’re fully convinced as to the integrity of the data and the statistical analyses performed will they agree to add the paper’s recommendations to their clinical protocols.
When there’s disparagement of EHRs to be had, however, they take the article as gospel without a moment of review and pass it around the physician lounge. This is the same physician who barged into a meeting last year with a survey of EHR satisfaction, demanding we replace our system. He didn’t bother to notice that fewer than 20 respondents used the same EHR we do, and that they were likely not in the same situation we are.
He took the same approach with this article and wouldn’t listen to anything I had to say, ultimately storming out when I wouldn’t feed into his negative energy. For anyone who does want to listen, however, here is my critical review of the article.
First, the article cites a survey by the American College of Physicians as the source of the data. Key points cited in the Medical Economics article included:
- 89.9 percent reported at least one data management function was slower with EHR
- 63.9 percent reported that note writing took longer
- 33.9 percent said data review took longer
- 32.2 percent said it took longer to read electronic notes
In digging deeper, I found that the survey results were published as a letter in JAMA Internal Medicine. They weren’t published as part of a peer-reviewed study, which is an important distinction.
In looking at the letter itself, I’m not following the math. They said they sent the survey to 900 ACP members and 102 non-members. That’s 1,002 people by my math. In the next paragraph, they talk about “845 invitees.” Since 485 opened the email, that gives them a contact rate of 62.5 percent. But if you divide by the original 1,002 people to whom the survey was sent, I get 48 percent. Either way, only 411 of the responses were valid.
The survey also found that the time “lost” differed between residents and attending physicians – 48 minutes vs. 18 minutes, respectively. They suggest “better computing skills and shorter (half-day) clinic assignments” as possible contributing factors. I found the last sentence of the results section particularly interesting: “For the 59.4 percent of all respondents who did lose time, the mean loss was 78 minutes per clinic day.” Pulling out my handy math skills again, that would seem to indicate that just over 40 percent of respondents did not lose time.
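For anyone who wants to double-check the arithmetic in the two paragraphs above, here’s a quick back-of-the-envelope sketch. It uses only the figures exactly as cited from the letter; no other numbers are assumed:

```python
# Back-of-the-envelope check of the survey figures cited in the letter.
members = 900          # ACP members who were sent the survey
non_members = 102      # non-members who were sent the survey
total_sent = members + non_members
print(total_sent)      # 1002 -- not the "845 invitees" the letter cites

invitees = 845         # the letter's stated denominator
opened = 485           # invitees who opened the email

print(round(opened / invitees * 100, 1))    # 57.4 -- dividing by the 845 invitees
print(round(opened / total_sent * 100, 1))  # 48.4 -- dividing by the full mailing

pct_lost_time = 59.4   # respondents who reported losing time
print(round(100 - pct_lost_time, 1))        # 40.6 -- share who did not report losing time
```

Note that neither denominator reproduces the 62.5 percent contact rate the letter reports, which is part of why I’m not following the math.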
The fact that this data was self-reported makes it less reliable than observed data. Their methodology relies on physicians remembering what their days were like a year ago (or two, or three, depending on when they went live on the EHR) and comparing that to the present. I don’t know about you, but my clinical time is significantly harder for plenty of reasons other than the fact that I’m on an EHR.
I’ve used EHRs for more than a decade and have to say that the Meaningful Use program (with its many required data elements) alone increased the time I spend charting. It wasn’t due to the EHR per se, but due to the required data. It’s kind of like when E&M coding was introduced – notes took longer because the volume of required data increased.
The authors seem to acknowledge this with their statement: “The loss of free time that our respondents reported was large and pervasive and could decrease access or increase costs of care. Policy makers should consider these costs in future EMR mandates.”
I also find it interesting that they didn’t mention results of any questions asking how many data functions were faster with the EHR. From my own experience (across eight or nine different platforms), there are always areas that work faster and better in the EHR and others that were faster on paper. But faster doesn’t equal safer, more reportable, or higher quality – it simply means faster. You can’t look at speed alone as a marker of EHR value, but I’ll take my EHR’s telephone message system over chart pulls and little pink pieces of paper any day.
When our medical group initially went live on ambulatory EHR, we actually did the time and motion studies pre-EHR and at multiple points post-EHR. We had data that showed that the EHR was neutral for time as well as for revenue. It didn’t matter that we had good data, however, because physicians naturally assumed that we “cooked the books” on it to show the EHR in a favorable light. That kind of bias is hard to overcome.
Looking at some of the raw data from our observations, we found the presence of a computer during documentation to be a confounder. Physicians were more likely to access other resources, such as UpToDate, formulary information, or our system’s clinical repository, while reviewing data and documenting. Those resources were simply not available to them in the paper world. It’s hard to separate that kind of computer use from the actual use of the EHR product when you’re considering how long it takes to complete your notes.
I would much rather take a little longer because I spent a few minutes validating something in UpToDate than to simply finish faster. I also spend time in the EHR making sure patients get appropriate personalized education handouts, which I couldn’t do in the paper world. A survey cannot control for these other types of computer usage within the context of the EHR. Because of single sign-on and CCOW, half of my physicians would be unable to tell you where the EHR proper ends and the rest of our data universe begins.
What’s the bottom line? Although this survey has scholarly trappings, if other research were conducted this way, it would have holes like a block of Lorraine Swiss. The fact that review and documentation take longer may not necessarily be a bad thing.
I’m interested to see what readers think about the publication of this letter. Have thoughts about it? Or a favorite Swiss of your own? Email Dr. Jayne.