Submit your article of up to 500 words in length, subject to editing for clarity and brevity (please note: I run only original articles that have not appeared on any Web site or in any publication and I can’t use anything that looks like a commercial pitch). I’ll use a phony name for you unless you tell me otherwise. Thanks for sharing!
By Skip Tumalu
You’re asking the right questions about X12 and clearinghouses. The answers, as is sometimes the case with EDI issues, may lie beneath the surface. And bravo for insisting on transparency. But do take the time to investigate, measure and test. Do not let the inability of your business partners to approach transparency trap you into a corner with no exit. Let’s take a quick snapshot of the surface issues and “what lies beneath.”
Eligibility on the surface
Provider transmits data elements per payer requirements. Payer responds with Eligible or Not.
The more non-required data elements the provider transmits, the more likely the payer will falsely respond Not Eligible. Why? There was a keystroke or other error in one of the data elements. The data set did not match. Not Eligible. Or, the payer eligibility system is old, cranky, or attempting to comply with governmental program rules and “just says no.” Don’t worry — false negatives on eligibility are usually less than 15%. Remember this when we discuss how much you might invest to get YOUR own revenue cycle to Six Sigma, as measured internally.
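The compounding effect of extra data elements is easy to model: if each transmitted element independently carries a small chance of a keystroke or mismatch error, the odds that at least one element fails to match (triggering a false Not Eligible) grow quickly with the element count. A rough sketch, where the 1% per-element error rate is an illustrative assumption, not a measured figure:

```python
# Rough model: probability that an eligibility check falsely returns
# "Not Eligible" because at least one of n transmitted data elements
# fails to match the payer's records. The per-element error rate is
# an illustrative assumption.
def false_negative_rate(n_elements, per_element_error=0.01):
    return 1 - (1 - per_element_error) ** n_elements

# Sending only the required elements vs. padding the request:
for n in (4, 8, 15):
    print(f"{n} elements -> {false_negative_rate(n):.1%} false Not Eligible")
```

At 15 elements and a 1% per-element error rate, the model lands near 14%, consistent with the "usually less than 15%" false-negative estimate above.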
Claims on the surface
Provider transmits complete claim and “scrubs” the data elements or pays for “scrubbing.” Payer accepts and pays some claims, but a double-digit percentage are rejected or pended. Providers don’t like to reveal their percentage of claims that are sent in a second time. Want to see real discomfort? Just ask about the percentage that must be sent in a third time. It is not always a single-digit number!
Payers have massive legacy rules tables for claim editing/adjudication. A payer might say they have XX thousand rules they apply to filter and route claims in their processing silo. If you run enough claims and keep track of results, it is not hard to show that the payer is wrong. It is not XX thousand rules, but perhaps 150% of XX thousand rules.
How can this be? Payers edit their adjudication tables. They do it frequently. The process may be less than Six Sigma, folks! Over time, they have a table full of best efforts — not a Six Sigma system. And no, you won’t get any payer to agree that this is remotely possible. This also means that if you measure diligently, your payers won’t be very responsive to this issue. Why? Can you imagine the consequences if they stepped up to solving this one?
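You can approximate the size of a payer's live edit set yourself by tracking the distinct rejection and pend reason codes you observe over time; if the count keeps growing past what the payer claims, the tables are bigger (or messier) than advertised. A minimal sketch, with made-up reason codes standing in for your real remittance data:

```python
# Estimate the size of a payer's live edit set from your own results:
# count distinct rejection/pend reason codes observed across claims.
# The codes below are illustrative stand-ins for real remittance codes.
from collections import Counter

observed_rejections = ["CO-16", "CO-97", "PR-204", "CO-16", "CO-45", "CO-97", "OA-23"]

distinct_rules_seen = len(set(observed_rejections))
most_common = Counter(observed_rejections).most_common(2)

print(distinct_rules_seen)  # 5
print(most_common)
```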
I met with a provider who admitted to having most claims submitted more than once and many claims submitted a third time before payment. I said, “Gadzooks – your Days Revenue Outstanding must be really large.” They said, “No, it is less than two days.” I asked how that could be. The response was, “We deposit payer reimbursement checks the same or next day — we have about 1.5 Days Revenue Outstanding.” I said that we need to count from the day the claim dropped and had my head handed to me — I was yet another false expert with no understanding of how the revenue cycle really works. This type of “unintended conspiracy” of weak partner systems and small misunderstandings can indeed cause some major pain!
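The disagreement in that anecdote comes down to where you start the clock. Measured from check receipt to deposit, Days Revenue Outstanding looks tiny; measured from the day the claim dropped, it shows the true carrying cost of resubmissions. A sketch with hypothetical dates:

```python
from datetime import date

# Two ways to measure Days Revenue Outstanding for one claim.
# All dates are hypothetical, for illustration only.
claim_dropped = date(2009, 6, 1)     # first submission of the claim
check_received = date(2009, 8, 10)   # payer check finally arrives
check_deposited = date(2009, 8, 11)  # deposited the next day

# The provider's view in the anecdote: count from check receipt to deposit.
dro_deposit_view = (check_deposited - check_received).days

# The revenue-cycle view: count from the day the claim dropped.
dro_claim_view = (check_deposited - claim_dropped).days

print(dro_deposit_view)  # 1
print(dro_claim_view)    # 71
```

Same claim, same cash: roughly one day by the first measure, over two months by the second.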
It is interesting to note that with big ERP installations down, the large systems integrators are selling a lot of engagements for “total Six Sigma healthcare revenue cycle reengineering.” I’ve chatted with some nice folks about the view above regarding eligibility and claims, the surface view, and the underside view. They’ve said, “So what, our re-engineering is underway and we are not Six Sigma yet.” I’ve asked them about the cost and payback for total re-engineering and heard of many projects investing more than $10 million with paybacks greater than one year.
Cheeky bloke that I am, I’ve asked what the “process quality” of eligibility and claims might be, based on local estimates of the “surface” and “underside” issues mentioned above. Folks will readily agree that process quality on eligibility may be 80% on a good day and claims process quality may be 60% on a good day. I then ask what happens when the middle of the 80% and 60% goes to Six Sigma. The response is, “Please don’t mention this to anyone — it was an important investment that we were counseled we had to make urgently.”
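The arithmetic behind that uncomfortable answer is simple: end-to-end process quality is roughly the product of the stage yields, so perfecting only the middle barely moves the total. A sketch using the "good day" estimates quoted above, with the internal-quality starting point as an illustrative assumption:

```python
# End-to-end process quality is roughly the product of stage yields.
# 80% eligibility and 60% claims are the "good day" estimates quoted
# above; the 95% internal starting point is an illustrative assumption.
eligibility_quality = 0.80
claims_quality = 0.60
internal_quality_before = 0.95
internal_quality_six_sigma = 0.9999966  # Six Sigma yield

before = eligibility_quality * internal_quality_before * claims_quality
after = eligibility_quality * internal_quality_six_sigma * claims_quality

print(f"before re-engineering: {before:.1%}")   # ~45.6%
print(f"after Six Sigma middle: {after:.1%}")   # ~48.0%
```

A $10 million re-engineering of the middle buys a couple of percentage points end-to-end while the partner-driven stages stay where they were.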
If you’re still doubtful, there is a test you can perform to understand “aggregate process performance” — not of your provider systems, but your total environment. Got Self-Pay? Got Unpaid Self-Pay? Sending any Unpaid Self-Pay to Early Collections? Screen your output file heading to Early Collections a day in advance — ONLY if you’re prepared to see 5% or more of the accounts with valid current eligibility that will pay the claim! If you get 7.5%, 10%, or more, be prepared to call it “an anomaly” and do re-testing over an extended timeframe.
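Mechanically, the test above is just a join between the pre-collections extract and current eligibility responses. A minimal sketch, with made-up account numbers and a stubbed eligibility lookup standing in for a real 270/271 check:

```python
# Screen accounts headed to Early Collections against current
# eligibility. Account numbers and the eligibility set are hypothetical
# stand-ins for your real extract and payer eligibility responses.
accounts_to_collections = ["A100", "A101", "A102", "A103"]
currently_eligible = {"A101", "A103"}  # stub: a real check would query payers

def screen(accounts, eligible):
    flagged = [a for a in accounts if a in eligible]
    rate = len(flagged) / len(accounts)
    return flagged, rate

flagged, rate = screen(accounts_to_collections, currently_eligible)
print(flagged, f"{rate:.0%} of pre-collections accounts have valid eligibility")
```

Anything much above zero in that flagged list is money headed to collections that a payer would have covered.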
You can do your own math on the implications this has for what payer eligibility responses and payer claims adjudication are doing to YOUR revenue cycle, regardless of your standalone process quality. Besides, don’t you think there might be a compliance issue you’d rather avoid in heading towards collections with folks covered by Medicare, Medicaid, or a commercial payer where you’re in-network? If you don’t have resources to do this screening, then it might be worth paying to get it done. And remember, this is hardly your fault. Even if your “process pipes” are Six Sigma, if you’ve got “gray water” in the eligibility data incoming and “gray water” in the claims back from payers, you are simply using a pristine Six Sigma solution to “pump gray water.” At least don’t promise that the new Six Sigma system will reach process levels that your business partners don’t support and have no capability of reaching. Prepare to measure and report the “grayness” of your business partners’ water.
What are the implications of these possibilities? (I don’t expect them to be real for you until you check it out in your environment with your own payer mix, systems, and data results.)
- Ignore processing charges at first. Instead, focus on process performance. If you’ve gotta pay to get process quality end-to-end, pay for performance before you get trapped chasing “false economy.”
- Expect weak results on eligibility and focus on making it as easy as possible for staff to check eligibility when and where it makes sense. Unless it is absolutely EASY, your results will only be worse than the typical “gray water result.”
- Expect >> 90% of claims to be accepted and paid as submitted, first time in. Impossible? Ask around. Quality solutions are not free and they are out there. Don’t settle for “we send the claims on as quickly as we can” or “we check each data bucket, for sure.” Use process metrics and announce that you’re headed for excellence. You’ll be surprised to see the world change around you. And yes, you may need to pay some small fees. Those are small compared to the cost of carrying one or two months of needless Days Revenue Outstanding at a time when banks and revenue bonds are “not behaving normally.” Your Treasurer can provide updates on that issue. Only ask if you have time to listen to a true tale of woe.
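The process metric implied by that last point is first-pass acceptance: of all claims submitted, what share was accepted and paid without any resubmission. A sketch with illustrative counts:

```python
# Track first-pass claim acceptance: the share of claims accepted and
# paid without resubmission. Claim IDs and counts are illustrative.
submissions = {  # claim_id -> number of submissions before payment
    "C1": 1, "C2": 1, "C3": 2, "C4": 3, "C5": 1,
}

first_pass = sum(1 for n in submissions.values() if n == 1)
first_pass_rate = first_pass / len(submissions)

print(f"first-pass rate: {first_pass_rate:.0%}")  # 60%
```

Reporting this number (and the third-submission share) every month is the "measure and announce" step the bullet describes.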
The Value of Clearinghouses
By Jim Denny
In theory, there should be no necessity for transaction or interface fees. The intent of HIPAA was to provide, and ultimately enforce, an interoperability standard. In reality, however, that hasn’t happened. This means that practices and hospitals must force the issue by refusing to do business with vendors that charge these fees. They must instead insist upon free and unlimited access to X12 transactions.
Within this imperfect environment, it’s also wise to recognize the value that clearinghouses bring to the current marketplace — hospitals and medical practices alike — through standardization, efficiency, and leverage.
First of all, if electronic transactions were truly standardized as noted above, today’s typical clearinghouse might indeed be redundant. But the truth of the matter is that different payers transfer files in divergent formats with varying content, supported by a wide range of service levels. Providers are saddled, in other words, with a myriad of technical challenges when it comes to claims and revenue cycle management. Advanced clearinghouses serve as an “EDI translator” that can streamline submissions, provide meaningful visibility into claims status and adjudication, and reduce days in A/R.
Secondly, clearinghouses give providers efficiency (and economies of scale) they otherwise would not have. Let’s say that providers across the country invariably run into problems submitting one type of claim with one specific payer. To make adjustments, each provider would have to modify its own system. A clearinghouse, however, could update its edits engine or change processes for all its clients, relieving them of monitoring and “fixing” payer-specific anomalies. This is particularly true for SaaS-based clearinghouses.
Lastly, clearinghouses provide operational leverage. Consider data warehousing and the business or clinical intelligence it can supply to providers. If information is locked in a payer-biased clearinghouse, providers will be unable to extract, aggregate and analyze data in ways that are meaningful — much less beneficial — to them. Payer-sponsored data clearinghouses perhaps provide a more cost-efficient option. But we must remember that their objective is to serve payer interests, not provider interests.
Provider-centric clearinghouses, on the other hand, can offer provider-focused information about performance, utilization, and outcomes, allowing providers to track key measures and gain leverage during contract negotiations.
Jim Denny is president, CEO, and director of Navicure of Duluth, GA.
Follow the Yellow Brick Road
By Craig James
Call me the EHR heretic or the guy whose sister the house crushed in the Wizard of Oz. My comments have nothing to do with how hard everyone is working, their professionalism, or their skills. So much for my disclaimer.
You can’t read a blog or Twitter post without tripping over hopeful accolades anticipating some miraculous intervention by one of the standards committees, the RHIOs, the HIEs, or the Meaningful Use or Certification Committees. Example:
State CIOs Get ‘To-Do’ List, HDM Breaking News, August 25, 2009 — The National Association of State Chief Information Officers has published a report giving guidance to CIOs as their states implement health information technology provisions of the HITECH Act within the American Recovery and Reinvestment Act.
The act requires state leadership in two primary areas: oversight for the planning and deployment of health information exchanges and management of the Medicaid incentive payments for meaningful use of electronic health records, the report notes.
“The HITECH Act placed a significant amount of new responsibilities on states in regards to state oversight for HIE and the planning and implementation grants for preparing for HIE,” the report states. “During this initial planning period, state CIOs must secure a seat at the table to establish themselves as key stakeholders and also to recognize strengths and identify weaker points that require resolution within their own offices relating to statewide HIT/HIE planning. They must ask themselves what they, with their unique enterprise view, can do to support and contribute to each of these areas.”
Let us remember the mission: the accurate and timely delivery of your records from A to B. You are 1,200 miles from home, unconscious, and are rushed to the ER in a clinic in Smallville. EMRs from your oncologist and cardiologist, your CT-Scan, and your nuclear stress test, along with a list of your meds, are in the hands of the nurse practitioner as she awaits the doctor’s arrival.
Now let the grownups apply logic. Hundreds of vendors, an equal number of standards — by definition, an oxymoronic statement — home-made EHRs, outpatient EHRs, EHRs serving as RHIOs, IPA EHRs, IPA RHIOs, real RHIOs, and HIEs. Certification and Meaningful Use — another oxymoron.
Here’s a simple question. Who among us can make a reasoned argument that the current plan will enable everyone to get from A to B in 3-5 years? Right now, we call it interoperability. It’s the fly in the ointment, and its degree of difficulty and costs are grossly underestimated. If you believe you can, I would love to see it articulated. I do not think the RHIO / HIE / Certification / Meaningful Use plan will work, nor do I think anyone who isn’t making revenue from the current plan can make a reasoned argument that it will. Couple that design with the fact that the vast majority of IT projects costing more than $10 million fail.
So what? In six to eight years we will have an open, national, browser-based EHR. Maybe we should spend time figuring out how that will work.
TPD’s Review of Semantic Web Concepts
By The PACS Designer
The Semantic Web is a term that some might find confusing when they hear about it from others. The Semantic Web consists of websites that can converse with each other to provide a more robust web experience. Sir Tim Berners-Lee, an English engineer, computer scientist, and MIT professor, is the director of the World Wide Web Consortium (W3C), which oversees the Web’s continued development. He is also the inventor of the World Wide Web, which was launched on December 25, 1990.
Berners-Lee in 1999 had a vision of what the Semantic Web should be. “I have a dream for the Web in which computers become capable of analyzing all the data on the Web — the content, links, and transactions between people and computers. A Semantic Web, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.”
In order to improve the World Wide Web (WWW) with more semantic capabilities, we need to review the current framework of the web. The World Wide Web is constructed using Uniform Resource Identifiers (URIs), the generic term for all types of names and addresses that refer to objects on the World Wide Web. The familiar Uniform Resource Locator (URL) is one kind of URI.
Another Web term is Resource Description Framework (RDF), which is intended to provide a simple way to make statements about Web resources such as Web pages and other online resources.
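An RDF statement is a subject-predicate-object triple. A minimal sketch using plain Python tuples (no RDF library) to show the shape of such statements, with hypothetical URIs and property names:

```python
# RDF models statements about Web resources as (subject, predicate,
# object) triples. The URIs and properties below are hypothetical
# examples, not real published data.
triples = [
    ("http://example.org/page/42", "dc:creator", "The PACS Designer"),
    ("http://example.org/page/42", "dc:subject", "Semantic Web"),
    ("http://example.org/page/42", "rdfs:seeAlso", "http://example.org/page/7"),
]

# A machine can now answer questions the raw HTML alone could not,
# e.g. "which resources are about the Semantic Web?"
subjects_about = [s for s, p, o in triples
                  if p == "dc:subject" and o == "Semantic Web"]
print(subjects_about)
```

This machine-queryable layer on top of ordinary pages is exactly what Berners-Lee’s “machines talking to machines” vision requires.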
Now, at the end of our first decade of the 2000s, we are set to embark on a move to a more interactive Web experience.
One way to improve the Web experience is to improve the linking capabilities to the various web resource storage locations.
The Universal Data Element Framework (UDEF) provides the foundation for building an enterprise-wide controlled vocabulary. It is a standard way of indexing enterprise information that can produce big cost savings through the linking of Web resources.
One of the early linked solutions available that employs semantic Web attributes is called “Twine.” Twine is a new way for you to collect online content — videos, photos, articles, Web pages, products — and bring it all together by topic, so you can have it in one place and share it with anyone you want. Twine can be called a “mashup for the Web 3.0 era.” All we need now is for Tim O’Reilly to say Web 3.0 is officially here!
So for healthcare collaboration, if we combine linked resources in a secure private cloud, we can create a place where decisions can be made to treat patients using a broader base of information sources.
Also, healthcare can really benefit from the move to employ more semantic Web concepts in the years ahead and begin to obtain more knowledge in the war against diseases!