Oracle doesn't need FDA approval. Most EHRs are excluded from the definition of a medical device by the 21st Century Cures Act.
Morning Headlines 2/16/24
250,000 VA Patients Are at Risk of Receiving Wrong Medication Due to Electronic Health Records Issue
The VA Office of the Inspector General reveals that patient medication histories are not transferring between the Oracle Health EHR used at five VA hospitals and the VistA system used at the VA’s other facilities, putting 250,000 veterans at risk of potential medication errors.
Revolutionizing the Emergency Department: MUSC Health’s new frontier in patient care
MUSC Health (SC) pilots a telehealth triage service at two of its emergency rooms in an effort to help patients get care more quickly, resulting in the rate of patients who leave without being seen dropping to almost zero.
Using AI to automate healthcare claims, RapidClaims launches with $3.1M
Automated medical coding and documentation software startup RapidClaims raises $3.1 million.
Using AI for healthcare claims. What could possibly go wrong?
The movie poster for the 1973 film Westworld pitched it as "a place where nothing can possibly go worng." Worng. It looks like a typo, but the savvy reader would have recognized it for what it was: a glitch.
I had an orange button with the same verbiage that was handed out when I saw the release. Wish I still had it.
I have been following the VA/Cerner integration, and both Oracle and the VA are causing this problem. The VA is not following standard clinical interoperability practices, and Oracle doesn't understand the variety of the VA data, nor do they seem to care.
The VA's FHIR program is FUBAR and will not work. Why would I say that? The VA is asking its integration experts to work with both hands tied behind their backs. There are 16 VISNs, and they all have different enumerations, dictionaries, coding translations, etc. Mapping those data elements into FHIR resources without knowing the sources of the data elements, the workflows that created them, the enumerations of the data sources, or the variety of the data will achieve absolute failure. And the VA is hiding all of these elements from the people it is asking to do the integration.
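To make that concrete, here is a small hypothetical sketch in Python (the VISN names and code dictionaries are invented, not the VA's actual files): when the same local code means different things in different VISNs, a mapper that isn't told where a value came from cannot choose the right FHIR translation.

```python
# Hypothetical sketch, not real VA dictionaries: the same local code can mean
# different things depending on which VISN's dictionary produced it, so a
# mapper that doesn't know the source cannot pick the right FHIR coding.

VISN_ALLERGY_DICTIONARIES = {  # invented per-VISN enumerations
    "VISN-07": {"100": "PENICILLIN", "101": "SULFA"},
    "VISN-20": {"100": "LATEX", "101": "PENICILLIN"},
}

def map_to_fhir_allergy(local_code, source_visn=None):
    """Build a minimal FHIR AllergyIntolerance dict from a local code."""
    if source_visn is None:
        # Without knowing which dictionary produced the value,
        # any translation is a guess.
        raise ValueError(f"Cannot map local code {local_code!r}: source VISN unknown")
    display = VISN_ALLERGY_DICTIONARIES[source_visn][local_code]
    return {
        "resourceType": "AllergyIntolerance",
        "code": {"text": display},  # still only text until bound to RxNorm/SNOMED
    }

print(map_to_fhir_allergy("100", "VISN-07")["code"]["text"])  # PENICILLIN
print(map_to_fhir_allergy("100", "VISN-20")["code"]["text"])  # LATEX
```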
Oracle is also failing: you can't integrate data into a solution if you don't understand where the data is coming from. And if Oracle isn't looking for that information, how can you map it into your solution? You can't, but data landing somewhere seems to be sufficient to call the integration good.
Remember that DUR (Drug Utilization Review) requires a cross-matrix of at least five elements to figure out whether a medication is safe to administer, or whether there are concerns that should be flagged and addressed. If you don't accurately integrate/migrate the data into the receiving system, the receiving DUR system cannot work.
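As a rough illustration of that cross-matrix idea (the specific element names below are my assumption, since the comment doesn't enumerate them), each DUR cross-check needs several coded inputs at once, and a migration that drops or fails to code any one of them makes that check quietly disappear:

```python
# Sketch only: placeholder codes stand in for RxNorm/SNOMED identifiers.
from dataclasses import dataclass, field

@dataclass
class DurInputs:
    new_order: str | None = None                 # the medication being ordered
    active_meds: list = field(default_factory=list)
    allergies: list = field(default_factory=list)
    problems: list = field(default_factory=list)
    creatinine_clearance: float | None = None    # example patient parameter

def runnable_checks(inputs: DurInputs) -> list:
    """Return only the DUR checks that actually have the data they need."""
    checks = []
    if inputs.new_order and inputs.allergies:
        checks.append("drug-allergy")
    if inputs.new_order and inputs.active_meds:
        checks.append("drug-drug")
    if inputs.new_order and inputs.problems:
        checks.append("drug-condition")
    if inputs.new_order and inputs.creatinine_clearance is not None:
        checks.append("dose-vs-renal-function")
    return checks

# A migration that carries the new order and active meds, but drops the
# coded allergies, problem list, and labs:
migrated = DurInputs(new_order="rx:example-drug", active_meds=["rx:other-drug"])
print(runnable_checks(migrated))  # ['drug-drug'] -- the other checks silently vanish
```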
My description of the VA FHIR integration methodology is accurate; my description of the Oracle integration methodology is asserted and not yet proven, but it is consistent with what they are seeing. The DUR functionality is quite well understood, and I know it will fail if you don't have a good migration.
To expand on this a bit.
The VistA data are unique to VistA. There are 16(?) different VISNs (grouped systems), each managed by different data managers, and within a VISN there are individual data managers as well. So within the VA health system, the data are not consistent. There is a Clinical Data Warehouse that attempts to align the data from the various VISNs, but now you are mapping, and mapping is always suspect. You certainly wouldn't use data from a warehouse to perform checks in a point-of-care system.
The data are also not aligned with the industry. Where Cerner, Allscripts, Epic, et al. would use a standard code set (or its equivalent) such as SNOMED, RxNorm, or LOINC, and indeed are required to use these code systems, the VA does not. There was an attempt to map their lab data to LOINC, but it was incomplete, incorrect, and inconsistent. The medication domain is also problematic and unmapped to foundational code sets. Not to mention that the allergy/intolerance code set does not use RxNorm for medication items.
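A tiny sketch of what an incomplete LOINC mapping does in practice (the local test names and the gap in the map are invented for illustration): whatever isn't bound to a LOINC code leaves the facility as display text that a receiving system can show but not compute on.

```python
# Hypothetical local lab dictionary -> LOINC, with a gap (illustrative only).
LOCAL_LAB_TO_LOINC = {
    "GLUCOSE,SERUM": "2345-7",        # LOINC for serum/plasma glucose
    "POTASSIUM": "2823-3",            # LOINC for serum/plasma potassium
    "HGB A1C (LOCAL METHOD)": None,   # unmapped local test
}

def export_result(local_name, value):
    loinc = LOCAL_LAB_TO_LOINC.get(local_name)
    if loinc:
        return {"system": "http://loinc.org", "code": loinc, "value": value}
    # No LOINC binding: the receiver gets display text only.
    return {"text": f"{local_name}: {value}"}

print(export_result("GLUCOSE,SERUM", "105 mg/dL"))          # computable result
print(export_result("HGB A1C (LOCAL METHOD)", "7.2 %"))     # arrives as prose, not data
```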
Drug Utilization Reviews (DUR) include several different cross-checks: prior adverse reactions, drug-drug contraindications, drug-condition contraindications, duplicate therapy, and more. The DUR systems are trained by each EHR to use the encoding system of that EHR. This is a problem for interoperability in general, but at least the Certified EHRs are consistent(ish) in using similar code sets. If I take data entered into that EHR and then add a medication that should trigger a warning, generally it will. If I bring data in from another system, the results are less consistent. One system I know of will drop anaphylaxis under certain conditions for an imported allergy to penicillin, which is obviously not good.
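Here is a minimal sketch of that failure mode, with assumed data structures rather than any vendor's actual DUR engine: the check is keyed to the code system the host EHR uses, so an imported allergy carried only as text (or in a foreign code system) never matches the rule, and the warning silently disappears.

```python
# Placeholder identifiers stand in for RxNorm ingredient codes.
PENICILLIN_CLASS = {"rx:penicillin-g", "rx:amoxicillin", "rx:ampicillin"}

def drug_allergy_check(order_ingredient, allergy_list):
    """Return warnings for coded allergies that match the ordered ingredient's class."""
    warnings = []
    for allergy in allergy_list:
        coded = allergy.get("code")  # only coded entries participate in the check
        if coded in PENICILLIN_CLASS and order_ingredient in PENICILLIN_CLASS:
            warnings.append(f"ALLERGY: {allergy.get('reaction', 'reaction unknown')}")
    return warnings

native   = [{"code": "rx:penicillin-g", "reaction": "anaphylaxis"}]
imported = [{"text": "PENICILLIN - ANAPHYLAXIS"}]  # arrived uncoded from another system

print(drug_allergy_check("rx:amoxicillin", native))    # ['ALLERGY: anaphylaxis']
print(drug_allergy_check("rx:amoxicillin", imported))  # [] -- no warning at all
```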
Now, the Oracle/Cerner DUR is supposed to use data translated from the VistA systems to process medications for a specific patient. Given that the translations are "problematic" (incomplete, inaccurate, and incorrect), there is little chance of accurately performing this critical patient safety function. An equivalent here would be using a propeller to drive a car.
This is most likely why Oracle declared it would "rebuild" the pharmacy function. There is no reuse of that "rebuild" for Certified EHRs, so you should question the wisdom of rebuilding the engine while driving down the highway.
The VA does not understand interoperability. They don't seem to participate in HL7 definitions, and they haven't inserted their specific needs into C-CDA, C-CDA V3, or FHIR, so there is no hope that suddenly you will be able to interoperate Certified EHRs with the VistA system. The VA has a FHIR API, but frankly:
–They didn't train the people developing the API on the intricacies of the VistA system.
–They didn't provide the sources of the data that the developers were asked to map into the API.
–They are not using a system-of-systems development/validation model for this migration.
–They didn't provide access to the VistA system to allow the BAs to conduct assumptive workflow testing.
–They didn't provide any analysis of the variability of the data within a particular field, or of the differences in usage of that field between VistA VISNs.
–Even if they did finally get the data translated correctly, they would have done that for exactly ONE VISN and would need to rinse and repeat for the next 15 VISNs.
For these reasons and more, there is no chance that interoperability will function within the VA. And because of that data failure (and more), a normalized Certified solution running on translated data will not work.
I am not blaming the VA, and I am not blaming Oracle. I am just stating that, given current Oracle/VA methodologies, there is zero opportunity for success. Unless, of course, you redefine success to be something completely different from the current definition.
Going out on a limb here.
Wouldn't Oracle's (apparent) interoperability strategy have a better chance of success than the VA's? I mean, it sounds like the VA isn't really interoperable now. Maybe they can share data within the VA, but outside of it? I don't see a pathway.
In my experience, as long as the standards bodies are up to the job, you get a lot farther by conforming to their standards. Asking other entities to "take seriously" interoperating with the VA is never going to work. The VA needs to get serious and support ICD, SNOMED, LOINC, and all the rest. The VA right now, in the form of VistA, is an island with an eccentric set of data standards.
Yeah, it will probably be a painful changeover. The VA has decades of history and investment in “their way”. But the days of proprietary standards in healthcare data interchange are over. What exactly is wrong with C-CDA, for example, that would mean the VA would have to invent their own system? Why doesn’t the VA support HL7?
There was a time when the standards organizations weren't up to the job. There was a time when the standards themselves did not exist. Those days are basically over.
To me, this is all part of the reason why VistA is a system of the past and not a system of the future. It's more than just the technology; it's about the mindset.
Unfortunately,
I can’t disagree with anything you wrote. It is important that they get this right for so many reasons, but this method is not working — and I don’t see how it can work.
Certainly, the safety mechanisms built into both EHRs (VISTA and Cerner) will have great difficulty working together. This has already been demonstrated by previous reports, and I can clearly see how this would happen. The casualties are numerous and varied, but they start to stack into a mountain of failure.
Part of my attitude relates to an experience I had. And this was within a single HIS.
I wanted to report on the patient diagnosis. There was an Admitting Diagnosis field that I really tried to use! Over and over again, I tried multiple different approaches (it was an optional character field; anything that fit could be entered).
What a disaster! 90-95% of the time, there was no data at all. Even when there was data, it was so inconsistent and unreliable that you could do nothing with it. Selecting or sorting was a complete waste of time.
Eventually I changed my reporting target. There was an ICD Diagnosis field. Night and day! Comparisons on the field worked, and worked properly. Sorting on the diagnosis values made sense for the first time ever. The field was reliably populated, so it made sense to report on it.
Yes, it’s true that organizational processes ensured the ICD field got populated and the Admitting field did not. But without the structure inherent in the ICD standard, even that would not have been enough. And because it was ICD, I didn’t have to “explain” to any data recipients what the content, structure, architecture, or meaning of those code values were.
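A toy illustration of the difference, with invented sample rows: grouping on the coded ICD field produces something you can sort and compare, while grouping on the free-text field fragments into blanks and variants.

```python
# Invented sample rows for the same eight encounters.
from collections import Counter

admitting_free_text = ["", "", "chest pain", "CP", "Chest Pain??", "", "r/o MI", ""]
icd10_coded         = ["R07.9", "R07.9", "I21.9", "R07.9", "I21.9", "R07.9", "R07.9", "I21.9"]

print(Counter(admitting_free_text))
# Counter({'': 4, 'chest pain': 1, 'CP': 1, 'Chest Pain??': 1, 'r/o MI': 1}) -- unusable

print(Counter(icd10_coded))
# Counter({'R07.9': 5, 'I21.9': 3}) -- sortable, comparable, reportable
```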
You don't understand ICD? Here's a website that explains it. Don't understand why those specific codes were used? Go speak to the clinicians and coders involved; I'm just an IT guy. It's no longer on me to support that end of things.
The standards made my life sane and productive.
For what it’s worth, the VA currently releases C-CDA (or HITSP C-32…my memory fails me) via eHealth Exchange and has done so for the better part of a decade.
Agreed,
The VA is using C-CDAs today for outbound communication, and they started with C32s back in 2012. I've looked at them but haven't tried to bring them into a system to see how they incorporate. Given the VA's lack of HL7-standard code sets, though, I can't see how they would be processed by a Certified EHR; they would essentially be a fax, and read-only.
If someone has experience bringing these records into their system from the VA, I would love to stand corrected.
It does look like the VA interoperability system is connected to the SSA, but that system is not an EHR, just a manual-read system. There is also talk that they are connected to Kaiser; it would be interesting to see how Kaiser uses those C-CDAs.
That was my understanding of C-CDA, and not just in relation to the VA.
C-CDA can output anything on record, even if fields are being misused, irrelevant data is part of the electronic record, necessary data is missing, and so on. It can too easily devolve into a record that a human being has a shot at decoding but that a computerized system will fail to interpret. Worse yet, it can devolve into something that no entity, human or computer, can decode. It all depends on the data quality at the source.
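Here is an illustrative fragment (hand-written for this comment, not taken from any real document) of how a structurally valid C-CDA entry can still be computable in name only: the coded element carries a nullFlavor and the meaning lives in narrative text, so the receiving system can render it for a human but cannot act on it.

```python
import xml.etree.ElementTree as ET

# Hand-written example entry: no code or codeSystem, just narrative text.
entry = """
<observation xmlns="urn:hl7-org:v3">
  <code nullFlavor="OTH">
    <originalText>Hx of heart trouble, see notes</originalText>
  </code>
</observation>
"""

ns = {"hl7": "urn:hl7-org:v3"}
obs = ET.fromstring(entry)
code = obs.find("hl7:code", ns)

if code.get("code") and code.get("codeSystem"):
    print("computable:", code.get("code"))
else:
    # All the receiver can do is render the text for a human to read.
    text = code.find("hl7:originalText", ns)
    print("display only:", text.text)
```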
“Well that’s true of any computerized system”, you might say! Garbage In, Garbage Out.
Yeah, but. If you become solely reliant upon C-CDA for data interchange, you may find that organizational pressure to improve data quality never reaches critical mass. After all, you have C-CDA, data is being shared, and the responsibility for interpreting that data falls upon an external entity.
In the realm of organizational politics and internal priorities, this is not a good combo.