Dr. Herzenstube Goes to AMIA–Monday
Dr. Herzenstube is a practicing family physician who can make nothing of it.
The first session I attended today was a panel on ICD-11 given by representatives of WHO, IHTSDO, and academic organizations involved in its development. ICD-11 will be the next revision of ICD. The general idea behind it is to harmonize ICD with SNOMED CT, so that users gain the benefits of SNOMED's polyhierarchy while ICD retains its capacity to meet the needs of epidemiologic analysis.
Bedirhan Ustun, a psychiatrist who manages terminology work for the WHO, was the first presenter. He explained that, unlike prior versions of ICD, ICD-11 will have an explicit content model. This means that each ICD-11 code will have underlying definitional modeling (as do SNOMED concepts). The work to build this has been initiated in collaboration with IHTSDO.
Jim Case of NLM and IHTSDO came next and explained that the goal of ICD-11 is to link SNOMED CT and ICD so that data can be captured once at the point of care, avoiding duplicate coding effort. He also explained one important point about ICD: as a classification system, its categories must be mutually exclusive and exhaustive, so that every case lands in exactly one category. This is what supports the epidemiologic and statistical use cases, and it explains why "other" codes are needed in ICD (something that never really made sense to me until now).
Chris Chute followed with a discussion of the SNOMED-ICD common ontology, which will provide the semantic anchoring of ICD-11. Jim Campbell from the University of Nebraska discussed some of the areas where the SNOMED CT and ICD-11 hierarchies are at odds and need to be reconciled. Harold Solbrig discussed the process of building the links between ICD and SNOMED, either as equivalence maps (A = B) or, where no single SNOMED concept matches, by describing the ICD category as a compositional SNOMED expression, along with automated testing for potential disconnects between the respective hierarchies.
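To make those two link types concrete, here is a minimal sketch in Python. This is my own illustration, not the actual tooling; the codes, field names, and structure are all made up for the example:

```python
# Sketch of the two ICD-to-SNOMED link types described above.
# All codes and field names are illustrative, not real ICD-11/SNOMED content.

# 1. Simple equivalence: one ICD category maps to exactly one SNOMED CT concept.
equivalence_map = {
    "ICD11:BA00": "SNOMED:38341003",  # hypothetical hypertension mapping
}

# 2. Compositional link: no single SNOMED concept matches, so the ICD
#    category is defined as a SNOMED expression, i.e. a focus concept
#    refined by attribute-value pairs (post-coordination).
compositional_map = {
    "ICD11:XX12": {
        "focus_concept": "SNOMED:125605004",        # hypothetical: fracture of bone
        "refinements": {
            "SNOMED:363698007": "SNOMED:40983000",  # hypothetical: finding site -> forearm
        },
    },
}

def snomed_definition(icd_code):
    """Resolve an ICD code to its SNOMED anchor, whichever link type applies."""
    if icd_code in equivalence_map:
        return equivalence_map[icd_code]
    return compositional_map.get(icd_code)
```

Automated consistency testing, as I understood it, then checks that the placement of categories in the ICD hierarchy doesn't contradict what their linked SNOMED definitions imply.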
This panel provided a really helpful degree of clarity on ICD-11 from the people at the very center of building it. It will likely be years before this gets used in the US, but it is good to have a sense of where things may be heading.
I also attended a presentation on the Clinical Quality Framework (CQF), an effort to harmonize standards for clinical decision support with those for quality measurement (nope, they're not already harmonized; yep, they definitely should have been from the beginning; hindsight is 20/20, etc. etc.).
Dr. Julia Skapik from ONC kicked off the presentation by describing a bit of the regulatory context around clinical quality measurement and clinical decision support, and the need for a unified way of representing the underlying logic that expresses the applicable standard of care. The holy grail toward which this work strives is that, if a provider organization configures its system to measure quality using a particular measure, it can enable clinical decision support based on the same underlying logic without any additional logic-editing work.
Marc Hadley from MITRE described the current standards for CQM and CDS and the output of ongoing work under the CQF umbrella to harmonize them. One such output is Clinical Quality Language (CQL), which has been issued as an HL7 draft standard for trial use (DSTU). CQL is a Turing-complete language designed to be a human-readable way of expressing clinical rules that is also machine-computable (via translation to an XML-based logical representation) and agnostic to data model.
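The "write once, use twice" idea is easier to see in miniature. Below is a sketch of my own in Python, not CQF output; real artifacts would express this in CQL over FHIR-based models, and the criteria are simplified stand-ins for actual chlamydia screening guidance:

```python
from dataclasses import dataclass
from typing import List

# Illustrative patient model; the point is that one pair of criteria
# functions drives both the alert and the measure.

@dataclass
class Patient:
    age: int
    sex: str
    sexually_active: bool
    screened_for_chlamydia: bool

def in_initial_population(p: Patient) -> bool:
    """Denominator-style criteria (simplified stand-in for real guidance)."""
    return p.sex == "female" and 16 <= p.age <= 24 and p.sexually_active

def screening_satisfied(p: Patient) -> bool:
    """Numerator-style criteria: the recommended screening was performed."""
    return p.screened_for_chlamydia

def cds_alert_needed(p: Patient) -> bool:
    """Point-of-care use: flag a single patient who meets the criteria
    but has not had the screening."""
    return in_initial_population(p) and not screening_satisfied(p)

def measure_rate(patients: List[Patient]) -> float:
    """Retrospective use: the same criteria evaluated over a population."""
    denominator = [p for p in patients if in_initial_population(p)]
    if not denominator:
        return 0.0
    numerator = [p for p in denominator if screening_satisfied(p)]
    return len(numerator) / len(denominator)
```

The design point is that the criteria live in exactly one place, so the quality measure and the decision support rule cannot drift apart, which is precisely the drift that separately maintained standards invite.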
In addition, Quality Improvement and Clinical Knowledge (QUICK) has been developed as a data model for use alongside CQL, derived automatically from FHIR base resources and the FHIR Quality Improvement Core (QICore) profiles. Kensaku Kawamoto described several pilots using artifacts based on these standards, which were able to represent rules for things like chlamydia screening and routine immunization. Tom Oniki discussed the Clinical Information Modeling Initiative (CIMI), a community of interest that has become an HL7 working group. While this work is not yet ready for prime time, the amount of progress that has been made is really impressive and the momentum seems substantial. The large lecture hall was filled to capacity, an indication of how vital the need is for a solution to this thorny problem.
The first session of the afternoon I attended was on accountable care organizations (ACOs), moderated by Gil Kuperman of New York Presbyterian. David Bates of Brigham and Women's Hospital discussed the use of claims data to identify patients at high risk for hospitalization, who are then assigned a care manager. They have seen a significant reduction in hospitalizations in this population since starting the work.
The most interesting part of his presentation, to me at least, was the use of what he calls Standardized Clinical Assessment and Management Plans (SCAMPs). Basically, a SCAMP consists of a small set of data elements clinicians are asked to document in particular clinical situations: for distal radial fractures, for example, a few details on the fracture type and whether or not the fracture was treated surgically. After a few weeks of collection, the data are shared with the physicians and collection continues.
What he found was that practice patterns at the start were highly divergent from one physician to another. After the data were shared, the variation all but disappeared, without any attempt to coerce or persuade anyone to change their practice patterns. A remarkable example of the Hawthorne effect.
David Dorr from OHSU described the state of Oregon’s experiments with developing approaches to coordinate healthcare for vulnerable populations. His research involves figuring out how to help medical practices perform medical home-related activities such as establishing care management plans, ensuring close follow-up from hospitalizations, and doing clinical quality measurement. While he and his colleagues have developed a population management tool, they have observed something that most practicing clinicians will be familiar with — clinicians need point-of-care reminders, care management workflow tools, etc. within the same system they use to manage other patient information (within the EHR, in other words).
David Kaelber of MetroHealth spoke about some of the real-world challenges of meeting payors' rules around ACO payments, including the fact that different payors often have slightly different requirements around data collection, population definitions, and quality measurement, requiring duplicate work for what amount to very similar measures.
David Bates described work at NYP with the Delivery System Reform Incentive Payment (DSRIP) program, an ACO-like program operated by the New York State Medicaid program. NYP's projects include everything from patient navigation services in the ED to an HIV chronic care program to a program to deliver palliative care. They did a formal analysis of the IT requirements, such as the ability to trigger notifications when key events occur, like a patient being hospitalized or a new patient status value appearing in the EHR. Among the lessons learned was that not all of the information flow can be EHR-based, since many of the providers they collaborate with don't have EHRs.
One of the other highlights of the day was the poster session. The posters were fairly varied, and as is typical for any scientific conference, a bit hit or miss. One that I found amazing was by Matthew Rioth and Jeremy Warner, two physicians at Vanderbilt, titled “Visualizing High Dimensional Clinical and Tumor Genotyping Data.” When understanding data requires looking at it and two dimensions just aren’t enough, innovative data visualization is necessary. While the examples they provided were primarily research-focused, such as generating new hypotheses regarding what genes are important in cancer behavior, some applied directly to clinical practice, like one that showed patterns of ordering of molecular profiling tests across multiple clinics in their organization.
As with earlier days of the conference, the accidental conversations with other attendees were as valuable as the presentations. One such memorable encounter was with Lisa, an epidemiologist working in a reproductive health program at a state health department. She is becoming an informaticist by necessity: to support her research, she needs to figure out how to get more and better data from the clinical practices her team funds.
To get data to the health department, these clinics currently either complete paper forms (!) or enter data manually through a Web-based portal. A few clinics have set up data entry forms within their EHRs to capture the necessary data, but this still requires duplicate data entry, since the forms can't pull in data from elsewhere in the patient record. So if the patient has been screened for chlamydia, even if that result is already in the EHR, it has to be entered a second time into the data element that will be sent to the health department.
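For what it's worth, the missing capability is not exotic. On a FHIR-enabled EHR, prefilling the health department's data element could look something like the sketch below. The server URL is hypothetical, the LOINC code is just one illustrative chlamydia test code, and a real integration would need authentication and a curated value set of screening codes:

```python
import requests

# Hypothetical FHIR endpoint; a real integration would use the EHR's
# actual base URL plus OAuth credentials.
FHIR_BASE = "https://ehr.example.org/fhir"

def latest_chlamydia_result(patient_id: str):
    """Search the EHR for an existing chlamydia test result (a standard
    FHIR Observation search by patient and code) so the reporting form
    can be prefilled instead of asking staff to re-enter the data."""
    params = {
        "patient": patient_id,
        "code": "http://loinc.org|21613-5",  # illustrative chlamydia test code
        "_sort": "-date",
        "_count": 1,
    }
    resp = requests.get(f"{FHIR_BASE}/Observation", params=params)
    resp.raise_for_status()
    entries = resp.json().get("entry", [])
    return entries[0]["resource"] if entries else None
```

None of this is available to the clinics Lisa works with today, which is rather the point.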
It was a sobering moment, amidst the promise of future progress all around us at AMIA, to realize how pedestrian the current state is in so many ways. It also drove home to me the ever-increasing burden we’re putting on practicing clinicians to engage in data-entry activities that, while they may serve a noble goal, make it harder and harder to focus on the immediate needs of the patient in front of them.