
HIStalk Interviews Amy Abernethy, MD, PhD, Chief Medical Officer, Flatiron Health

July 21, 2014

Amy Abernethy, MD, PhD, is SVP/chief medical officer of Flatiron Health of New York, NY.


You’re going from ivory tower research and patient care to work for a start-up run by a couple of twenty-something Internet millionaires who have no healthcare experience. What do you hope to accomplish at Flatiron Health that you couldn’t do at Duke?

For the last decade or so, I’ve been working under the basic premise that a fundamental challenge in better bridging research and clinical care was the lack of interoperable or real-time data. I’ve been working on this problem from every direction, usually with cancer care and research as my demonstration model. Sometimes my approach was to focus on how to create the data stream. Sometimes I focused on cyber infrastructure. A lot of other times, my focus was from the point of view of, “If you have the data, what would you do with it next?”

In this vein, I thought about the context of clinical use as well as other problems down the road, like storing the information for research, quality, etc. It has been clear over and over again that a key bottleneck to solving the problem has been creating the right kind of data infrastructure, one that is large enough and represents a broad enough footprint of the whole population.

About a year ago, I started learning about what Flatiron was trying to do. It’s interesting, the way you described them: “Internet millionaires who didn’t know anything about health IT or healthcare.” That’s exactly where I was when I first started talking to them. They would call me, I would give them a hard time on the phone, and that was otherwise the end of the conversation. But every single time I said to them, “OK, here’s what I think you need to solve and here’s what I think you need to do next,” and then, a month or two later, they would call me back up and they had done it.

Over the course of about six to eight months, they advanced a series of what I thought were critical steps to solving this problem, at least within the cancer space. By March, the convergence of those steps got me to the point where I said to myself, “If I’m going to truly work on this problem, solving it and taking it to the next level, then I need to not be watching from the ivory tower, but right in the middle of it. I need to help lead it forward.”

 

It sounds as though you’re buying into their premise that oncology needs to be disrupted.

I am absolutely buying into that premise. From the standpoint of being an oncologist, I have sincerely believed for a very long time that it needed to be disrupted. But I feel like I have been playing around with how to disrupt it, nibbling on the edges rather than getting into the center of the story.

As this year has progressed and I’ve been talking more and more to Flatiron, working with important groups like the American Society of Clinical Oncology, laying out a roadmap for learning healthcare, etc., it became clear that solving this problem was part of the major disruption action.

 

Oncology is more patient-centered and longitudinal, treating patients for years. In your TEDMED talk, you talked about using data both from providers and from patients themselves more effectively. How do you see all of that fitting together, and what’s the patient’s role in creating this data?

I’m going to take that question into two parts. On one side, I’m going to talk a little about why I think oncology is a unique space, then also talk about what I think the role of patients is.

From the standpoint of oncology being a unique space, in 2009 or so there was a paper in Health Affairs by a guy named Lynn Etheredge that set out the premise that if we’re going to solve the Medicare dilemma (in other words, make Medicare sustainable), we need to attack it from the point of view of oncology. My point is that I’m not the first person to say what I’m about to say, but it has started to crystallize and become clear over the last three to four years.

Oncology is unique because of some of what you said, which is there’s a longitudinality to it. We follow patients very intensely and have very close connections over time. It’s also a space where the science and the clinical care meet.

If you want to solve problems in learning healthcare, where the science is as visible and as expected to be a part of the clinical space as the rest of clinical medicine, oncology is a good place to do that. It’s a place where the conversation around a patient being involved in clinical trials is a given, not an extra conversation on the side. Then there’s an inherent urgency to cancer care and research, and an inherent patient and family centeredness to it.

Then, frankly, it costs a lot of money. The expense of cancer care is going up both because cancer is becoming one of the dominant causes of death, if not the leading cause of death, worldwide, and because interventions are getting more and more expensive.

We’ve got this confluence of reasons that make oncology a good use case, a demonstration model. It’s not the only place we’re ultimately going to need to solve this problem, but it’s a good place to start.

The other question that you had was something that I really believe in, which is that patients shouldn’t be a sideline in the story, but need to be central to the story. When we talk about learning health systems, it’s as if the unit of goal optimization is the health system itself. But shouldn’t it be that we’re optimizing healthcare because it’s better for people and for patients? Instead of optimizing healthcare so that the hospital makes more money or the health system is financially sustainable, let’s focus on better care for patients, with improvement of the health system as a byproduct. That’s a much better model.

I always start off my thinking about how to tackle these problems with the patient at the center of the model. An interesting thing happens when you do that. One of the big issues in learning health systems is data linkage — the ability to take care of populations, the ability to follow people longitudinally over time. When you center the conversation on the patient first, it is much easier to think about how to solve some of those problems.

I have found that by disrupting even our way of thinking about learning health systems so that the patient is the central unit of what we’re thinking about as opposed to the health system being the central unit of what we’re thinking about, we approach solving a lot of problems much differently and smarter.

We’re also in an interesting place where the kind of data sets that we’re going to have in the future aren’t just going to be, for example, electronic health record data or administrative data. It’s going to be data generated by patients, by people, wherever they are.

I started off doing this work in patient-reported outcomes and thinking about how we ask about their symptoms, their quality of life, what is meaningful as it relates to health and healthcare. It turns out that technology enables us to imagine a world where you can ask a patient about symptoms sitting in the clinic waiting room or you can ask about symptoms when the person is sitting in their home in Asheville, North Carolina. You can follow people in between the visits, etc., gathering a much clearer picture of the longitudinal story and implications of different health interventions. 

The land of patient-generated data is getting more and more interesting. The ability to use biometrics and sensors and understand what our world looks like minute to minute and day to day from an individual person’s viewpoint really changes the landscape of how we use big data to solve problems in healthcare. Think of glucose data not just as a data point generated by the hospital lab, but as glucometer-based data coming from the home.

We’ve been collecting these kinds of data for a long time. The home glucometer is nothing new. Pain became the fifth vital sign in the 1990s. But we haven’t really systematically thought about how this is a part of our national data set in order to solve the problems of learning healthcare. When it comes to patient centricity, it shouldn’t just be a byline, but part of the way you think about designing and developing our systems.
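To make that concrete, here is a minimal sketch in Python of what it could look like to keep a patient-reported symptom score and a home glucometer reading in the same longitudinal record, keyed to the person rather than to the facility that captured the data. The class and field names are illustrative assumptions, not any actual Flatiron or EHR schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Observation:
    """A single patient-generated data point, wherever it was captured (illustrative)."""
    patient_id: str          # the person, not the hospital, is the unit of linkage
    kind: str                # e.g., "pain_score" or "glucose_mg_dl"
    value: float
    captured_at: datetime
    source: str              # "clinic_waiting_room", "home_app", "home_glucometer", ...

@dataclass
class PatientRecord:
    """Longitudinal record that accumulates observations between visits."""
    patient_id: str
    observations: List[Observation]

    def add(self, obs: Observation) -> None:
        assert obs.patient_id == self.patient_id
        self.observations.append(obs)

    def timeline(self, kind: str) -> List[Observation]:
        """All observations of one kind, ordered in time, for longitudinal review."""
        return sorted(
            (o for o in self.observations if o.kind == kind),
            key=lambda o: o.captured_at,
        )

# Example: a symptom score reported from home and a home glucose reading
# land in the same patient-centered record.
record = PatientRecord(patient_id="p-001", observations=[])
record.add(Observation("p-001", "pain_score", 4, datetime(2014, 7, 1, 9, 0), "home_app"))
record.add(Observation("p-001", "glucose_mg_dl", 132, datetime(2014, 7, 1, 7, 30), "home_glucometer"))
pain_history = record.timeline("pain_score")
```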

 

When people think of oncology data lately, they’re probably thinking about applying genomic information to treatment decisions or sharing protocols from major cancer treatment centers. How do you see all that fitting together, particularly on the genomics side?

The genomics side again is a really nice use case. I don’t think you or I believe that genomics is going to be the only scientific story in the future. There are going to be a lot of others. But if we can start to get our heads around how we merge what’s happening within the context of life sciences and basic sciences with clinical annotation of basic science data, putting biological discoveries into the context of what happens for individual patients, our science will be much better.

Those two pieces need to come together. In order for that to happen, we need to do a lot of things. One is we need the cyber infrastructure that allows that to happen. It’s the combination of bioinformatics as we’ve classically thought about it plus clinical informatics and applied informatics, and the emerging combination of these, including dealing with everything from storage to data quality to data use issues. We also need to start thinking about how much information we really need to store for a particular patient, how we analyze it, what the right research to conduct is, and what that should look like.

Another example of what we’re going to need to deal with is trying to get our heads around if we did have a cyber infrastructure, how do we thoughtfully manage the security, confidentiality, and privacy issues? If we are bridging between questions in clinical research and healthcare quality, how do we deal with questions of permissions, consent, and human subjects protections? These pieces are starting to crystallize, but we have a long way to go.

The genomics use case also takes us into the clinical applications side. As we start to have more genomics-informed cancer care, for example, how do we help clinicians and patients make snap, very quick, well-informed decisions at point of care so that we’re surfacing in real time the right combination of this person’s genomic profile, coupled with what we know are the right drugs for that particular clinical scenario, and understanding that there are limitations to what’s possible depending on reimbursement scenarios? It needs to be the complete complement of data in order for clinical decision support systems to be truly useful and not annoying. As a very basic example, if we surface genomics plus drug information independent of reimbursement, we’re not doing anybody any good.
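As an illustration of that point, here is a minimal sketch in Python of a decision-support check that only surfaces a drug when the patient’s genomic profile matches and reimbursement coverage is in place. The variant names, drug list, and coverage table are hypothetical stand-ins, not a real knowledge base or payer feed.

```python
from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class DrugOption:
    name: str
    required_variants: Set[str]   # genomic alterations the drug targets (hypothetical)

def surface_options(
    patient_variants: Set[str],
    candidate_drugs: List[DrugOption],
    coverage: Dict[str, bool],    # drug name -> covered by this patient's plan (hypothetical)
) -> List[str]:
    """Return only drugs that match the genomic profile AND are reimbursable.
    Surfacing genomics plus drug information independent of reimbursement helps no one."""
    recommendations = []
    for drug in candidate_drugs:
        genomically_indicated = drug.required_variants <= patient_variants
        reimbursable = coverage.get(drug.name, False)
        if genomically_indicated and reimbursable:
            recommendations.append(drug.name)
    return recommendations

# Hypothetical example: EGFR-mutant lung cancer, two candidate drugs, one genomic match.
options = surface_options(
    patient_variants={"EGFR_L858R"},
    candidate_drugs=[
        DrugOption("erlotinib", {"EGFR_L858R"}),
        DrugOption("crizotinib", {"ALK_fusion"}),
    ],
    coverage={"erlotinib": True, "crizotinib": True},
)
# options -> ["erlotinib"]
```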

Ultimately, solving these problems for genomics and, along those lines, next-generation sequencing, within the context of cancer care, presents us with a great use case that’s going to be replicated multiple times.

 

Oncology is a lightning rod from a societal perspective. Hospitals that suddenly start treating oncology patients as outpatients because they can mark up the visit higher than oncologists in the office, for-profit cancer chains, oncologists paid or incented to administer more expensive drugs, a lot of pharma influence, the pharmacoeconomics of expensive drugs versus what benefit the patient gets. All of those issues interfere with the pure science and medicine of how cancer is treated. Do you see that being something that Flatiron will help resolve?

This is the reason why data is the bridge. All of those problems have, as a foundational or fundamental underpinning, the need for discrete, interoperable data that can be reused to address each of those things simultaneously. Whether you’re actually trying to make the science smarter or you’re trying to optimize reimbursements, you need essentially the same data points to do so.

One of the reasons that I made the jump from academia to industry is to try and figure this out. Resolving all of these problems means that first you’ve got to deal with the data bottleneck. But at the same time, you need to be doing R&D work, imagining a world when the data bottleneck is solved and answering the question of “and what do I do next.” You have to be ready to work through all of those different, as you said, lightning rod questions, which is going to take a lot of work and practice.

While ultimately the data are the substrate, and producing the data streams that can be analyzed to solve those different problems is a fundamental underpinning, after that you still need to advance the work in the analytics space, align culture, sort out processes including scientific methods, and pull all of the pieces together. I have this one talk that I always give on the convergence of personalized medicine, comparative effectiveness research, healthcare quality, healthcare optimization, and patient centricity. If you take all of those, the one common element is interoperable data.

 

Along those lines, along with the announcement of the Google Ventures investment in Flatiron was its acquisition of Altos Solutions and its oncology EMR. Was that done as a way to get quick access to a lot of oncology information without having to do individual integration with the variety of EMR systems used by oncologists and hospitals?

There are a couple of pieces to the answer here, so I’m going to take them separately. First of all, the way that Flatiron is doing its work is EHR independent. The idea is essentially to extract the data from the back end and use a process of technology-enabled chart abstraction and other techniques to bring it into a common data model. This dataset can then be integrated with other data feeds like the Social Security Death Index. It doesn’t matter if it’s Varian or iKnowMed or Epic Beacon from an oncology EHR standpoint.
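To illustrate what “EHR independent” can mean in practice, here is a minimal sketch in Python of normalizing back-end extracts from different source systems into one common record shape and then joining an external mortality feed. The source system names, field mappings, and CommonRecord shape are assumptions for illustration, not Flatiron’s actual data model.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class CommonRecord:
    """One row in an illustrative common data model, whatever EHR it came from."""
    patient_id: str
    diagnosis: str
    source_system: str
    date_of_death: Optional[str] = None

def normalize(raw: Dict[str, str], source_system: str) -> CommonRecord:
    """Map a raw back-end extract into the common shape.
    Each source system names its fields differently; these mappings are made up."""
    field_map = {
        "system_a": {"patient_id": "pt_id", "diagnosis": "dx_desc"},
        "system_b": {"patient_id": "mrn", "diagnosis": "primary_dx"},
    }[source_system]
    return CommonRecord(
        patient_id=raw[field_map["patient_id"]],
        diagnosis=raw[field_map["diagnosis"]],
        source_system=source_system,
    )

def link_death_index(record: CommonRecord, death_index: Dict[str, str]) -> CommonRecord:
    """Enrich the common record with an external mortality feed, keyed by patient."""
    record.date_of_death = death_index.get(record.patient_id)
    return record

# Example: two extracts from different EHRs end up in the same shape, then get a death date.
death_index = {"p-001": "2014-05-02"}
a = link_death_index(normalize({"pt_id": "p-001", "dx_desc": "NSCLC"}, "system_a"), death_index)
b = link_death_index(normalize({"mrn": "p-002", "primary_dx": "CLL"}, "system_b"), death_index)
```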

The addition of Altos revved the engine, because at least now there’s one cloud-based oncology EHR that has essentially a single instance and doesn’t require a different setup for every single site. But it is really just one additional extraction into an overall model. That’s the first point of efficiency.

It also catalyzes a jump to the next level in terms of the number of oncologists, and therefore the patients, represented in Flatiron’s national footprint. Those two things are important near-term wins and part of why Flatiron bought Altos, but now you’re going to hear Amy’s part of the story.

If you take what I just said, and I love the way you said it was a lightning rod, oncology is a lightning rod for all these pieces coming together: not just solving the science and genomics, but the comparative effectiveness research, figuring out how to optimize healthcare, etc. As I mentioned, data is the fundamental substrate, but then you have got to learn what to do with it next.

A lot of that also is clinical decision support for personalized medicine and other interfaces directly in the clinic at the right time with doctors and patients to make healthcare more efficient, patient centered, and of better quality. For example, better allocation of care along predefined evidence-based pathways and monitoring of whether the care provided actually aligns with the evidence. The availability of real-time education.
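As a small illustration of the pathway-monitoring idea, here is a sketch in Python that checks whether the care actually delivered matches a predefined evidence-based pathway for a clinical scenario and computes a basic adherence rate. The pathway table and case data are placeholders, not real treatment guidance.

```python
from typing import Dict, List

# Hypothetical pathway table: clinical scenario -> regimens the pathway endorses.
PATHWAYS: Dict[str, List[str]] = {
    "stage_iv_nsclc_egfr_mutant": ["erlotinib"],
    "stage_iii_colon_adjuvant": ["FOLFOX", "CAPOX"],
}

def on_pathway(scenario: str, regimen_given: str) -> bool:
    """True if the regimen actually provided aligns with the predefined pathway."""
    return regimen_given in PATHWAYS.get(scenario, [])

def adherence_rate(cases: List[Dict[str, str]]) -> float:
    """Share of cases where delivered care matched the pathway, a basic monitoring metric."""
    if not cases:
        return 0.0
    matches = sum(on_pathway(c["scenario"], c["regimen"]) for c in cases)
    return matches / len(cases)

# Example: two of three hypothetical cases align with the pathway.
cases = [
    {"scenario": "stage_iv_nsclc_egfr_mutant", "regimen": "erlotinib"},
    {"scenario": "stage_iii_colon_adjuvant", "regimen": "FOLFOX"},
    {"scenario": "stage_iii_colon_adjuvant", "regimen": "single_agent_x"},
]
rate = adherence_rate(cases)  # -> 0.666...
```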

Altos as a cloud-based EHR will provide Flatiron with a beautiful, national-scale living laboratory to try out all the different ways of using and reusing data in the context of what an EHR can do for you. It’s a near-term win in terms of data sets and efficiency, but the real big win here is a national living laboratory where Flatiron and clinical partners can work together to use technology tools to make cancer care better. Now that’s a use case.

 

Other than that acquisition, $130 million is a pretty big investment for a startup. How will that money be used?

A key aspect of the focus of Flatiron for the next two years or so is going to be making sure that the corporate philosophy is well attended to. This includes building the tools that are needed, making sure that clinical practices are well served in terms of having their data extracted and getting meaningful, processed data back that’s actionable at the point of care, and scaling the technology development side in order to support key data partners like the life sciences. We need to ensure that this happens efficiently and with the right kind of engineering focus. That’s going to be a big piece of it.

There’s also ongoing work on how we surface this information: optimal data visualization solutions, how to help clinicians and practice administrators understand the information as efficiently as possible, and how we optimally interface with patients. There’s already a current product, OncoAnalytics, that allows practices to see their data in a dashboard format. It’s really good and certainly much better than anything they’ve already got. But how do you really rev that engine up for data users of all types? That’s going to be a place of substantial investment as we think about how we can get more and more information to practices, life science partners, health systems, researchers, professional bodies, etc.

Why is that so important? We need to see all users of the data, doctors and patients and health systems and sponsors, as key constituents. To create a national data set, it needs to be sourced from many, many places, and those different contributors need to see value in why they want to keep participating and contributing. And it needs to be used. Data quality improves when data are used, not hoarded. Serving those constituents is a critical focus.

 

Do you have any final thoughts?

One of the things that has been interesting for me personally is making this jump. I haven’t left academia entirely. I still have a 20 percent footprint at Duke, which I maintain so that I can keep working with clinicians and others on solving the problems that we will be able to solve when the data bottleneck is resolved, on mentoring, and on other aspects of R&D.

While it’s clear to me that Flatiron is the right vehicle, with the scale and talent needed to solve this data bottleneck, it was also important to continue to develop the future talent that will be needed to support the next steps in the vision. That’s where my Duke job comes in. Academia offers a unique place for growing the next generation. We all must keep our eye on the big vision, hammer hard on the key tasks that have to be sorted out, and prepare for the exciting future.




Currently there are 3 comments on this article:

  1. Fantastic, next-generation thinking that merges patient-centered concepts with EMR. Data is the key and we need to think of it as a malleable resource to be used creatively in future applications. It is gratifying to read this interview — gives me hope. Thank you Dr Abernethy.

  2. “Data quality improves when data are used, not hoarded.” I’m excited to see what Flatiron can do in this space and if the model that they create can be reused to solve problems with other specialties.

  3. Great interview. I first met Amy last year. From my conversations, it is clear that Amy is a thoughtful, pragmatic oncologist who is interested in real change. I applaud her taking the leap to join Nat and Zach at Flatiron. Oncology (and, frankly, all medicine) should be viewed as a science based on informatics. Today, we have 10,000 very smart oncologists all working in silos with no interest or ability to share data. If Flatiron can aggregate a large portion of the US oncology data and create a national oncology database with outcomes data, this will be HUGE.

    Ten years ago our team at IntrinsiQ tried doing what Flatiron is attempting. We succeeded only at a very small scale. Sadly, the biggest obstacle IntrinsiQ faced (and Flatiron will face) was getting the Academic Medical Centers (AMCs) to cooperate. Given Amy’s credentials, maybe she can figure out how to get Duke and the other AMCs as collaborating partners – a tall task, but if anyone can do this, Amy can.

    Finally, I have a strong belief that today’s problem of doctors not sharing data is not a technology problem. The hardware and software exist and can be implemented relatively easily to make this happen. We do not have a “technology problem”. We have a “business model problem”. Flatiron needs to think of the business model that will encourage health systems to share data. Doctors will only share data if it is financially beneficial for them to do so. Once Flatiron figures this out, they will be extremely successful.
