
HIStalk Interviews Eric Rosow, CEO, Diameter Health

January 26, 2022

Eric Rosow, MS is co-founder and CEO of Diameter Health of Farmington, CT.


Tell me about yourself and the company.

I’ve been in healthcare tech for about 30 years. I’m a co-founder of Diameter Health, along with John D’Amore, and I serve as the company’s CEO.

I started my career as a biomedical engineer with Hartford Healthcare. I’ve always been drawn to solving problems at the intersection of tech and healthcare delivery. What I especially love is helping to build, and being part of, mission-driven, high-performance teams. Our mission at Diameter is simply to make data universally accessible, organized, and actionable for better health and more efficient healthcare.

We have been at this for almost 10 years and we stay focused on this core capability, which we call upcycling. We have been able to process clinical data and patient records for nearly half the country across multiple market segments, including payers, federal and state governments, HIEs, life insurers, and HIT partners. The common thread across all these folks and partners is that they all recognize the challenges and complexities of wrangling multi-source, multi-format raw clinical data that is often dirty, inconsistent, and incomplete.

Has wider use of technology building blocks such as FHIR and APIs exposed the problem of data that falls short in quality, usability, and interoperability?

We are excited about FHIR and the standards that it brings to offer a much more efficient means to exchange data and to pull data. In our early days, we thought of data as digital, but it is like crude oil. It’s in the ground, in tanks, and in trucks. It’s digital, but it’s crude. We look at the market in three broad segments. We need pipes to move and aggregate the data. We need the refinery to clean up and enrich that data. Then we need to address the use cases where you need high octane fuel to run different engines, whether it be a moped or an F-16.

FHIR makes the pipes much larger and puts a lot more pressure behind it, so it is amplifying the need for cleaning up the data. We think that’s a critical challenge that people are seeing now. FHIR is amplifying the understanding of how dirty the data is in terms of incompleteness, duplication, and just plain old dirtiness.

What did you think of the recent study that found that even sites that use the same interoperable EHR can’t necessarily exchange data?

That’s the driver of why this company was founded. I was moved years ago at the HIMSS conference by hearing Google’s Eric Schmidt give a keynote where he talked about how healthcare has this compelling need for a second tier of data. He concluded that these primary data stores of EHRs have to be supplemented, not replaced, with that second tier. He went on to emphasize that in his 40 years in enterprise software, he has seen this phenomenon repeat itself over and over.

That’s exactly what is happening today in the interoperability landscape, and frankly, what is needed. It’s also super exciting because the second tier of data can unlock massive opportunities for innovation, better workflows, and better outcomes.

To give you a real-world example of a second tier of data, we all use and benefit from apps that use GPS coordinates, such as Uber, Lyft, Waze, and Apple and Google maps. None of those apps would work if GPS locations were inconsistent, because you can only have one set of coordinates for a given location. In healthcare, we literally have hundreds of ways in which diagnoses like CHF, or COVID status, or lab values like HbA1c, even from the same EHR, are represented inconsistently and cannot be reliably exchanged. We feel that it is critical to let these innovators and developers focus on innovating and not the dirty work of normalizing data. Once you can do that, then AI and machine learning algorithms work superbly at scale when they can ingest clean data.
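To make the GPS analogy concrete, here is a minimal sketch (not Diameter Health's implementation) of what collapsing inconsistent representations of the same lab looks like: several local or free-text HbA1c codes from different source feeds are all mapped to one canonical LOINC code, just as one location has one set of coordinates. The mapping table and record fields are hypothetical.

```python
# Canonical LOINC code for HbA1c; local codes below are hypothetical examples
# of the many inconsistent forms seen in raw clinical feeds.
CANONICAL_HBA1C = "4548-4"  # LOINC: Hemoglobin A1c/Hemoglobin.total in Blood

LOCAL_TO_LOINC = {
    "HGBA1C": CANONICAL_HBA1C,
    "A1C": CANONICAL_HBA1C,
    "hemoglobin a1c": CANONICAL_HBA1C,
}

def normalize_lab(record: dict) -> dict:
    """Return a copy of the record with a canonical LOINC code attached."""
    key = record.get("local_code", "").strip()
    loinc = LOCAL_TO_LOINC.get(key) or LOCAL_TO_LOINC.get(key.lower())
    return {**record, "loinc": loinc}

# Three feeds, three spellings, one canonical code after normalization:
feeds = [
    {"local_code": "HGBA1C", "value": 7.2},
    {"local_code": "A1C", "value": 6.9},
    {"local_code": "hemoglobin a1c", "value": 8.1},
]
normalized = [normalize_lab(r) for r in feeds]
assert all(r["loinc"] == "4548-4" for r in normalized)
```

In a real system the lookup table would be a terminology service rather than a hard-coded dict, but the principle — one canonical code per concept — is the same.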

How can we improve healthcare when we look at dirty data, when 80% of the allergies are not coded appropriately — and we’ve found in our work that 30% have no code at all — 70% of lab results don’t use the right vocabulary, and almost half don’t use LOINC? We’ve also found that over 40% of medications don’t have the right coding to run quality measures. That is ubiquitous and why this is such an important field that we are so committed to.

What business models are being created or improved with the wider availability of healthcare data?

As I look back at our journey of almost a decade, it has been about following the data. We went after the health information exchange market in 2014. Willie Sutton said that he robbed banks because that’s where the money is, and in our case, that’s where the data was. We wanted to go there, not just because they had Epic, Cerner, Meditech, Athenahealth, or Allscripts, but because they had over 100 certified EHR vendors.

Cutting our teeth in that foundational area where all the data is being aggregated has been so valuable. The experience and scar tissue that we developed during those few years allowed us to expand into other markets, including the VA, payers, HIT vendors, and even life insurance, which wasn’t a market we were thinking a lot about before COVID. But it’s an interesting example of how you can have one core capability that crosses multiple markets and therefore multiple use cases and business opportunities.

The early goal was for hospitals to be able to exchange data, but now many players are creating data that should be part of a longitudinal patient record. Is technology adequate for creating that patient record from sources such as pharmacies, urgent care centers, and insurers?

If I go back to my analogy of pipes, refinery, and use cases, our rebranding to what we call upcycling data is where it all comes together. It’s all about powering innovation, efficiency, and better outcomes across the ecosystem, but it fundamentally comes down to the data quality.

I once had the honor of being introduced as a speaker by Micky Tripathi before he took his role at ONC. Knowing how dirty and incomplete clinical data is, Micky introduced me as “the sewage treatment guy.” I laughed, but I took that as a badge of honor, like Mike Rowe in the series “Dirty Jobs” crawling through sewer pipes with rats on his head. Cleaning up this data, upcycling data, can indeed be a dirty job, but it’s so important. It’s not easy, but it’s so necessary to do it at scale. Turning all that potential from disparate sources into power to enable these downstream use cases is key.

What level of data exchange is happening between insurers and providers?

COVID has certainly put a highlight on that ability with life insurance, for example. Efficiently accessing and utilizing clinical data coming out of the EHR supports more cost-effective and timely underwriting. In a world of COVID, people literally could not go into healthcare settings to pull and scan charts. They realized that this was an opportunity. We’ve done some exciting work with Swiss Re, the world’s largest reinsurance company, which sees that not just as a US opportunity and challenge, but a global one. The data interoperability landscape is so exciting right now, but all these technologies are challenged by solving the big opportunities around the data.

But it’s also confusing. A lot of companies are describing capabilities using a lot of the same language. That’s where we wanted to come up with a different way of how to position and explain that. The pipes, as I call them, are going to continue to be more and more commoditized. FHIR will drive more and more ability to access data. The real challenge is in how to make it usable and actionable. That’s why we are excited by this notion of upcycling, because I think it can transform the industry by having that clean, precise, clear data to run these downstream use cases.

Much of the expense of healthcare is administrative, such as in prior authorizations where the clinician’s eyes on the screen and hand on pen or keyboard become the insurer’s EHR interface. Do you see the systems of providers and insurers being connected to meet each other’s needs electronically?

I do. Value-based care really is the only way forward, but you have to align the incentives and the risks. You have to accurately measure and quantify the outcomes that can be enabled with respect to access, quality, and cost. So we need to be really clear about what we mean by value and how we measure it. At the same time, as you look at this co-opetition of pay-viders, a new business paradigm that saves money and is more efficient for one cohort takes away the revenue and profitability of another. There’s always going to be an inherent aversion, in the short run, to changing from one business model to another. But in the long run, this journey is going to be Darwinian, in that individuals and organizations have to evolve or risk declining or going away altogether.

Should those who are holding useful healthcare data be paid to share it?

I think they should. That is what defines value. If you, as a payer or a provider, would otherwise have to spend hundreds of thousands of hours to clean up that data and make it actionable, then it is worth paying for the value that comes from that. This whole notion of clinical data optimization and enablement that can leverage today’s API architecture is really what is foundational to enabling these new use cases. But the devil is in the details, and it’s easy to talk about but so hard to do.

To make the data valuable so that people are willing to pay for it, you have to do a number of things. You have to semantically normalize the data to national standards. You have to enrich it with metadata to streamline analytics. You have to reorganize it so it can be found in the expected clinical sections of a document. Then, most importantly, you have to deduplicate it and summarize it back into that longitudinal comprehensive record that you mentioned.
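The deduplication step above can be sketched briefly. This is a hypothetical illustration, assuming records have already been semantically normalized to standard codes; the field names and "keep the richest copy" heuristic are assumptions, not the company's actual method.

```python
# Collapse repeated entries (same code, same date) from multiple source
# documents into one longitudinal list, keeping the most complete copy.
def deduplicate(entries: list[dict]) -> list[dict]:
    best: dict[tuple, dict] = {}
    for e in entries:
        key = (e["code"], e["date"])
        # Keep whichever duplicate carries more populated fields.
        if key not in best or len(e) > len(best[key]):
            best[key] = e
    return sorted(best.values(), key=lambda e: e["date"])

# Three source documents reporting the same two problems with varying detail:
entries = [
    {"code": "I50.9", "date": "2021-03-01"},                        # CHF, sparse copy
    {"code": "I50.9", "date": "2021-03-01", "status": "active"},    # CHF, richer copy
    {"code": "U07.1", "date": "2021-11-15", "status": "resolved"},  # COVID-19
]
summary = deduplicate(entries)
assert len(summary) == 2                    # two distinct problems survive
assert summary[0]["status"] == "active"     # the richer CHF copy was kept
```

Real deduplication is fuzzier than exact key matching (dates drift, codes vary), which is exactly why doing this at scale is hard.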

I’ve talked with so many clinicians and I’ve heard things like, “If you give me a 70-page CCD, it’s like 68 pages too long.” Or, “If you give me eight C-CDAs for a patient, I’m not going to look at any of them.” That’s where the value is going to come from. If you can save a busy doc time, then it’s worth it and I think people will pay for it.

I’m not a clinical informaticist, but I’d love to give you an example of why I think this can be so challenging and also so beneficial. Let’s say a patient shows up and their record indicates that they’ve been prescribed the brand name drug Vicodin. That could come across in either the machine-readable or the human-readable portion of the document. The first thing you need to do is recognize that the brand name Vicodin is a combination medication of acetaminophen and hydrocodone. Then, you need to map each ingredient to its respective RxNorm code.

This all gets back to prior auth and how you need the right data to make the right decisions. After that, you have to leverage clinical grouping standards to indicate that hydrocodone is an opioid agonist and map that to NDF-RT, the National Drug File – Reference Terminology. Finally, from there, you can add another meta-tag to indicate the severity of that medication in the case of hydrocodone, or Vicodin by transitivity. You can indicate that this medication is in fact a Schedule II controlled substance. All of this needs to happen through a transparent process.
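The Vicodin walkthrough above can be sketched as a small pipeline. This is an illustrative stand-in, not Diameter Health's implementation: the lookup dicts mimic what real RxNorm and NDF-RT terminology services would provide, and the RxNorm identifiers shown are for illustration.

```python
# Hypothetical stand-ins for real terminology services (RxNorm, NDF-RT, DEA):
BRAND_TO_INGREDIENTS = {"Vicodin": ["acetaminophen", "hydrocodone"]}
INGREDIENT_TO_RXNORM = {"acetaminophen": "161", "hydrocodone": "5489"}
INGREDIENT_TO_CLASS = {"hydrocodone": "opioid agonist"}   # NDF-RT-style class
CLASS_TO_SCHEDULE = {"opioid agonist": "Schedule II"}

def upcycle_medication(brand: str) -> list[dict]:
    """Expand a brand-name drug into enriched per-ingredient records:
    split into ingredients, code each one, classify it, tag its schedule."""
    out = []
    for ingredient in BRAND_TO_INGREDIENTS.get(brand, [brand]):
        drug_class = INGREDIENT_TO_CLASS.get(ingredient)
        out.append({
            "ingredient": ingredient,
            "rxnorm": INGREDIENT_TO_RXNORM.get(ingredient),
            "class": drug_class,
            "schedule": CLASS_TO_SCHEDULE.get(drug_class),
        })
    return out

records = upcycle_medication("Vicodin")
# One of the resulting records carries the opioid-agonist / Schedule II tags:
assert any(r["class"] == "opioid agonist" and r["schedule"] == "Schedule II"
           for r in records)
```

Each layer of metadata added here is what makes the downstream queries and prior-auth decisions possible.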

If you can do that while maintaining visibility and data provenance, you have so much power. For example, you can make a query from a single field in a given state or region say, “Show me everyone within that region, or across the state, that’s been prescribed an opioid.” You can do that from a single field by having that metadata layered on top. Not just doing it for drugs, but for allergies, labs, immunizations, vitals, procedures, and demographics. That’s the opportunity. That gets back to that second tier that Eric Schmidt spoke about to enable all these different downstream use cases and business models.
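Once that metadata is layered on, the single-field query described above becomes trivial. A hypothetical sketch, assuming medication records already carry a drug-class tag:

```python
# "Show me everyone in the region prescribed an opioid" becomes one filter
# on a class tag, rather than a search across hundreds of local drug codes.
patients = [
    {"id": "p1", "meds": [{"name": "hydrocodone", "class": "opioid agonist"}]},
    {"id": "p2", "meds": [{"name": "lisinopril", "class": "ACE inhibitor"}]},
    {"id": "p3", "meds": [{"name": "oxycodone", "class": "opioid agonist"}]},
]

def prescribed_opioid(patients: list[dict]) -> list[str]:
    """Return IDs of patients with any opioid-class medication."""
    return [p["id"] for p in patients
            if any("opioid" in (m.get("class") or "") for m in p["meds"])]

assert prescribed_opioid(patients) == ["p1", "p3"]
```

The same pattern extends to allergies, labs, immunizations, vitals, procedures, and demographics, as described above.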

How will the move to the cloud affect the possibilities?

It absolutely enables innovation and speed to value. It most certainly amplifies the network effect of propagating new knowledge and best practices. We are certainly seeing that across our customer base. I recall reading an interview you did not too long ago where one of your interviewees made the analogy that on-prem is like waterfall software development, whereas cloud is more agile and lean, creating minimally viable products. That’s where the cloud has been so exciting, knowing that it can be secure, HITRUST and HIPAA compliant, and people can access and share that data securely anywhere. In our case, all of our clients, except a few that require an on-prem environment, are in a hosted environment in the cloud.

Where do you see the company in the next few years?

There are a lot of interesting opportunities going forward. We’re going to continue to see a tremendous amount of data coming in at exponential rates. I like to look to the future by looking back, and I’ll just share with you what I think might be of interest to your readers. When John D’Amore and I co-founded this company, we had a common vision to address and focus on what we believe is the biggest barrier in healthcare, data quality and usability. We heard of a physician named Larry Weed, a professor from the University of Vermont Medical Center. There’s this incredible YouTube video of him presenting a grand rounds lecture at Emory University over 50 years ago.

Dr. Weed so eloquently spoke to how the patient record cannot be separated from the care of the patient. The record is the patient, and that is the practice of medicine. He goes on to say how patient care is intertwined with the record and how important the complete longitudinal record is in determining what the clinician does in the long run. So even 50 years ago, before the adoption of Meaningful Use and the proliferation of EHRs, Dr. Weed had the humility and the perceptiveness to recognize that the human mind simply can’t carry all that information without error.

He also made that cautionary prophetic statement that we’ll either be a victim of poor data quality or we’ll triumph because of it. As we look at the volume of data, two-plus years into a pandemic, this is a hauntingly accurate prophecy. Enabling data in the largest industry in our economy to be actionable, accessible, and organized has never been more important. We are super excited about what the future holds in terms of continuing to improve data quality.

There has never been a more exciting time to be immersed in this world of healthcare IT, and in particular, data quality, or as Micky would say, sewage treatment. It has been an exciting journey. Working with such a special team has been so rewarding. I’ve always believed that the greatest product an entrepreneur can create is other entrepreneurs and leaders. As a former rowing coach and rower, I would love to conclude with an analogy: I love being in this Diameter Health boat, being part of a crew that works so hard for a common goal. I can think of no goal more important than transforming healthcare and the ecosystem by enabling better healthcare with better data.

