
HIStalk Interviews Amy Andres

May 5, 2010


Amy Andres is chair of the Ohio Health Information Partnership. She was interviewed for HIStalk by Dr. Gregg Alexander.

You have a diverse background. What do you bring to the table for OHIP’s (Ohio Health Information Partnership) Health Information Exchange and Regional Extension Center projects?

I know that a lot of people refer to my background working in the health IT industry, both at Allscripts and for CVS ProCare, and I also did some work for software development companies.

Honestly, in this particular project, I think the area where I can be most helpful is my background and experience in the public sector: bringing together people who may have diverse agendas, or who may be in competitive or even adversarial situations, and helping them cooperate for something that’s for the common good.

I’ve had some experience with that, both at the Department of Education and also at the Department of Insurance. We have a lot of people with a lot of health IT experience at the table, and although I have it too, I think what I bring is helping everybody come together and see what long-term good can come out of this particular effort.

OHIP is a public/private partnership. Maybe you could explain that and give an elevator pitch on what OHIP does.

The thinking when this project kicked off was that there were two main funding streams from the ARRA funding. One was intended for states to apply for, to support constructing a health information exchange. The other was designed for the regional extension centers.

I think the way the feds thought about it originally was they would have this patchwork throughout the country. Not necessarily within state borders, but just throughout the country, there’d be a support system to help physicians adopt EHRs.

The way we thought about it is two-fold. One, it doesn’t seem like a great idea to have one group working on implementing the support mechanism for the physicians and another group building the system that they’ll be connecting to. It really made sense to bring all of those things together. The federal grant requirements allowed for the states to delegate the authority to apply for the HIE grant if they chose to do so.

What we did in Ohio is say, let’s reach out to the different stakeholder groups that truly are going to be the main participants in not only constructing this, but managing it long-term, and let’s all come together under one organization and do this together. For that reason, the Ohio Hospital Association, the Ohio State Medical Association, the Ohio Osteopathic Association, and the State of Ohio started in talks. BioOhio, which was already a non-profit entity and did some work in the space, also came to the table and offered to help us get started and form such a public/private partnership.

Within a few months’ time, we really pulled that together and had those five entities get started with things. Then, in the fall after we applied for the grants and it became clear that we were going to be receiving some form of funding, we expanded to a full 15-member board that includes payers, behavioral health, federally-qualified health centers … We have consumer advocacy perspective, hospital members, and more, just really trying to bring together a diverse group that could not only give us the perspective for decision-making, but really help pull their communities together along with this process.

Are the other Regional Extension Centers (RECs) across the country working similarly? If not, how do they differ?

We’re not completely unique, but pretty close. I’d say the closest organization to us is a group in New York. Other than that, you mostly have the RECs and the HIE grants being made separately. We have had some feedback from some of the other RECs that that’s already starting to cause them some problems.

We’re one of the largest RECs. In most cases, you didn’t have a whole state form as a group. One thing I will mention about the regional extension center side is that OHIP originally applied to cover the entire state of Ohio. So did an organization in the Cincinnati area called HealthBridge. HealthBridge covers the Cincinnati region, as well as part of Kentucky and a southeastern segment of Indiana. They took their existing marketplace, where they run an HIE and do REC-type services, and applied as well.

So what the feds ended up doing is they reduced our grant slightly and awarded HealthBridge as well. For Ohio, it was a good thing because we ended up with substantially more funding overall, though it requires some level of coordination between OHIP and HealthBridge, which is not a problem. We’ve known those folks for years, have worked with them for years, and on a weekly basis have calls to make sure that we’re staying on the same page.

That’s one aspect that’s a little different, but for the most part, having a whole state covered by one REC is not common. Having it coupled with the HIE, I think there’s only one other circumstance; I believe Wisconsin is also that way. Other than that, it’s split up.

Is the uniqueness that you mentioned one of the reasons you think OHIP received such a large chunk of the first-round funds?

I get that question a lot. Lots of people ask, “Who do you know in high places to receive this award?” I have to say this wasn’t a lobbying effort. The application really just stood on its own, on the merits of the model that we presented.

I do think it helped that the administration found, and the stakeholders on the physician side came together and agreed to use, some funding that was left over from a previous program to put up as a state match for the federal dollars in a time of a very tight budget. It was unheard of that entities would come up with that level of money for a match. I think it helped that we were showing we were committed to it as well.

I think the real reason the feds gave us such a strong award is that they see the merit in the model of having all of the stakeholders’ representative groups sitting on the board, and the level of involvement: not just rhetoric, but actually, truly becoming involved. I think the feds recognize this is a model that could work and be propagated throughout the country, and they made a decision to invest in this model to see if it works.

EHR adoption and use timetables are exquisitely fast — very accelerated. Do you think that’s going to increase the odds of making bad decisions or failed implementations as the RECs across the country try to roll this stuff out?

There’s no doubt about it. It’s an extremely aggressive timetable. So aggressive, in fact, that some RECs … There’s definitely been some feedback and folks asking to adjust the timetables.

Here’s what personally I’ve observed in working with folks at the federal level. The interest to adjust timetables is not there. That’s going to stay, but what they have done is absolutely worked with us to try to remove the barriers that are getting in the way of getting there.

Although there was a lot of consternation, especially when everybody recognized at the same time, while we were in Washington, that the timetable for this was really two years, not four years, I have to say that all of our board members — our initial five board members were there — didn’t have the same heart attack that some of the other folks had, because we know our model. If any model’s going to get us there in this time period, it’s the one that we have.

Concerns over hasty decisions? Yes. When you speed up a project like this, that’s always a concern because you don’t have the time to run down every possibility and mitigate every single risk to a successful project. When you’re in that situation, I think what you have to be open to is making adjustments once you recognize that perhaps a path you were heading down may not have been the perfect path, and be willing to make adjustments as you go.

I think the other thing that’s key when you’re on this type of time period is to be really open and transparent with everybody about the risks of moving at this speed and establish trust with everybody so that when they see that maybe we made a decision that is not helpful in the process, that we’re willing to admit, yep, a change needs to be made and everybody moves on. I think that when you’re working at this pace, everybody’s got to be open and honest with each other and be willing to make adjustments when we realize they need to be made.

Some have expressed concerns that the RECs are not going to be transparent about how they’re making their decisions for choosing their partners, perhaps leaving some EHR vendors to be shut out. How do you address those concerns?

In our particular REC, our situation, we’re using a competitive process. As a matter of fact, that competitive process is going on right now. We’ve just released an RFP for preferred EHR vendors. We don’t know exactly how many we’re going to select, but we do know it will be more than three and probably less than ten. What we’re trying to get to is allowing for a manageable implementation and pricing that’s attractive for physicians right now.

Probably even most importantly, we’re looking for a commitment from vendors to Ohio. Right now, these EHR vendors, I’m sure, are expressing these concerns. They also have a market of the entire country that they’re trying to grab right now. As a group that has responsibility to make sure that this project doesn’t fall apart, we need to know that they’re not going to overextend themselves in our market, and that they’re going to be here. Once they get started here, they need to finish the job here and really be around to support it long-term.

It’s important for us that we work with vendors that are willing to make a commitment. We’re going to hold up our end of the bargain and do some things to support their efforts as well. There will be, absolutely — and there is already underway — a competitive process and several competent individuals scoring those responses to make our selection. If you’re an EHR vendor and you want to operate in Ohio and you’re not one of the preferreds, you’ll still absolutely be able to operate in this market so long as you meet the ONC certification standards. But we feel it’s important to use a competitive process to select a group of vendors that are willing to make a commitment to Ohio.

Are you saying the selection process is transparent?

Oh, absolutely. Even though we’re not a state entity, even in the state system — which probably has a very high degree of transparency — while the actual competition is going on, that information is closed, because releasing it during the competitive process would give people an unfair competitive advantage. But after the process is completed, all of that information will be made public.

Will there be enough qualified people to help with the implementation, support, and training for all these REC projects? What kind of employees are you going to need with what skill sets and where do you think you’re going to find all these folks?

I have to tell you, of everything that is happening within this project, that’s the thing that keeps me awake at night the most. The federal government awards grants to help with that over the long term, and in this project, long term means three or four years out. That will be wonderful for long-term sustainability of the workforce, but the problem we have is that the mechanism they contemplated to implement it through the two-year and four-year colleges does not produce a workforce when we need it, which is during this two-year push. We’re going to need it long term, but we really need some of those individuals right now.

When we were in Washington, it became very clear that the timing of that was going to be a problem. So when we got back to Columbus, the first phone call I made was to the Department of Development and the Board of Regents to see if we couldn’t put together a program for Ohio over the summer to produce, at least, the workforce that’s needed for implementation right now. We met with those folks, as well as a federal program that runs through Job and Family Services called the One-Stops. It’s a retraining program.

We’ve got a full team of people from each of the regional partners, from all of the two-year colleges in the state, the Board of Regents, the Department of Development, and the One-Stops. We’re putting together a very intense summer program to train individuals to do the office assessment and workflow support. Then, those individuals will either be employed by the regional partners — the regional entities that are part of our REC — or, they’ll be employed by the vendors. But, we know we need to create that workforce in Ohio. There’s some of that workforce, but not enough to get this job done and it’s a country-wide problem.

As we’re speaking about this, the other thing that we are contemplating is that we don’t want the EHR vendors coming in here bringing people from where they’re headquartered. We really want the workforce in Ohio to be Ohioans, people who stay here and support this long-term effort as systems are implemented. As part of our EHR process, when we’re talking about vendors to partner with, one of those requirements would be that they’re hiring Ohioans to do this work. Our role in this is to make sure that there are competent Ohioans to hire for this process.

Every aspect of this project is truly going to have to be a partnership, with everybody holding up their end of the bargain. I do, personally, see a lot of jobs being created out of this project. It’s not really something that’s talked about a lot compared to a lot of the other stimulus programs; what’s talked about more is the tight timelines and bringing health information exchange infrastructure and EHR adoption up to speed. But out of all of this, jobs will absolutely be created. We just want to make sure that those jobs go to Ohioans.

A common theme within OHIP is the discussion of community. Why do you see that as being important, and how is the OHIP model addressing that approach?

I think that the OHIP model itself is the epitome of establishing a community around this.

Yesterday I had a speaking engagement with HIMSS. The discussion ended up turning into an hour of questions and answers, in a good way. People were very engaged. They were very excited.

I was there for another hour afterwards just answering individual questions and talking to folks. One woman said to me, “You know, this reminds me of a movement.” She’s like, “This is like you’ve got people coming out of the woodwork looking to volunteer the time and pitch in.” She said, “This truly has the makings of a movement.” When she said that I was thinking to myself, she’s absolutely right.

This is a situation where a lot of people who have wanted this to happen for quite some time see that if this is going to happen, this is it. This is our chance. People on a macro level across Ohio are coming together. What I think we need to make sure happens from this point is that same level of grassroots movement starts to propagate at the individual, local communities level. I think that that is the key to getting this done in not only an aggressive time period, but with less money than truly is needed to ultimately implement this thing. We have to contemplate a different model than the model that’s been used up to this point that, frankly, hasn’t been able to get us there.

The model that not only I, but several individuals working within OHIP, believe in is getting that community level of involvement: getting physicians within their community working together on this and leaning on each other. Bringing together groups of single practices as a cohort, working through this together, makes it more cost-effective for us to support that effort. But even more importantly, it gives them a peer group to work with as they’re working through their own problems; certainly they can identify with each other going through this at the same time. We absolutely think that’s going to be the key to success in this project.

The next step is really bringing those communities together and helping them not only understand where we’re going with this, but understand that there’s support to help their community.

Are there any other points you’d like to bring up?

I guess just the final point, and perhaps I have spoken about it throughout this discussion, but this is one of those situations you don’t see very often: people who normally are either very strong competitors or have very different positions on how they see the world, how the healthcare system should work, or how health information technology should work, all coming together. Not just rhetoric, not just the way that they’re speaking to each other, but their actions truly showing that this is a partnership.

I’d say in my 20-plus-year career, I have never seen anything like this. It’s quite an honor to be involved and to be participating in this. I think a lot of others feel that way, and I think that’s what’s going to bring us to the dedication that’s needed to get this monumental task done on what is a very aggressive timeline. It’s just a pleasure working with folks on this project.

HIStalk Interviews Arien Malec

April 28, 2010

Arien Malec is coordinator for the NHIN Direct project of the Office of the National Coordinator.


Give me a basic overview of NHIN Direct.

NHIN Direct is a project to expand the set of services that are available on the NHIN, but to expand them in a way that is accessible to the majority of providers. In particular, the majority of primary care providers practice in groups of five or fewer. The lingua franca, the interchange medium for health information for those providers, is currently the fax. The major aim of this project is to create a set of standards that enable those providers to essentially replace the fax with electronic forms of interchange.

There’s really nothing new in the kind of health information exchange that we’re trying to do. We’re not trying to break new ground so much as standardize existing ground. A lot of HIOs get their start in provider-to-provider or lab-to-provider direct communication. Essentially, what we’re trying to do is standardize that, make it easier to plug EHRs into exchanges, and make it easier for HIOs to develop standard services for that kind of direct communication.

I’d also note that this level of direct communication aligns very well with the criteria for Meaningful Use, particularly the requirements to exchange information at transitions in care, to receive lab data electronically, and to provide electronic information to patients.

How would you characterize the differences between NHIN and NHIN Direct in terms of who will use them and for what purpose?

I’m going to carefully separate NHIN, as in the NHIN Exchange, from NHIN, as in the set of standards and services that are available. There’s some confusion about what’s what.

We define the NHIN Exchange as the network of networks: the network in the middle, with standards that enable large, national health information organizations to exchange data with each other. A great example of where the NHIN Exchange would be useful is in coordination of care between a provider who’s using a state HIO and a patient treated in the VA system or in the DoD system.

All three of those organizations are, essentially, extraordinarily large IDNs. They are nationwide health information organizations because they cross and transcend state boundaries. That’s the core use case for the NHIN Exchange — coordination of care and information discovery across large, nationwide health information organizations. The core standards in use are common standards that can also be deployed within an HIO context, so if I wanted to discover where else a patient has been and what information is available about that patient, I would use the core NHIN services. They’re essentially the IHE interoperability stacks, particularly XDS and XCA.

The way that I describe it, I’m going to paint two pictures. Picture one says that I’m a provider who is in an exchange that offers both services and I’m referring to a provider who only gets the simpler kinds of services, the direct services. As a provider with access to both services, when a patient presents, I may do a query to find out where that patient has been seen since the last time I saw them, and discover information with the patient’s consent that helps me inform the care of the patient.

Then at the end of that encounter, I might publish the updated physician information into the repository in the sky for future care providers to discover information. Those are great uses of the NHIN specifications and services.

Then, at the conclusion of that encounter, I want to refer the patient over for care. Let’s say it’s for care that isn’t served on the same EHR, where I can’t rely on the EHR’s capabilities to have the chart available. So I want to push a referral transaction over to, let’s say, the cardiologist. Then at the conclusion of the cardiologist’s care, I really want them to push me an update to what happened to the patient.

That transaction, by its very nature, just doesn’t fit the “publish something in the sky and then grab something from the sky” model. I mean, you could do it that way, but the semantics of that transfer are directional. I want to give the referral over to that provider and that provider expects to receive it in his or her inbox. Same thing for a lab. You might publish the lab to a lab repository in the sky so that all people can have access to it, but the ordering provider wants to get that lab result in his or her EHR directly as well. So you’ve got both publish semantics and push-to-provider semantics.

Pretty much all we’re about at the NHIN Direct project is creating the standards, the specifications, for that push-to-address case in ways that allow an HIO or a lighter-weight organization to provide an address for a provider or for a patient, and to route a transaction to that address. So, there’s a lot to it.
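The publish-and-discover versus push-to-address distinction Malec describes can be sketched with a toy in-memory model. This is purely illustrative; the patient IDs, addresses, and function names below are invented, and the real NHIN specifications define transports and document formats, not a Python API.

```python
from collections import defaultdict

# A shared repository: the "repository in the sky" that future treating
# providers can query, with the patient's consent.
repository = {}

# Per-address inboxes: a pushed message has one intended recipient,
# like a fax arriving at a specific office.
inboxes = defaultdict(list)

def publish(patient_id, document):
    """Publish-and-discover semantics: file the document under the
    patient so later providers can look it up."""
    repository.setdefault(patient_id, []).append(document)

def push(address, document):
    """Push-to-address semantics: deliver the document directly to one
    named endpoint, e.g. the cardiologist's inbox."""
    inboxes[address].append(document)

# The referral workflow from the interview: the primary care provider
# publishes the updated summary for future discovery AND pushes the
# referral directly to the specialist.
publish("patient-123", "updated care summary")
push("dr.cardio@hio.example.org", "referral for patient-123")
```

The point of the model is that the two operations have different semantics: `publish` has no recipient and relies on later discovery, while `push` is directional, which is why the interview calls it a replacement for the fax.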

Many of the HIEs created their business models around charging for that type of service. Will they use some aspect of NHIN Direct or is this a replacement or a competitor for it?

A lot of HIOs get started with simple direct services, I think for very good reasons: it drives a lot of business value. Nothing that NHIN Direct is doing should, or does, conflict with that. NHIN Direct will, hopefully, make those services easier to deploy because there will be a set of standards around them, and EHRs, hopefully, will have those standards embedded, so it will be easier to get services up and running.

Now if your business model is, “Well, this stuff is hard, and so our business model is to do it because nobody else can and we don’t want any competition and anything that makes it easier to do is a threat to our business model,” then sure, it could be a threat to the business model. I don’t believe that. I believe that making it easier, making it more scalable, actually makes it easier to offer those services at a profit for exchange sustainability.

As I said, if you look at the example of successful HIOs, they pretty much all solved this problem at the cost of some blood early on, and they’re able to offer these services. NHIN Direct is going to give them a way of scaling that service offering, but I don’t think they see it as a threat to their business. Again, if you look at the HIOs that are up and running and doing well, I don’t think any of them are scared by NHIN Direct. In fact, I think they see this as something that makes their work easier to do.

What about those EHR vendors that have their own exchanges?

A lot of the EHR vendors are participating in this effort. You can go to the NHINDirect.org website and look at the current members of the implementation group, and you’ll see a number of the leading EHR vendors there. I can’t speak for them, but if you look at the strategic situation, I think many of them would like to offer a set of value-added services on top of their EHRs for simple connectivity.

Many of them are in a context where, if you look at the state of HIT in the United States, very few providers operate in a service area where it’s all one vendor and where you can mandate and lock down a single-vendor model. So, many of these EHR vendors have customers — oftentimes large health systems — who are asking them to enable interoperability within their products, but also across other products.

I think many of these EHR vendors see this as a way to fulfill their customers’ business needs in a way that is standard, and allows them to offer standardized services. I think the EHR vendors, by and large, have looked at this as an opportunity much more than they look at it as a threat.

John Halamka likes the idea of a health URL where individual data can be pushed. Would this support that, or is anybody working on that?

Absolutely. The notion of an address that you can route information to is a core principle of the NHIN Direct project. In fact, John’s recent blog post describes the work of the addressing working group in NHIN Direct. He’s a participant of the implementation group and he references, explicitly, the health URL concept in the context of what we’re trying to do.

What about privacy and security?

I’m going to back up. If you look at the record locator kinds of transactions — where has this patient been, what information is available about this patient — those are the transactions for which specifications and standards currently exist. There is a significant set of policy issues around that because the information holder is receiving a transaction basically requesting information and needs to decide, on the fly, whether that’s an appropriate information request, and whether the PHI disclosure that’s associated with that is proper and legal. Any of those systems that are up and running have put in place consent models and put in place policy models that ensure that data is only provided when it’s legally appropriate to.

In the set of push transactions that NHIN Direct is all about, the information holder and the initiator of the transaction are one and the same person or organization. The best way to think about the NHIN Direct kinds of transactions is that the data are going to flow regardless. I’m going to send the summary of care to the provider via fax. I’m going to send it via paper. I’d love to be able to send it electronically.

The legal responsibility is pretty clear for this. It’s the information holder’s responsibility to determine whether the disclosure that they’re making is appropriate. Appropriate is defined by any of the HIPAA exemptions, as well as by explicitly getting patient consent to do the transaction.

What we need to make sure of in the transactions and in the policy framework around health information exchange is that if there is a disclosure along the way, that we know exactly where that disclosure originated from, we know who the legal entity responsible for the disclosure was, and also that we protect the health information and make it secure all along the way so it doesn’t inadvertently get exposed. We’ve got a privacy and trust working group that’s focused on those exact issues.

I think John’s post mentioned that it will be the same framework that’s used by the full-scale NHIN, not a lightweight version.

Exactly, so we’re going to be using TLS on both ends. We’re going to be ensuring that all the data are encrypted in transit. We would recommend that HIOs encrypt it at rest as well, and ensure that they’ve got the appropriate security policies.

The other part of this is that we’re just doing the transaction semantics, the specification. Somebody’s got to take those specifications and run them. The organizations that run them need to run those transactions within a policy framework, and that policy framework needs to have much more in it than just transaction-level security. You absolutely have to encrypt the data in transit, but then you also have to make sure the exchange has security policies in place, does security audits and remediation, has good quality assurance policies in place, and has good operational controls in place. You’ve got to secure the entire system, not just the transactions.
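As a concrete illustration only, here is roughly what the "TLS on both ends" requirement looks like for a sending system, sketched with Python's standard `ssl` module. The certificate-file parameters are placeholders; the interview predates the final Direct specifications, and a real implementation would follow the exchange's own trust framework.

```python
import ssl

def make_transport_context(ca_file=None, cert_file=None, key_file=None):
    """Build a client-side TLS context for a Direct-style push.

    ca_file supplies the trust anchors for the receiving organization
    (defaults to the system store); cert_file/key_file present this
    sender's own certificate when the exchange requires mutual TLS,
    i.e. both endpoints authenticate each other.
    """
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    ctx.check_hostname = True  # verify the receiver's certificate matches its name
    if cert_file:
        ctx.load_cert_chain(cert_file, key_file)  # authenticate ourselves too
    return ctx

ctx = make_transport_context()
```

Note that this covers only transport security; as the interview stresses, encryption at rest, audits, and operational controls sit outside the transaction itself and belong to the operating organization's policy framework.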

There’s a lot of policy work to be done. We’re closely coordinating the technology work that we’re doing with the policy work that’s being done, both at ONC as well as within the NHIN workgroup and the HIT Policy Committee.

Maybe you can expand on that thought because I’m not sure I understand. What you have is a set of policies and practices, but someone has to actually run it.

Exactly. The metaphor that I’ve used is cake. Cake is good stuff. You want to eat cake. Cake adds value.

We’re not making cakes in the NHIN Direct project. Somebody’s got to run a bakery to bake the cake. What we’re doing in the NHIN Direct project is creating a recipe for cake, making sure that recipe is well-tested, and making sure it works across a variety of settings. You can use a small bakery or a big bakery to make your cake, and the cake’s going to taste just as good regardless of where you bake it.

But, as an organization, our project is to create a recipe. You’re not going to get any cake from the NHIN Direct project. You’ve got to get your cake from a bakery.

Is there any centrally hosted infrastructure or services?

Not so far as we’ve discussed. There has been some thinking — which we’re still going to need to explore — that there are a couple of potential services the federal government may end up hosting. One might be a central certification body, as well as a certificate authority, to make sure that people who operate on the exchange are carrying correct policy frameworks. That’s one potential role for the federal government.

They are, essentially, assuring trust. That’s a role that the federal government’s already taking on and is actually legally responsible to take on with respect to the NHIN Exchange, to the extent that the NHIN Direct services get incorporated into the NHIN Exchange. The federal government and ONC have a legal responsibility to create a policy framework for that. That’s one role that the federal government could play.

There are other potential roles the federal government could play, particularly around using some of the information that we have around NPI, as well as the fact that CMS is going to be paying a lot of providers for Meaningful Use, as a way of making the directory services that people might offer more valuable. But we have yet to explore or decide on those capabilities.

By and large, the NHIN Direct project will exit with a recipe and not so much with infrastructure.

Do you think the EMR products that are out there will be ready to share data once the platform is available?

As with everything else in software, there’s a software development life cycle and a set roadmap of capabilities. What I’m encouraged by is that so many of the EHR vendors are participating in the project and have committed to doing real-world implementations. Not necessarily full-scale, real-world implementations, but real-world implementations nonetheless. That encourages me that by 2011 we’ll have broader-scale exchange capabilities to support providers.

What this is all about is supporting providers, both in terms of their obligations to get the money for Meaningful Use, as well as supporting providers and patients in the quality and efficiency goals set out in the HITECH Act. My hope is that, given the participation we’ve got, we’ll get a good amount of support for providers in 2011.

How would you turn all this technology concept into something that patients would understand? What would you say the outcome would be and when will they begin to see it?

As a patient, what we would hope to see is that a patient has interoperable access. Again, I think John Halamka’s posts on the health Internet address, what he calls the health URL, are as good a place as any to start in understanding what this is all about. As a patient, I should be able to get a health Internet address. I should be able to give that health Internet address to my provider and say, “Hey, I want my information posted here.” The provider should say, “OK, no problem. I’ve got all the capabilities for doing that.”

As for when that will happen, I expect it will be in essentially limited operations by the end of this year. I would expect us to be in wider-scale operation by the end of next year. The way that I would judge this project a success would be the number of providers who’ve got an address.

The other side’s the patient experience. That when I get referred over for care, get treated by a specialist, and then go back over to primary care, that the thing that I expect to happen — which is that specialist knows why I’m there and knows my health information necessary for treating me and that my primary care provider knows what happened when I went to the specialist — that all that exchange has happened behind the scenes with my consent, appropriately.

I think those two outcomes would be the way that I would judge the success of this project. My belief and hope is that we’ll have a decent amount of availability to service it by the end of 2011, rolling on to wider scalability in 2011-2012.

What also makes me feel good about this is that there are a lot of organizations that can do parts of this. Really, all we’re doing is taking the best practices a lot of these organizations have developed and saying, OK, that’s great. We know how to do it, even at scale. What we don’t know how to do is do it interoperably, so that you can share information between systems. So let’s focus on that.

Any final thoughts?

I think the main one is the fact that we’re not hosting, we’re not running any services. That’s the thing people get extraordinarily confused by, and understanding it is really useful.

Another common question that comes up is, “What are you doing about content?” The project itself is focused on transport, but we’re working with all the other efforts around content to make sure that the payloads people exchange are interoperable payloads, along with all the good work in the IFR that helps us constrain down to CCR and CCD, and also constrain down to terminology. We’re relying on that work getting better and more stringent over time so that we can share information and also understand the payloads.

HIStalk Interviews Pam McNutt

April 19, 2010

Pam McNutt is senior vice president and CIO at Methodist Health System of Dallas, TX.


What would you say are some of the good and bad points in the proposed Meaningful Use criteria?

Well, let’s start with the good. What’s good about this whole HITECH legislation, I think, is that HIEs, or Health Information Exchanges, are going to be planned out at the state or regional level in a little more solid form than they have in the past. I think that’s good. Prior to that, we had HIEs being formed in regions, in cities, and sometimes even multiple HIEs being developed inside a single metropolitan area. Now with the grants, the states will be putting some thought and planning into what their state’s health exchanges will look like. It will bring some order to things.

Now, what’s going to be difficult about the regulations that came out: I think there are a few main points there. First is the all-or-nothing approach laid out by the Meaningful Use regulations. Both AHA and CHIME have commented pretty strongly on that; we think it should be more of a building-block approach, rather than all-or-nothing. Meaning that if you can achieve a certain number of the objectives laid out, and that number could be debatable, then you could be deemed to have achieved Meaningful Use, rather than having to do every single objective and every single quality measure. We’re very hopeful that will be given serious consideration by CMS.

What changes do you think will be incorporated from all the thousands of comments made?

I think we’ve already seen something occurring. The issue is about eligible providers. In many cases, outpatient clinics of hospitals were excluded under the current definition. My understanding is that legislation has now passed both the House and Senate that fixes that problem and should make physicians who practice in outpatient clinics of hospitals eligible for the stimulus funding. So, we’re already seeing that change.

I think we will also see some changes in the quality reporting requirements. Asking providers to install and use systems that electronically calculate all of the quality assurance measures is asking too much. CMS has been asked to hold off on that requirement until they’re ready to accept and process it in that fashion. I’m hopeful that we’ll see some relief on that front. I think those are the biggies.

Then the third item that I think everyone’s concerned about is the basic timing compression that’s going on here due to the delay in spelling out the certification process. Since providers must use certified records, that’s kind of your entry point to even be considered for stimulus funding. We’re going to need to have our system certified, potentially, as early as October 2010 for hospitals, January 2011 for providers — eligible professionals, physicians. Yet the certification rules and process will not be finalized until perhaps as late as June of this year.

That’s going to make it very difficult for hospitals and physicians, who may have to upgrade their systems if their vendor requires that to comply, or to get the certifications from their vendors. This also puts a lot of pressure on our vendors; having to go back and get their products re-certified, if you will, through a process that’s going to be different from the previously required CCHIT certification process. This presents some real challenges in timing.

What are your impressions about the proposed rule for creating the certification bodies through the EHR certification and testing?

I’m working with the CHIME Policy Steering Committee. I’m actually the Chair of the CHIME Policy Steering Committee, as well as serving on the American Hospital Association IT Advisory Committee. In both cases, these committees are concerned with any provision in the certification process that would drive more providers — hospitals or physicians — to have to go and obtain certification for their portfolio of systems on their own, rather than being able to rely on vendor certification.

In particular, there’s language in the NPRM on what constitutes a self-developed system. We are going to be commenting on that and asking that it be more precisely defined, such that a provider’s minor modifications or enhancements to a certified system don’t throw them into the category of self-developed.

The fact that they’re opening it up to other entities — is that a step in the right direction?

It’s hard to say. There have been criticisms of CCHIT, and there have also been kudos for CCHIT. It’s difficult to say whether, in the long run, introducing more competition into the certification process is a good thing or a bad thing. They are adding more rigor in the permanent certification process that’s being proposed, though: you go through certification, then you have to go through testing, and then there’s the introduction of the concept of a certification body needing to do field surveillance to make sure the product is actually being used in the field as it was intended. These three components together could make the certification process quite complex in the future.

Given all these questions that are still out there, do you think vendors and providers will be ready?

I would be very surprised to see any provider, hospital or physician, qualifying much sooner than perhaps this time next year. I’d be very surprised.

I hear many of my colleagues say that they will not qualify for Stage One stimulus funds until 2012 or 2013, toward the tail end of when Stage One is still in effect. I think that’s going to be pretty common.

What seem to be the biggest hang-ups?

I think it just depends on where the provider or hospital is right now with their IT implementation plan. Many people are just starting an implementation of a larger integrated solution. Some people have some pieces firmly in place. We ourselves have many pieces of an integrated electronic record system in place, but we have implemented the modules in a different order than the Meaningful Use criteria dictate.

We’re having to change our strategic IT plan to, for instance, elevate CPOE to something that has to be done within a year, and perhaps drop some of the other plans we already had to expand nursing or OR documentation into other areas of our operations. We’ve had to switch our priorities because of this. Any time you switch priorities, it takes some time to start up and get that project going. I think that’s what people are challenged by.

Will you be ready in time?

I really believe that in our organization, we will be able to achieve Stage One. Now whether it’s next year at this time, or whether it’s a little bit later, is not totally clear to me at this point. But I do believe we will obtain it within the next two years.

How do you see healthcare reform and the ARRA legislation? Are they competing with one another, or do they actually complement one another?

We were just talking about that exact topic this morning in a meeting. I think we have three different initiatives out there right now that are on a collision course. Those initiatives are this rush to adopt electronic health records by 2015, the ICD-10 conversion that’s to occur in 2013, and then the IT implications of healthcare reform. Specifically, in regards to the new reimbursement models, such as bundling episodes of care and accountable care organizations. You have all three of those pretty much converging at the same time, and they all have IT implications.

I do believe that some of the things that are being done in the HITECH Act to bring about standardized adoption of electronic health records could help with the new reimbursement models. However, there’s so much more needed to do that than what’s in the HITECH Act. So while they are complementary to some degree, trying to do them all at the same time has me very concerned, especially given that we have heard statistics of a shortage of over 50,000 people across the healthcare IT industry.

It makes you wonder: where are we going to get the human resources? Even if money were no object, where would you get the human resources to do all the work? Plus, we also know that introducing too much change into your HIT applications and infrastructure can cause instability in their operation. That’s concerning: introducing too much change at once, not to mention all the process flow and workflow redesign that needs to occur for healthcare IT to be used effectively.

What, in general, do you think Meditech hospitals are going to have to do to get ready for Meaningful Use?

I think, again, it depends on where they are in their software upgrade cycle. For us and for many others that were staying current with the MEDITECH products (I am on the most current release level, which is 5.6), we will be able to get the Meaningful Use criteria delivered from Meditech.

For other hospitals that have not upgraded recently, I think it’s going to be more difficult. It’s not just the Meditech hospitals. Really, you look at any vendor. The queues for people wanting upgrades are very long. We’re hearing 12-18 months just to get a project started. This isn’t just with Meditech. This is with any vendor, because they’re being overwhelmed by upgrade requests.

I think that in the long run, Meditech hospitals will be in good shape; that Meditech is going to see us through these Meaningful Use requirements. But it’s not going to be easy, especially when it comes to the quality metric reporting. This isn’t a technology issue, this is, in some cases, a workflow process issue back at your organization. You can have all the fields in the software that you need to populate to produce a metric, but how are you going to ensure that those data points are collected? How are you going to instill that discipline in your workforce?

What were some of the conclusions that you drew from the HIMSS conference?

As many have probably observed, the conference was largely about Meaningful Use. One conclusion that I drew personally was that data analytics are going to be incredibly important over the next 3-4 years.

On top of all these other things we have to do to meet Meaningful Use internally, we are going to have to start, if we haven’t already, to dive deeply into our clinical data to understand it and engage in the creation of dashboards, alerts, and other things that keep progress toward achieving our quality metrics at the forefront of everyone’s attention in the organization. That is going to require, I think, some very sophisticated data analytic tools.

That was probably my big takeaway besides Meaningful Use, Meaningful Use, Meaningful Use.

What are your biggest challenges and most important strategies at this moment?

For Methodist, we decided on three things that we’re very actively pursuing. One, no surprise after what I just said, is really looking at our quality data analytics. Two is ramping up to do CPOE with our hospitalist group as our pilot. Of course, that is to meet Meaningful Use. The third really important strategy for Methodist is to reach out to our physicians and help them achieve Meaningful Use by offering them a hosted electronic record solution. We have ramped up in a very big way to be able to offer that to our physicians. We are hosting NextGen for our affiliated physicians in the community and are gearing up to have 100 or more on it within a year. We’re growing very rapidly.

I would say those are our three main strategic IT initiatives. Throw on top of that that we are building new hospitals, and we are completely integrating one that we acquired last year and converting it to our HIT structure.

I don’t think that’s that unusual. A lot of people have all the challenges we’ve already been talking about, but on top of that, I think many larger organizations across the country are being approached, as Methodist was, by standalone hospitals that are looking and saying, “I can’t navigate through all the complexity that’s coming at me. I need to partner with somebody that can quickly bring me solutions.” I don’t think I’m unique in having that challenge on top of all the other ones.

HIStalk Interviews Sanjaya Kumar

April 14, 2010

Sanjaya Kumar, MD, MPH is president, CEO, and CMO of Quantros.


Can you give me a two-minute summary of hospital-based pay-for-performance programs and how your applications help manage them?

It’s interesting that you say hospital-based pay-for-performance programs, because I think the overall industry graduated into pay-for-performance from managed care all the way down to the physicians, who really have to work within those programs. The hospitals are definitely on the fringes of that, but firmly, in my opinion, pay-for-performance is now moving into pay-for-results and pay-for-better-outcomes.

The way that our applications support those needs for the hospitals participating in those programs is that we allow ready capture and aggregation of all of that data, either from secondary data sources or input directly into the applications, for the different metrics that are supposed to be reported for the pay-for-performance programs from either CMS or the Joint Commission. Primarily CMS. That data is reported quarterly.

The added benefit for our clients is that they can review all of that data in near real time, as well as compare and benchmark themselves with other hospitals in a blinded fashion, very readily. That’s one of the added values that we provide, so even before they’re able to see their results publicly reported out there for pay-for-performance, they can gauge and see how well they’re doing in comparison with others.
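The blinded comparison described above can be sketched in a few lines: a hospital sees its own metric and percentile against an anonymized distribution of peer scores. This is an illustrative sketch only; the hospital names, scores, and function shape are hypothetical, not Quantros’ actual product.

```python
# Illustrative sketch of blinded benchmarking: each hospital sees its
# own metric and percentile against an anonymized peer distribution.
# All names and scores here are made up.

def blinded_benchmark(scores, me):
    """Return (my score, my percentile, sorted anonymous peer scores)."""
    my_score = scores[me]
    below = sum(1 for s in scores.values() if s < my_score)
    percentile = round(100 * below / len(scores))
    # Peers are exposed only as a sorted distribution, never by name
    return my_score, percentile, sorted(scores.values())

scores = {"Hospital A": 0.82, "Hospital B": 0.91,
          "Hospital C": 0.77, "Hospital D": 0.88}
mine, pct, distribution = blinded_benchmark(scores, "Hospital B")
print(mine, pct, distribution)  # 0.91 75 [0.77, 0.82, 0.88, 0.91]
```

The key design point the interview describes is that the comparison is blinded: the caller learns where it stands in the distribution but never which peer produced which score.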

What impact do you think the publicly reported hospital quality and outcomes measures will have on the industry?

In the way that the information currently is made available, I think the impact is relatively slow in terms of its uptake. However, what it is doing is increasing the degree of transparency around the reporting of all of these metrics and measures, and educating the consumer of healthcare about their importance and their significance. The metrics aside, I think the overall programs are helping to influence consumers in the way that they seek care, or where they seek care from.

I think it’s going to continue to shape the industry in terms of the transparency that needs to be there. It’s going to drive and motivate more and more hospitals to disclose some or all of their results to their local constituents and local marketplaces in order to influence the purchasing behavior of either employer groups or health plans, in terms of how they’re contracting with them or where consumers are seeking care.

I think it will have a bigger influence, perhaps five years down the line, as this data begins to get much more attention and begins to be publicized that much more. Today, the healthcare consumer is still very naïve about some or all of this.

Do you think they know the data’s out there and just don’t have much of a choice of where they go? Or, do you think they’re really unaware that there even is such a thing?

To a large extent, I think the awareness is coming about, but I think it’s slow on the uptake and slow on the marketing of some or all of this information. The government could perhaps do more in terms of highlighting the availability of the data and where it is made available from. More has to be done. I think more people need to get onto the bandwagon in terms of shaping consumer attention to this.

Several years ago, the MedMARx database was used to generate a report suggesting that CPOE was creating quite a few medication-related errors. What are you seeing now from the database and with the FDA’s apparent interest in overseeing the whole safety issue with electronic health records, how do you see that playing out?

As you perhaps know from the background, we acquired the MedMARx database about two years ago from the USP. We now manage that data registry. It’s got over 2 million records on medication errors and adverse drug reactions.

The same patterns that used to be there are still very evident, although one interesting pattern has been emerging. We did an analysis for somebody who was writing an article in The Wall Street Journal a few months ago that indicated that errors implicating CPOE systems definitely do occur, but the more serious types of events are being averted.

I mean that incidents that lead to major or fatal harm are on the decline with the utilization of CPOE, although CPOE is still leading to a number of errors within the medication error profiles.

I think a real interesting conundrum here is that there are multiple vendors for CPOE systems, and I think CPOE systems need to be very carefully evaluated. I know that the Leapfrog Group, for example, has a CPOE evaluation tool that is made available to hospitals, for example, if they’re participating in the Leapfrog survey. I don’t know how much you know about the Leapfrog Group, but they’re all about transparency for safety and quality, in terms of practices that are out there and how safe institutions are.

I think the FDA needs to look at tools like that very, very collaboratively, in terms of really encouraging the use of those tools to identify CPOE solutions that are right, that are fit, that are actually ready for use within point-of-care environment. There are a lot of CPOE systems out there that are not necessarily right that actually are leading to a lot of errors, or that are not catching errors, necessarily.

When we implement systems like CPOE, clinicians can become like a child with a calculator who doesn’t necessarily think through the computation they’re performing, but just believes the answer they’re getting. That is the biggest problem with automation systems. We’re blinding the provider at the point of care into believing systems like that without really thinking through whether something is right or wrong.

I think utilization of CPOE evaluation-type methodologies, and critical review of them and the kinds of errors they lead to, is very important as a go-forward strategy, especially if we are mandating that solutions like this be implemented, which is pretty much what the ARRA, the HITECH Act, provides for.

EHR adoption is going to create tons of electronic patient data from all these systems at some point. What do you think the best use is, both at the organizational level, and the population level, for all of that data that we’ll suddenly have available?

I think it will be a blessing for monitoring specific populations of interest. It will be of benefit to apply continuous quality improvement methodologies. It will be very important for organizations to utilize some or all of that digitized data to be able to tell more about their own environment, where care is being provided, and how well care is being provided, from a combination of dimensions of interest: patient safety; quality; compliance with clinical care protocols; adherence to certain standards of care from the perspective of providers, in terms of the outcomes that are occurring; and finances.

I mean, we shouldn’t forget that care should be provided in a very cost-effective fashion. Today it’s very difficult, for example, for an institution that is collecting all of the quality data to even determine what the improved quality-to-cost ratio is. If they’re collecting all of this data to monitor and improve care for heart attack patients, it’s very difficult for them to readily determine whether that is actually influencing a decrease in cost or length of stay for the institution, given the availability of data across all of these different dimensions. At the population level, it will become easier and easier to allow for such evaluation to occur.
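As a rough illustration of the quality-to-cost linkage being described, the sketch below pairs a quarterly protocol-compliance rate (say, for heart attack care) with the change in average cost and length of stay over the same quarters. All field names and figures are hypothetical; this is not a real institution’s data.

```python
# Hypothetical quarterly data: protocol compliance rate for heart
# attack patients alongside average cost and length of stay (days).
quarters = [
    {"quarter": "2009Q3", "compliance": 0.78, "avg_cost": 14200, "avg_los": 5.9},
    {"quarter": "2009Q4", "compliance": 0.84, "avg_cost": 13650, "avg_los": 5.5},
    {"quarter": "2010Q1", "compliance": 0.91, "avg_cost": 13100, "avg_los": 5.1},
]

def quality_cost_trend(rows):
    """Pair the change in compliance with the change in cost and
    length of stay between consecutive quarters."""
    trend = []
    for prev, cur in zip(rows, rows[1:]):
        trend.append({
            "quarter": cur["quarter"],
            "compliance_delta": round(cur["compliance"] - prev["compliance"], 2),
            "cost_delta": cur["avg_cost"] - prev["avg_cost"],
            "los_delta": round(cur["avg_los"] - prev["avg_los"], 1),
        })
    return trend

for row in quality_cost_trend(quarters):
    print(row)
```

Even this toy version shows the point being made: the calculation is trivial once quality, cost, and length-of-stay data sit in one place; the hard part is that institutions rarely have all three dimensions available together.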

I know that you have some products that relate to real-time surveillance. What are the results? What are hospitals doing with that?

Real-time surveillance, again, is applicable only to institutions that have good, digitized, HL7-formatted data that is regularly available. Naturally, that limits the utilization of those tools today to perhaps less than 10% of the institutions within the United States. Those kinds of solutions are easy-to-use, very readily configurable solutions that allow a clinician to set up rule sets, different rules that are very clinically sound in terms of what they’re looking for.

For example, a drug-bug mismatch. Based upon the data feed coming in from the lab, if a particular microbiology result indicated that a patient had a particular bug that was sensitive to a particular medication, the system will then look into the pharmacy order entry data coming in to determine whether the right drug is actually being ordered for the patient. If the right drug is not being ordered, it will raise an alert. It will raise a flag. That flag is brought to the attention of the care provider.

Today, EMR systems do not have that degree of robust decision support rule sets built into them, because it’s extremely customized and cumbersome to manage and maintain. The surveillance solutions that we provide take the HL7 data feeds, the clinician configures whatever rule sets they want, and the alerts go out to the people who really need to be taking care of the patient.
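The drug-bug mismatch rule described above can be sketched as a simple check over parsed lab and pharmacy feeds. This is an illustrative sketch only: the record shapes and the `susceptible_drugs` field are hypothetical stand-ins for parsed HL7 lab (ORU) and pharmacy order messages, not Quantros’ actual rule engine.

```python
# Illustrative sketch of a drug-bug mismatch surveillance rule.
# Record shapes are hypothetical; a real system would parse them
# out of HL7 lab-result and pharmacy order-entry feeds.

def drug_bug_mismatch_alerts(micro_results, pharmacy_orders):
    """Flag patients whose ordered antibiotic is not one the
    cultured organism tested susceptible to."""
    # Index susceptibility results by patient
    susceptible = {}  # patient_id -> set of effective drugs
    for result in micro_results:
        susceptible.setdefault(result["patient_id"], set()).update(
            result["susceptible_drugs"]
        )
    alerts = []
    for order in pharmacy_orders:
        effective = susceptible.get(order["patient_id"])
        # Only evaluate patients who have a culture result on file
        if effective is not None and order["drug"] not in effective:
            alerts.append({
                "patient_id": order["patient_id"],
                "ordered": order["drug"],
                "expected_one_of": sorted(effective),
            })
    return alerts

labs = [{"patient_id": "P1", "organism": "E. coli",
         "susceptible_drugs": ["ciprofloxacin", "gentamicin"]}]
orders = [{"patient_id": "P1", "drug": "ampicillin"}]
print(drug_bug_mismatch_alerts(labs, orders))
```

In the product being described, a clinician would configure rules like this through the application rather than in code, and the resulting flag would be routed to whoever is caring for the patient.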

I had one question about one of your products, your Clinical Café. Can you tell me about that?

That’s sort of a pet project of mine that was conceptualized about three years ago at one of the IHI conferences. The premise behind ClinicalCafe.com was to provide an environment … because Quantros services over 2,000 hospitals, we have a very rich collection of like-minded people around quality, safety, compliance, operations, CMOs, and the like, who are part of the user community.

The idea was: how do we bring all of these people together in an environment where they can connect with each other and share best practices or information, so that they can collectively learn? Because what is working in one organization will probably benefit another organization. How can they inform each other proactively, by setting up peer groups and things like that, so that they can collaborate? It’s sort of like creating an environment that utilizes social networking to provide very rich collaboration for learning purposes and improvements in safety and quality. That was really the premise of Clinical Café.

It took about two years to gel the idea together. We launched ClinicalCafe.com at the last IHI conference, in December 2009. We launched it there, and now we are integrating it with our applications to invite each and every user into the Clinical Café environment so that they can begin to interact with each other. It’s an open, shared learning platform, not a proprietary platform. It’s really for any clinician, any provider, anybody interested in safety and quality, to be able to learn from each other as opposed to, perhaps, going to disparate sorts of places.

Normally, learning is amongst each other. We tend to learn from each other much more readily. We share a lot of information with each other. Hopefully, the environment will provide for that. I would encourage you to sign up and become a member.

Anybody can become a member?

Anybody can become a member. The way that I like to introduce it is sort of like a Facebook for professionals within the healthcare community that are interested in improving safety and quality. I think, as of the last count, we had about 1,500 people that had already joined. It keeps on increasing by about 30-40 people every day. Hopefully, we’ll have a critical mass very soon.

This may be a question that you’re not comfortable with answering, but I’ll ask. If a hospital, from your perspective, wanted to make IT investments right now, with patient safety in mind, based on what you know and what you’ve seen and what your data tells you, what kind of technologies do you think would make the most sense?

There are lots of technologies that people are proposing out there for the point-of-care environment that I don’t think are necessarily ready. I think there needs to be a lot more critical evaluation done, in terms of the technologies that are being proposed out there. Very, very few vendors out there have ready technologies that would allow for very effective solutions at the level of point-of-care. A lot of the EMR systems out there are documentation systems. They’re not necessarily aligned for collection of very good, discrete data elements that can be utilized in a very meaningful fashion. We have to still evolve.

We should be very careful because some of these purchases are going to cost millions of dollars and they’re not necessarily that easy to replace because you’re going to embed them within the fabric of your enterprise. A number of people will have to work with them.

The other issue with a lot of point-of-care systems is that clinicians are very averse to any new technology or change. I think adoption of these technologies, and ready usability if you have some or all of them, is very important. I’ll highlight one key point. For example, if I’m a physician practicing at three different institutions, and those three institutions are using three different EMR or EHR systems, I now have to learn three different systems to interact with and actually work with. How can the EMR system just take my credentials and perhaps make that into a common user interface for me that is to my liking?

I think technologies like that have to basically be researched, and I think we’re still at the very nascent stages of that evolution.

Over the next five years, what are your plans for the company?

The plan for our company is to continue to confirm our position as an industry leader in terms of what we provide for furthering patient safety, quality, and transparency for some or all of the data that we help our institutions collect and report. Our focus is squarely on how we bring about more actionable information for key stakeholders within institutions to be able to address patient safety, quality, and their business. The outcome expected from the implementation of EHRs or EMRs is basically improved safety and quality. That’s pretty much what everybody is proposing out there.

Our focus is squarely on that, and as a company, we are hoping that we’ll be the industry leader in terms of really providing that very soon. I’m sure that if you follow us, you’ll begin to see some or all of that come about.

Any other thoughts?

I do want to again highlight the value of environments like Clinical Café. In the new day and age of younger people who are beginning to interact with learning tools and collaboration platforms, I think social media has a long way to go in furthering advancements within healthcare that we, perhaps, have not yet taken. Mobile technology does as well. The work being done by organizations like Cisco in healthcare, with their collaboration platform, is very innovative, and it is very interesting to see that come about from a provider of hardware solutions.

I think the industry is going to want to see more and more, in terms of solutions like that, because they will provide for the easy, shared learning that needs to be there. We shouldn’t be closeted about certain things that are working within our environment and not really have them readily shared because they will save lives and improve the care that we provide to people. We all need to be able to benefit from all of that very readily.

I think the technologies and the adoption currently being driven by ARRA and the HITECH Act provide a strong push, and I applaud it. I applaud the benefits that the government is providing. But it could also create a ready environment for very rash decisions that might not necessarily further an organization's goals, because organizations are really being pushed. Not every organization has the core capabilities to evaluate each and every solution it is being required to implement to meet ARRA or the HITECH Act.

Four years is not necessarily a very long period of time to implement some or all of these solutions. I'm hoping that through your blogs and your talks you are highlighting that, because the push could take the industry in the wrong direction as well.

HIStalk Interviews John Halamka

April 7, 2010 Interviews 18 Comments

John D. Halamka, MD, MS, is chief information officer of Beth Israel Deaconess Medical Center; chief information officer at Harvard Medical School; chairman of the New England Healthcare Exchange Network (NEHEN); chair of the US Healthcare Information Technology Standards Panel (HITSP)/co-chair of the HIT Standards Committee; and a practicing emergency physician.


How would you describe, if you had just a couple of minutes, how stimulus funding will change healthcare IT as an industry?

If I look at my own region, we have docs who were all waiting on electronic health record implementation because there wasn’t a value proposition. They said, well, gee, you know I can get this Stark safe harbor, I know the hospital can help out, but still, my office manager’s going to quit. I’m going to lose productivity for three months … what a hassle.

Now with the HIT stimulus funding, they say, “Wait a minute. I get 85% funded by the hospital and I get to keep the $44,000 when this is all done? OK, where do I sign up?” It’s truly accelerating physician adoption by motivating them to move forward.

What I really like about Meaningful Use is it is constructed so that the doctors are paid only when they’re done. That is, it isn’t go buy hardware and software and it’s going to be Christmas for vendors. It’s the fact that docs then have to e-prescribe, and docs are going to have to share data with patients, and docs are going to have to use quality measures. Only when you do that do you get paid.

The mindset of the clinician is, “Ah, I’m going to do it, and now I know exactly what I have to do. Help me out.” So I, as a hospital organization and my community, can work together to make all that happen. It’s an alignment of industry, academia, and practices like I’ve never seen before.

Do you think there’s a risk that they’ll get enticed enough to at least start the journey, but then because of usability issues or just lack of time, it will never really go anywhere?

I wrote a blog, which some people have criticized me for, that said I actually trust ONC. David Blumenthal and the gang he has put together are very good people. If what they discover is that, as we are actually rolling this thing out that there are barriers, then I believe they’re going to help everybody work through the barriers.

I really don’t think that this is a disconnected ONC that is going to force us to do things that are too hard and cause people, as you described, to begin the journey and then fall off. What they’ll say is, “We’ll build the toolkit. We’ll help with the accelerators. We’ll break down the barriers. We’ll make sure you have the resources.” I actually feel good about people getting to the finish line.

Do you think it was a mistake to combine what should be a fairly thoughtful introduction of electronic health records with the urgency of stimulus funding?

My experience in healthcare IT is, unless you create a sense of urgency, nothing gets done. I would rather see us all move forward with great haste and get as far as we can, then along the way do a mid-course correction, than to say, “You know, we’re going to wait five years and then we can get it perfect.” There’s a lot to be said for moving the industry forward now.

These are not new products — they’re the same ones doctors didn’t want before. Do you think there will be some buyer’s remorse?

I love seeing the vendors react by creating new functionality. Certainly they’re much more open to healthcare information exchange and patient engagement than ever before, so in some respects, yeah. It may be products that have existed, but there are feature sets that have never existed.

Then with the modular EHR certification approach that’s been proposed, there’s a capacity for combining many EHR and EHR-lite modules together in a way that’ll get docs started. I think there’ll be new market entrants and new features. I don’t think it’s going to be business as usual.

Will there be time for new market entrants given that people have to get on the train really quickly?

I’m now driving, actually, through Westborough, Mass., where eClinicalWorks is located. What I’m seeing these guys do is focus on patient portals and something called provider-to-provider exchange. It’s like a Facebook function. They’re introducing all this new stuff very, very quickly.

I know the timeframes are crazy, but they have been able to innovate to adapt to ARRA requirements pretty rapidly. You’re seeing Athena move out its athenaClinicals product pretty rapidly. Software as a Service is becoming more and more common, and probably it’s because of thinner, Web-based Software as a Service architectures they can move fast enough to meet some of these deadlines.

What do you think is the majority of the work that needs to still be done to really get us down the path to getting potential benefits?

80% of what I do is people, training, workflow redesign, and process re-engineering. Only 20% is the technology stuff. When I write blogs about this stuff, I just focus on the workforce, focus on the people, and focus on the change management. That’s all the really hard work.

Yes, there are things that have to be done in Washington; and as you’ve seen coming out of ONC in the last week, consent models. How, if we’re going to do information exchange, do we ensure the patient controls the flow of their information? How do we do simple things like controlled substance e-prescribing, making sure that the workflow around writing Lipitor and writing OxyContin is pretty similar? How do we ensure that?

No interoperability is ever going to be totally plug and play, but if it’s not USB-drive plug and play, can it at least cost a couple hundred dollars, not a couple thousand dollars, to get a lab interface? It’s the work on specificity, on content, and on transmission that still needs to be done. It’s primarily the work on process transformation and workforce development, and then some of the things the Policy Committee and Standards Committee are doing on privacy, like cleaning up e-prescribing and making the standards easier to use and more prescriptive.

Do you think federal funding makes it too easy to forget there are workflow changes involved?

I just met with these folks at Lawrence General and they had thought they were ready to go into a procurement phase. I said, well, let’s look at what Meaningful Use really requires. You know, what is your strategy for your local public health interface? What is your strategy for bi-directional data exchange for the community? “Oh yeah, this is a whole lot about workflow, isn’t it? It’s not about bits and bytes.”

As people begin to understand Meaningful Use, they really will understand the community and the workflow and not just the products.

You mentioned privacy. Are there currently debates going on about what form that should take or who should be involved?

I think there are two kinds of architectures that will protect privacy. One of my favorites, of course, is the idea that the medical home, the patient, becomes the steward of their own data. We send the data to them and they elect privacy preferences — who and what they’re going to share with.

Alternatively, of course, there is the clinician-to-clinician exchange. That is really going to require a persistent declaration of patient privacy preferences: if I the patient am not going to directly control the exchange, how can I declare my preferences so that those who do exchange data, whether providers, payers, public health, etc., always use my declared privacy preferences when data is being exchanged?
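
The persistent declaration Halamka describes can be sketched as a small data structure that any exchange consults before releasing a record. This is only an illustration: the recipient classes, data categories, and field names below are assumptions for the sketch, not drawn from any published consent standard.

```python
# Illustrative sketch of a persistent patient privacy declaration.
# Every name and category here is hypothetical, not from a standard.
from dataclasses import dataclass, field

@dataclass
class PrivacyPreferences:
    patient_id: str
    # recipient class -> set of data categories the patient allows
    allowed: dict = field(default_factory=dict)

    def permits(self, recipient_class: str, category: str) -> bool:
        """An exchange checks this before sending data to a recipient."""
        return category in self.allowed.get(recipient_class, set())

prefs = PrivacyPreferences(
    patient_id="pt-001",
    allowed={
        "provider": {"labs", "medications", "notes"},
        "public_health": {"immunizations"},
        "payer": set(),  # this patient shares nothing with payers
    },
)
print(prefs.permits("provider", "labs"))
print(prefs.permits("payer", "medications"))
```

The point of the sketch is that the declaration persists with the patient, so providers, payers, and public health all evaluate the same preferences rather than each keeping their own consent records.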

The HIT Standards Committee, over the course of the next few months, is going to be taking testimony on what standards exist that will help support such a thing. That, in combination with work on the policy side and things such as the consent white paper, I hope gets us to a place where either EHR-to-EHR or EHR/PHR/EHR exchanges are ultimately controlled by patient preference.

You mentioned that a lot of data will be collected and exchanged. When will we start seeing the benefit of all the EHR-created data that isn’t out there now, and who do you think will use that to advance the practice of medicine?

Of course, 2011 is more about getting the data in electronic form to begin with, and 2013 more about getting data exchanged. But some beacon communities, some early adopters, I think, by 2011 are going to have substantial improvements in data sharing.

In the Boston area, I funded the creation of a quality registry for 1,560 providers that are loosely affiliated with Beth Israel Deaconess so that we begin to do all of our pay-for-performance, all of our PQRI and Meaningful Use reporting, as a community, rather than as a bunch of individual point-to-point connections. We’re doing public health reporting for the city of Boston in a common way as a community. All of this will be live in 2011. So for some, 2011. For many, 2013. For the majority, 2015.

Do you think there should be a relationship between having more technology and being able to deliver care less expensively?

That is a very good point. What we all want to achieve is high-value care where reimbursement is based on quality rather than quantity. I think the answer to your question is a couple-fold, but everything that I do these days is Software as a Service. I’m able to deliver an EHR at a lower cost than normal because I have so many clinicians sharing resources, sharing a data center, and sharing interfaces.

My hope is that I can at least, from my IT perspective, reduce the cost of implementing Meaningful Use. Then, we will gather data from a quality perspective that can be used in accountable care organizations and new mechanisms of reimbursement so that, as you pointed out, reimbursement will be fair based on the outcomes that are achieved.

Do you think technology is ready to help offset or mitigate in some way the shortage of primary care physicians?

This is an excellent point. What you hope, coming out of healthcare reform, is differential payments for primary caregivers and accountable care organizations. If I look at the Harvard Medical School experience, the number of folks going into specialty or procedural areas far exceeds those going into primary care. If you’re going to have effective reform, if you’re going to have lower costs, we need more primary caregivers.

Sure, as you point out, maybe technology can help us use extenders wisely, whether that means some tasks can be delegated to nurse practitioners and physician assistants, or some decision support can be offered in the cloud, so that we are delivering coordinated, better care more effectively by using technology rather than physician time for every intervention. All of this still presupposes that we have the primary caregivers who can actually be at the center of the medical home. In my view, you need to redo reimbursement so that the primary caregiver is the one making more than the specialist, not vice versa.

What about telemedicine?

We use telemedicine today to connect rural or community hospitals and emergency departments with downtown Boston for such things as real-time stroke consultation for the administration of tPA. You’re able to give the academic health center a far greater reach through the use of telemedicine.

We’ve done a lot of experimentation with remote visits and home monitoring, again leveraging telemedicine as a mechanism to make a primary care physician more efficient. Actually, the patients like it because they don’t have to travel into the city. Or we do interventions like measuring blood pressure and daily weight, then have a team of nurses do home care remotely and keep people out of the hospital. I certainly agree that telemedicine can have a role in reducing cost and using time more efficiently.

What do you think the Nationwide Health Information Network is going to look like and when will we start seeing it deliver benefits?

You’re probably familiar with the NHIN Direct efforts that have been kicked off over the last two weeks. The idea of the NHIN, obviously, is a set of policies, some open source technologies, and reference implementations to exchange data among various participants: provider, payer, government, etc. The idea in NHIN Direct is that there are some interactions that are simpler, like pushing between two doctors or pushing to the patient.

Actually, what you hope is that this becomes a fairly thin, Web-based mechanism of sending data from point to point at very low cost. Here’s an idea. What if every person who wanted to participate in a patient/doctor exchange could sign up for a healthcare URL? Many people could offer this health URL, whether Microsoft, Google, Dossia, who knows, various software vendors, and all you need to do to use it is take it to your doctor and say, “Doctor, here’s my health URL. Every time there’s an entry in my record in your office, push the data to this health URL.” There’s no HIE, there’s no transaction fee, there’s not a lot of complex business structure needed. It’s just an HTTPS post.
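
The "just an HTTPS post" idea really is that thin, and a minimal sketch can show it. Everything below is an illustrative assumption: the patient-chosen URL, the payload fields, and the function name are invented for the example and are not part of any actual NHIN Direct specification.

```python
# Hypothetical sketch of pushing one record entry to a patient's
# "health URL" as a single HTTPS POST. Names and fields are
# illustrative assumptions, not any real specification.
import json
import urllib.request

def push_record(health_url: str, entry: dict) -> urllib.request.Request:
    """Build the HTTPS POST that would deliver one record entry."""
    body = json.dumps(entry).encode("utf-8")
    return urllib.request.Request(
        health_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# The practice's EHR would send this after each chart update;
# here we only build the request rather than actually transmit it.
req = push_record(
    "https://phr.example.org/patients/jane-doe",  # patient-chosen URL
    {"type": "lab-result", "test": "HbA1c", "value": 6.1},
)
print(req.get_method(), req.full_url)
```

This is the home-banking-simple connection he describes: no intermediary exchange, no transaction fee, just a POST to a URL the patient controls. A real deployment would of course add transport security and authentication on top.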

What I hope is that, sure, for governments and larger organizations, there will still be a NHIN that has quite a lot of security in its infrastructure. But you hope for a lot of connections that can be as simple as the home banking connection you have, an HTTPS post that just bakes right in to every EHR.

Some of the folks that have gone into federal service work lately are interesting, like Todd Park and Don Berwick. What do you think that means that people who aren’t lifelong civil servants are popping up out of the private sector and going into federal work?

Knowing Aneesh and Todd and Don Berwick pretty well, these are people who have passion. They’re now able to see change is possible and resources are available. I think they believe that, in the current administration and the current time in history, it’s not business as usual and they’re willing to put in their energy and their passion to making change.

That’s why I write in my blog, these truly are the good old days of healthcare IT. I know I’m putting a significant portion of my time into state and federal efforts on a volunteer basis just because I believe I can make a difference.

You mentioned that you have a lot of respect for ONC as an insider to this whole process. Was the outcome what an idealistic person would have expected, or was this such an ugly compromise that nobody leaves happy?

I will tell you, sitting in the HIT Standards Committee and the Policy Committee and on calls with ONC; the amount of positive energy, as opposed to the amount of negative energy and compromise, is totally different than any other process I’ve been involved in in the past. People who have very different opinions come together and they say, “God, here’s what I want to achieve to improve patient care and quality and efficiency.” Everyone says, “Well, there’s two or three ways we could do it.” I’ve seen harmony rather than ugly compromise come out of each of these processes. That’s why I’m very optimistic.

When you look at your own organization, what are your biggest challenges and highest priorities at Beth Israel Deaconess?

I’ve laid out a 25-step plan to implement Meaningful Use across the organization. The hardest part of it is that it is not just one actor. A hospital is not an island. It’s ensuring that you have trust in your community so that you can do these data exchanges across the various providers, public health, payers, and government. Across my 25 projects, it’s been relationship-building more than technology implementation that’s been the hard work.

Is there anything else you wanted to talk about?

I just have to say that you do a great service for humanity. Somebody has made this comment to me, that you have become not the National Enquirer, but The New York Times of our industry. It’s built on transparency. People, just like all the stuff I’m trying to work on, are no longer afraid of this special interest or that special interest. It’s everybody opening up and just trying to get the job done. I think you’ve been a big part of that.

HIStalk Interviews Gary Cohen

April 1, 2010 Interviews 2 Comments

Gary Cohen is executive chairman and CEO of iSOFT of Sydney, Australia.


iSOFT is a significant global player in healthcare software, but maybe not as well known in the US. I’m interested in whether you have plans to increase your visibility and presence now that you’ve started with iSOFT Integration Systems.

I think that the US is in the process of going through an enormous transformation, both in healthcare reform, as we speak, and obviously in relation to some of the effects of the ARRA legislation on how healthcare IT can change the way healthcare is delivered across the US. There is quite a lot of disruption, I suppose, in the US health economy, which is bringing change.

I think that is probably the point I wanted to emphasize. I think that provides a significant opening for us, I believe, particularly where we have specialized around socialized healthcare, or healthcare that is more distributed rather than just utilized in the hospital or in a private care facility. What matters is the movement of information around that network, whether it’s between the various facilities inside a hospital or the various facilities that connect to or interact with that hospital, such as the community and so on.

The architecture and the way in which we have built our latest-generation solution, Lorenzo, has obviously been around that socialized healthcare model. I think when you look at the requirements for Meaningful Use and a lot of the pay-for-performance-type processes, you’re going to need solutions that enable that sort of coverage, particularly as chronic illness processes involve a lot more interaction with multidisciplinary people in a healthcare environment. I think that’s where we do see significant value.

With that in mind, we think we have a technology that is probably quite suitable for the US environment. Therefore, we do look to the US in terms of increasing our exposure there in a variety of ways.

Can you tell me what areas that you’ll specifically target? Will Lorenzo ever be sold in the US, or will it be strictly integration tools?

The answer would be in the longer term, yes, we will be looking at a way of bringing Lorenzo into the US. It’s no secret that we’ve worked very closely with our partner CSC in the UK program delivering Lorenzo. CSC has made a very significant investment in getting to understand, from an integration and delivery point of view, the benefits of Lorenzo into the market. I think one of the things that we would see is that working with organizations like CSC, we believe have significant benefits to the US market. That is a longer-term plan.

I think that what we need to do is look at what is going to be available over the next 6-18 months that is going to be suitable for the market, as against what might be available well beyond that time. There are various products and components from Lorenzo because, if you think of Lorenzo not as a single solution, but as an architecture and a platform with many solutions, then we are able to reconfigure Lorenzo in a form that is more suitable to some parts of the US health economy. That’s ignoring the integration solutions that we already supply in the US, which aren’t unimportant, but are one facet.

We are looking to build a suite of solutions with that integration engine and Lorenzo applications that might, for example, target health information exchanges, aggregation solutions, and solutions that provide an umbrella framework around which other disparate systems can be integrated, but that at the same time adopt workflow processes rather than simply an integration solution or a viewing platform that aggregates solutions up through a portal. If you look at the ways Microsoft is targeting aggregation, we would see ourselves moving down a similar path.

I think of aggregation and dashboard-type solutions, business intelligence, and solutions that complement that process. If we were able to bring some added value into that equation, such as the multi-resource scheduling solution we recently added to our suite, that would help enable some of these organizations to do things that they’re not doing today. If we can start to surround the aggregation solutions in complex healthcare delivery, I think that we’ll fill a niche that will keep us busy for quite a few months.

Obviously, when you have the add-on or wraparound solutions, then you have to get in front of customers or find partners. What do you think it will take to be positioned to get the word out to compete while the money’s beginning to flow, but there’s a narrow window before it will be gone?

I’m probably a bit more sanguine in the sense that I don’t think it’s just going to be a short-term window. Inevitably, I believe it’s going to be a much longer-term window than people imagined. But there is a window, so let’s accept that. It is going to require probably a few things from us.

One, it’s going to require us to build a reasonable-sized platform in the US in one form or another. That could take a number of forms. That could take a form of — and these aren’t necessarily, mutually exclusive — investing more resources ourselves into the market, which is what we’re doing. We’re building not only what we’ve got around, what would be required in Boston, but we are bringing more and more resources out of our UK facility, our European facility.

With some of the people, rather than basing themselves in Europe; we’re relocating them and basing them in Boston. That’s starting to add some more high-level, intellectual-type fire power to that. We’ve recently recruited a senior operations director from Carestream who was formerly a CTO of Kodak in healthcare. He’s a global position, but based in the US as well. We’re starting to populate that.

Secondly, we are looking at a significant number of partnerships that we can engage within a more meaningful way, both from a distribution point of view as well as a technology point of view. Those discussions are becoming more critical and intense, and we hope to get some significant progress in those in the immediate future.

Three, we’re also looking at acquiring a platform. Obviously, that would in turn mean that we’d gain a more significant position in the US through that platform, and then we could leverage a lot of our own products and technology on that platform. That’s another discussion. I’m certainly not going into much more detail on that, but they are all on the table for consideration.

I don’t want to press you on the point, but when you said “consider acquisition of a platform,” did you mean a hospital information system or an integration platform?

I think for us, there are two parts to our business model. We have a lot of product outside the US, and many of those products, over a period of time, will be very valuable inside the US market, though they may ultimately need to get referenceability in the US. You did ask me what else we require, and obviously referenceability inside the US is going to be important for us.

But then secondly, you need to have significant capability from sales and marketing and distribution, and so on, inside the US, in terms of scale. Obviously, that’s something that we’re giving serious consideration to how we achieve that scale from a sales and marketing point of view, and distribution.

The second element is in relation to technology. Most countries have technology that is very country-specific because of functionality. If you look at most health information systems on a global stage, whether it’s the patient management system or the financial solutions or whatever, there are certain things that are not ubiquitous and they require very specific point solutions. There’s no doubt that the US equally has its own specific solutions for certain areas.

It may be useful for us to look at ways in which we could work, whether through acquisition or partnership, with companies that have certain solutions but don’t have others. That’s one of the things we’re closely focusing on, and those solutions would have to be complementary to our product suite. For example, a hospital information system company in America, per se, doesn’t really add a lot of value, because we have that value elsewhere in the world, right? Just going and acquiring an HIS solution, or partnering with someone with HIS solutions, wouldn’t necessarily be as complementary as something more synergistic. Those are the sorts of things we’re looking at.

How would you grade the progress that’s been made and the value that’s been delivered by the NPfIT project?

That’s a very pertinent question. If we strip all the emotion out of it, and the political dramas and theatrics that go around it, I think you’d have to say there are some parts of the UK program that have been enormously successful and have done very well. Other parts are in progress, but the progress probably has not been as fast and as good as it should have been, from a holistic point of view. If you look at the overall arc, is the program, in terms of its over-arching ambitions and what it’s trying to do, a good program, and is it going to get there? I think the answer will be, absolutely, yes.

I think really, if you take all politics aside, I don’t think anybody would suggest that the program is going to stop and it’s all going to go backwards, because really, there’d be a no man’s land and no viable alternatives. By the way, that doesn’t necessarily mean that the end is worth it, but if you look at where they are trying to get to and what they’re trying to achieve, I think it’s fair to say that the goals that were set out are very good.

I think the problem is that some of the ambitions, and the way in which some things have been done, have been too ambitious, and the program probably hasn’t had the necessary capabilities around its systems to move at the pace of the goals that were set by the NHS and by the government at the time. That set expectations, in terms of time scales, that were much more difficult to deliver. Therefore, people could always refer to the fact, “Well, you promised X on a particular date,” or, “You promised X within a year or two years.” Once you pass that date, you can always refer back: “Well, the program’s late.” Saying it more often doesn’t make the program any later. It just is late, right?

There’s no doubt that the development of the Spine that connects the top and bottom of England together, enabling records to be transmitted through the health network, has been a very successful development. There’s no doubt that the connections of the primary care facilities onto that Spine, and most of the hospital institutions onto that Spine, have delivered enormous potential capabilities in the way healthcare records can be transmitted, as well as the admission and flow of information into hospitals and so on, by doctors.

Thirdly, the digitization of radiology and some of the diagnostic solutions has been very successful. The more difficult part of the program, if you like, difficult because it is complex and because of the ambition involved, was to put in place the electronic health record solution in each hospital trust: to basically replace all the legacy systems that existed right throughout all those trusts. I think it’s that part of the program where the difficulties occurred. I think it’s that part of the program that probably should have been done in slightly different stages, but it is that part of the program which ultimately will lead to the biggest benefits and to a successful outcome. It is on track. It is late. They need to accelerate deployment and they need to reset some of the expectations around delivery.

I think the NHS have probably appreciated the complexity a lot more themselves, and have probably reshaped the program and are currently reshaping the program to ensure that it is going to be able to reach some of those goals more quickly. But that’s probably a small snapshot. I’m happy to elaborate if you want me to, but that’s basically a small snapshot.

It seems that in the UK, you can’t separate the politics from the technology. Do you think that there will be similar challenges in the US as the federal government gets more involved in healthcare IT and gets equally ambitious to roll out these huge national projects that are certainly going to involve some uncertainty and some huge expense?

In my opinion, healthcare is a social thing. At the end of the day, part of the problem is that you just can’t leave it to private industry to sort out, because it’s so interconnected to the political fabric of a country in one way or another. Whether directly or indirectly, we all contribute to the healthcare budget. You probably don’t really think about contributing to the budget of a large corporation, if you will. Healthcare always has a very large public sector element to it, in some form or other, whether subsidized or for social reasons.

Government does need to get involved, and I think part of the problem is government is never sure how to involve itself. Part of the experiment in the UK, which was probably good and equally bad, is that they got involved, but the way they got involved could have been better framed. The UK is a very specialized case because it uses a national healthcare system, principally, controlled centrally even though it might be distributed through various bodies like NHS trusts and strategic health authorities and so on. It’s effectively a centrally controlled system, whereas the US is a far more fragmented, non-centralized system where the central government tries to help with policy design and so on, but allows industry to make its way.

I think if the federal government or the national government in the US were to be far more active, in terms of programs and structures, then probably one of the things it would learn from the UK is, perhaps, to ensure that there is far more participation at an earlier stage. There’s far more buy-in, and there’s far more flexibility into the system. You need to have a system that doesn’t just pick winners, but allows the market to pick the winners while at the same time, ensuring that you encourage the market to go out and spend to pick those winners. You might put incentives and rules and programs in place, which is a bit like what ARRA’s trying to do, and then allow the market to do it.

It probably needs to be a bit further along than where it is at the moment, but I think the more that a government tries to identify itself with one or two parties, even if they are the right parties, the more everybody else is disenfranchised and becomes an enemy. Then they spend their life just chipping from the sidelines, which is roughly what happens in the UK today. It’s much better at the end of the day, I think, to allow market forces to make that selection in a way that isn’t necessarily centrally driven, even if the programs are centrally driven.

Richard Granger was really hard on NHS vendors, making them compete and telling them they would be replaced. But looking back, almost no contractors are left, and now the government is trying to loosen up the payments because the terms were too tough. Was there a lesson learned about how hard you can push a vendor?

In my opinion, whether it’s at the smallest end of the scale or the largest, you need a partnership for delivery of healthcare solutions of this complexity. You’re not going to a shop, buying a piece of commodity software, and walking out without ever having to see the shopkeeper again. Whether that works or doesn’t, you don’t really have a relationship with the vendor of that software. You just put it in your system. It either works or it doesn’t. You might be pissed off with the vendor, but that’s a reputational issue. You don’t really have a relationship.

Complex healthcare delivery solutions at the level we’re talking about require very significant interaction and partnership between the providers and the integrators and the government, or the providers of the services, the users. If you don’t have that partnership, if terms are dictated that become more and more unreasonable, if the relationship starts to get one-sided by either side, then basically it starts breaking down under the complexities of the solution, which often requires a lot of flexibility as time goes on and as understanding of the market changes.

If you look at the UK, this was designed back in 2003, right? I think it got underway in 2004, six to seven years ago. So what’s happened in six to seven years? Requirements have changed. The economic circumstances of governments have changed, and so on. If you don’t build that flexibility into the relationship, then the whole thing becomes… You know, you can’t document everything in a contract, so ultimately, the more contractual and the more one-sided you make it, the more difficulty you ultimately create in that relationship.

There are a lot of observers putting a lot of importance on the Morecambe Bay go-live because of the payments it triggers and the deadline that supposedly is out there from the NHS. Do you think that’s overestimating the importance of what’s going on there?

Morecambe Bay is going to go live. No one is suggesting that it is not. I get a bit sensitive about this, particularly because of the contractual arrangements, but you’re talking about a very technical integration program where a lot of historical data from all the systems has to be integrated into the new systems, training has to occur with a lot of people, and so on. The last thing you want to do is go live and not have a successful integration.

In any program, or any delivery, if it slipped a day, a week, or even a month, everyone would say, “Well, OK, that’s not the end of the world.” But what happened is that Christine put a date out there that was a bit of a mark in the ground for the go-live of Lorenzo 1.9 at Morecambe Bay, in terms of its importance for CSC.

I think that, leaving aside whatever contractual arrangements CSC has or hasn’t agreed with the NHS, the real issue is whether Morecambe Bay is happy with the solution, which we know they are. They have been testing it in their environment now for a number of months, and they’ve also been using the older version of Lorenzo. If the trust has made a commitment to go live, which we know it has, then the fact that it might be delayed by some weeks, whatever contractual arrangements exist between CSC and the NHS because of the payments, is not in any way a train wreck.

OK, yes, I would have preferred it to happen earlier, but the fact is that we are talking about groundbreaking new technology in a complex integrated trust environment. The one thing I can assure you is that, technically, the solution is working and delivered. Technically, the go-live has already occurred in every other environment in the primary care trusts. So, it’s going to happen. I’m sure there’s going to be a fair degree of political emotion and rhetoric around it, for the reasons we’ve discussed earlier.

Of course, your company suffers from that because shareholders look at the uncertainty, know that you’re a major player and that it’s a significant part of your revenue, and would have some concerns. Is there anything you can do to reassure them, or are they misled into thinking that it’s that important?

We have to understand, number one, that the NHS program today represents less than 20% of our total revenue. It’s not our total business. It’s a significant part of that business, but the other 80% of our business, and actually more than half of our UK revenue, has nothing to do with the national program. We’re quite a major player in quite a number of things, so I want to at least put it in context. But notwithstanding that, you’re 100% right. There’s a lot of focus, there’s a lot of attention, and it gets a lot of air play even though it is only 20% of our total revenue, because it’s seen to be a major growth engine and potentially, if it doesn’t work very well, a risk factor.

The reality is that, fortunately for me in some ways, my year end isn’t until 30 June, and a lot of the major milestones and deliveries are scheduled to take place between now and 30 June. I am not in any way looking to stress out, or thinking that our investors should be stressing out. But unfortunately, there are a lot of people who make lots of noise, and sometimes you just have to allow that noise to occur, because what can you say other than to let the facts speak for themselves? Sometimes that just takes time.

When you look ahead for five years or so, what are your plans for the company?

I think that we sit on enormous potential for delivering health solutions, if I could call it that, across the health continuum. With the pool of intellectual capability that sits in our organization, together with the product know-how and technology that we’ve invested in, and what we can potentially create by building upon that, I think that we can be a world leader in healthcare IT and span the globe. Not just in the 40 countries we operate in today, but in a broader number. We’d also be far more significant in some of those countries where we would obviously like to have significant influence, which hopefully must mean, we would like to think, that by that stage we would be a substantial player in the US market.

The opportunity to achieve that means that we will need to grow, and hopefully that growth will be commensurate with very substantial profit returns to our shareholders and those around us. That’s certainly our aim, our intention, and our desire. I think we’ve got a good team of people around us to help us achieve that. Even though we have challenges at the moment, including that we’re not in the US and the uncertainty around the national program, I think over the next 6-12 months a lot of that will be behind us. That will really enable the company to propel its success further.

An HIT Moment with … Jeffrey Levitt

March 31, 2010 Interviews 2 Comments

An HIT Moment with ... is a quick interview with someone we find interesting. Jeffrey Levitt is chairman and CEO of Precyse Solutions.

What are the key issues involved in moving traditional HIM departments to paperless and EHR-based operations?

Without a doubt, physician adoption. Physicians want to focus on delivering quality care and avoid spending time adapting to a new system or altering their workflow.

There are many issues involved in the transformation to a paperless EHR environment. However, we often receive questions about how to manage the changes that people will have to go through. They must re-think their workflows, processes, and tools that have changed as a result of the investment in the EHR. Many have a hard time giving up things that they understand to embrace change and something new — knowing these new paperless systems may potentially result in job losses in a difficult employment environment.

Coupled with change management tasks are training and conversion issues required under the new systems and workflows. For example, to move to a new dictation and transcription platform or automated coding platform, transcriptionists, editors, and coders must receive additional training and education. At the same time, the basic core HIM functions and processes must continue, otherwise the revenue cycle will be disturbed and billing and collections will be delayed. An efficient and streamlined conversion strategy, reinforced with proven implementation methodologies, is required to minimize disruption while existing HIM employees are learning a new set of systems and procedures.

How can speech recognition be used to turn provider dictation into electronic documentation within the EHR?

Acquisition of data directly from clinicians remains one of the largest obstacles for EHR adoption and information sharing among facilities. This is caused in part by the difficulty of capturing data in a structured format. Many physicians are reluctant to document patient encounters in a structured format directly into EHR systems because they believe it will require more time and more disruption to their established and preferred workflows.

Recently, new technology has emerged with the potential to bridge the gap between dictation and structured data entry. Solutions have moved from speech recognition to speech understanding, a concept better suited to the EHR decade, which allows physicians to continue documenting clinical information efficiently via natural language, which is analyzed and processed into a structured narrative in real time. A structured narrative fuses unstructured text; gross document structures like sections, fields, paragraphs, and lists; and individual concepts, their modifiers, and relationships — all encoded using standard medical terminologies and nomenclatures.
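
As an illustration of the idea only, not of any vendor's actual schema, a structured narrative might pair the dictated text with the concepts extracted from it along these lines. The field names are hypothetical placeholders, and the SNOMED-style code is included purely for flavor:

```python
# Toy sketch of a "structured narrative": the physician's free text is
# preserved, and the concepts extracted from it travel alongside, each
# tagged with a standard code. Illustrative only; not a real schema.

def build_structured_narrative(section, text, concepts):
    """Bundle a dictated passage with its extracted, coded concepts."""
    return {
        "section": section,    # gross document structure (e.g. HPI)
        "narrative": text,     # the unstructured dictation, kept verbatim
        "concepts": concepts,  # coded entries pulled from the text
    }

note = build_structured_narrative(
    section="History of Present Illness",
    text="Patient reports worsening shortness of breath over two days.",
    concepts=[
        {"term": "shortness of breath",
         "code": "267036007",          # SNOMED CT-style code, illustrative
         "modifiers": ["worsening"],
         "duration": "2 days"},
    ],
)

print(note["concepts"][0]["term"])  # the coded concept, still tied to the text
```

The point of the structure is that a downstream system can query the coded concepts while the original narrative remains intact for the clinician.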

Precyse is pleased to incorporate the M*Modal Speech Understanding technologies in our transcription platform. Utilizing the business logic in our workflow platform and M*Modal’s continuous learning process, speech profiles are established with a new physician’s first dictation, and drafts rapidly improve with continued use. Today, over 80% of our total physician dictations are seamlessly converted into usable drafts, significantly improving transcriptionists’ productivity and providing faster document turnaround. The benefits of accelerated document generation include improved communication between caregivers, faster admissions and discharge processes, and an accelerated billing process that reduces DNFB, to say nothing of the increase in physician satisfaction and adoption.

What coding and documentation issues are currently challenging for HIM departments?

Almost every hospital we encounter has a shortage of qualified coders. Without the ability to code and process charts on a timely and accurate basis, the revenue cycle is disturbed while billing and collections are delayed. At the same time, medical coding is getting more complex because of new medical technologies coming online, changes to the rules of coding, and the coding specificity required by MS-DRGs.

Other problems coders encounter are incomplete charts, or documents that do not contain appropriate detail. Because, to a physician, the primary purpose of clinical documentation is continuity of patient care, charts and records are often not prepared from the perspective required for properly coding provided services. With these complexities, the resulting lack of accurate and complete documentation presented to coders can result in the use of nonspecific and general codes. This impacts data integrity and reimbursement and presents potential compliance issues and recovery audit risk.

To mitigate these risks, coders have turned to time-consuming querying to clarify documentation. According to one of our clients, some of their facilities have seen up to 50% of charts submitted to coders result in a query back to the physician, further delaying the billing process.

To remedy this problem, many providers have looked for outside help. Experienced coders can be brought in on a contract basis, or even work in a remote setting, to ease the burdens on in-house staff. Providers can also contract for coding auditors and educators, and for clinical documentation specialists to work directly with physicians to help them understand the difference between clinical documentation and reimbursement documentation.

What tips would you offer for coding audit and compliance?

We urge our clients to invest in training for their coders, and are glad to assist them with the coding education function. We make the vast majority of our internal continuing education materials available to our clients, as well as our de-identified charts for coding practice and education. In those hospitals where we have responsibility for the coding function ourselves, we conduct regular mock audits in addition to our own efforts to identify areas of our processes and training that need to be strengthened. We also build continuous improvement plans into our standard methods of operation. Finally, our Compliance, Privacy, and Security Officers spend a lot of time in new colleague orientation, and our internal compliance program ensures that we maintain and enhance our own focus on compliance.

How do you see the roles and responsibilities of the hospital HIM department changing over the next five years?

Because more hospitals will be purchasing and deploying more sophisticated EHR systems over the next few years under HITECH, many of the clerical functions will be reviewed and rethought around the absence of the assembled paper chart and the introduction of the electronic record. Multi-hospital systems with size, scale, and resources will begin to use the experience they’ve gained from the regionalization and centralization of their business offices to do the same with their medical records and HIM departments. While there must always be some on-site HIM professionals to handle interdepartmental communications and address physician and patient requests for records, many of the professionals who had formerly been part of the more labor-intensive, paper-based environment at the site of care will find that their jobs have been physically moved to more centralized offices, or to their homes.

Likewise, some of these functions will have been re-engineered for greater efficiency and productivity. We also anticipate the creation of new HIM job categories for many of these workers as we begin to understand how to better extract data from the EHR systems to provide more automated reporting that will be required in our new environment. So, it wouldn’t be surprising to see whole new categories of HIM workers beginning to assist in the preparation of decision support tools, pay-for-performance and other quality reporting information, or aggregate patient information for other uses in the health care system.

It will be a very exciting decade for health care information technology and management, one that will resemble nothing of the past decade. We can thank advancing technologies and mastering new workflows for this anticipated transformation.

HIStalk Interviews Mike Cannavo

March 17, 2010 Interviews 5 Comments

Mike Cannavo, aka The PACSMan, is founder and president of Image Management Consultants.

Give me a brief history of PACS.

Well, for one, PACS finally works, so that’s a real good start. [laughs] Technology has finally caught up to the promises that were made over two decades ago about PAC systems allowing “any image any time, instantaneously,” although we’ve also become a lot more realistic about how we define instantaneously. Customers are also becoming much more educated, although the information they get from vendors is often biased towards a particular vendor’s PAC system. Unfortunately, IT has very few information resources, as most of this material has been geared towards radiology.

Why is that?

Until a few years ago, radiology departments – heck, nearly all departments in the hospital for that matter — pretty much operated in a vacuum and made their own decisions. Now with the ultimate goal to have an EHR/HIE established by 2014, IT plays a much more important role in the decision-making process. Radiology still gets to call the shots on what works best for it, because like it or not, PACS still is a radiology-centric system, but IT needs to make sure it works well and plays together well with all the other clinical systems.

There aren’t a whole lot of IT-specific resources available, but IT can get educated by going into the radiology community. I’ve had two series on Auntminnie.com titled “PACS Secrets” and “Building a Better PACS” that can be a good starting point. Start with the article titled “PACS and Marriage” and move on from there. It will no doubt bring you and Inga much closer together. [laughs]

Aunt Minnie’s PACS discussion forum is also an excellent resource. SIIM and the AHRA both have some incredible educational resources as well, although you usually have to be a member to access them. Many vendors and even HIMSS have begun virtual education using Webinars, though some are nothing but thinly veiled sales pitches, so you have to look very closely at content. I also like Doctor Dalai’s blog; he is about as irreverent as I am. [laughs]

What about RFPs?

RFPs are dramatically overrated. Now my fellow consultants might hate me for saying so, but a good technical spec that provides the vendor with a baseline to respond to, sent out to two vendors, is about all you ever need. It need not be longer than six to ten pages tops and just needs to outline what you have, what you’re looking for, and statements of a similar ilk.

I’ve seen RFPs that read like a New King James Version of the Bible and others that give so little information that they redefine worthless. One of my counterparts actually commented once: “Nothing like a meaty RFP to establish your creds” and I’m thinking, “For whom?” Shorter is always better with an RFP as long as all the information is there. Learn to KISS — Keep It Simple, Stupid.

Interestingly enough, vendors respond to 10-page and 100-page RFPs using identical templated responses. That’s part of the problem we face today. Too many RFPs are being issued and questions are being answered without the right questions being asked or answered, with the way the vendor answers being more important than the question itself. A vendor also isn’t going to rewrite their DICOM conformance statement because you asked for something they can’t or don’t provide. But it is a nice try.

Is writing an RFP a mistake?

Not really, but it has also been my experience with any RFP, be it for a PACS, RIS, VNA, or whatever, that what you see isn’t always what you get. Asking questions, even multiple choice questions, can still get you answers that don’t really address the client’s needs. A solid contract is much more important than the RFP.

Case in point: I recently had a client who did their own PACS RFP, a rather extensive, exhaustive document literally hundreds of pages long, with over 50 pages dedicated to the archive alone. The system they were buying was exceptionally large, addressed many sites, and cost several million dollars. I was engaged simply to do the contract review for them. When I added contract language relating to the archive being "vendor neutral" and containing nothing proprietary in it, the vendor balked.

We went back to the RFP response the client developed, and while the questions about the archive were properly asked, the way the vendor responded made Fred Astaire look like he had two left feet. Basically, while they said they could do it that way, they never said they did do it that way. Therein lies the issue: it all comes down to how you interpret a response. That is why I say that, for the most part, RFPs have very limited value. The contract is what you have to go to court with if need be, and it’s what needs to be made crystal clear.

Very few customers are qualified by the vendors as to their readiness for PACS before they go into the RFP process, either. And who pays? The facility, by having to dedicate more internal resources to the project than it needs to and also by paying a consultant to go over mostly superfluous material. What you see is what you get; recognize that and you’ll save a bunch of time and money.

What if IT ran the RFP process?

It would only make it worse, in my opinion. IT understands IT and radiology understands radiology. This needs to be a team effort with everyone in the hospital working together.

The last project I got involved in where IT was in charge, I was ready to pull my hair out. The person overseeing the PACS evaluation process with IT in charge came to the facility from a Big Six consulting firm and had virtually no understanding of radiology. Process, on the other hand — my God, this person had process down pat. There was constant talk of putting information into this bucket and that silo. I felt like I was living in Hooterville and waiting for Lisa Douglas, Mr. Haney, Sam Drucker, and Arnold Ziffel (the pig) to show up.

We spent nearly a year going through a detailed, highly scientific (by Big Six standards) statistical analysis of the four vendors being looked at closely, only to have the statistical difference between the top three vendors come out at less than 0.02%. I think four points separated them all out of 800+ possible points. But, by God, we did it scientifically. The funny thing was, I told the radiology administrator this would happen before we even started, but alas, her hands were tied. This facility wasted over $100K in internal resources and nearly a year’s time getting back to its initial starting point.

So, no, IT should not be in charge. Again, this needs to be a team venture.

What are IT’s biggest mistakes relative to PACS?

Where do I start? Probably treating PACS the same way they do every other clinical system, although obviously there is some overlap. Radiologists need to feel comfortable with the workstation operation, so regardless of what IT thinks about the system, if the radiologists don’t like the way the workstation operates, or if they feel it will slow them down, they just won’t use it. And if they don’t use it, then that’s just throwing good money after bad. While no one part of the team should have over 50% of the vote in the final decision-making process, the system must fit the radiologists, so unless you plan on changing radiology groups soon, their vote means a lot.

I’ve seen a lot of mistakes made over the years, but thankfully, most were recoverable. One of my favorites was a CIO who insisted on entering into contract negotiations with two vendors at once. I said, “That’s not how it works in PACS,” but we butted heads here big time. His thought process was that by inviting both to the church, he’d make each work harder to be competitive.

In the vast majority of cases, this backfires big time. This wasn’t like The Bachelor, where Jake had to choose between two of 16 beauties (will it be Vienna or Tenley?) to put a ring on her finger. This CIO wanted to take both to the altar in their gowns, bridal parties and families in tow, with the preacher looking at the groom and asking, “Which of these women do you take to be your lawfully wedded wife?” Everyone will be shedding tears, but not everyone tears of joy. And I’ll be sitting there thinking, “What do I do now with the toaster and blender I got them as gifts?” That is such a waste of everyone’s time and money, but alas, I didn’t call the shots.

What did you mean by customers being qualified?

An RFP or tech spec should never be put out on the street unless adequate monies have been approved for the project and are available for release, and a project plan has been developed. Most sites think nothing about putting something on the street as a feeler to test the waters on PACS costs. Unfortunately, they either don’t realize (or don’t care) that responding to an RFP costs a vendor anywhere from $4,000-$10,000 in manpower costs alone. Following it up with on-site visits, customer site visits, etc. adds another $15,000-$20,000 over the project term. So you’re really looking at $20,000-$30,000 for each RFP that gets a response. If a company has an outstanding track record, it stands to close maybe one out of three RFPs it responds to, so the first $60,000-$90,000 of any PACS sale should be considered make-up revenue.
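
The arithmetic behind those figures works out as a quick sketch, using the interview's own estimates:

```python
# Rough arithmetic behind the vendor-side RFP economics described above.
# All dollar figures are the interview's estimates, expressed as
# (low, high) ranges; nothing here is an audited number.

rfp_response_cost = (4_000, 10_000)   # manpower to answer one RFP
followup_cost = (15_000, 20_000)      # on-site and customer visits, etc.

# Total pursuit cost per RFP responded to.
total_per_rfp = tuple(a + b for a, b in zip(rfp_response_cost, followup_cost))
print(total_per_rfp)                  # (19000, 30000), roughly the $20-30K quoted

# A strong vendor closes about one in three, so each win must also
# cover two losing pursuits: three pursuits per sale in total.
rfps_per_win = 3
makeup_revenue = tuple(c * rfps_per_win for c in total_per_rfp)
print(makeup_revenue)                 # (57000, 90000), i.e. ~$60-90K per sale
```

That is why the first chunk of every PACS sale goes to recovering pursuit costs before any margin appears.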

Maybe that explains why $15,000 workstations cost $85,000.

That’s a large part of it, but research and development and software application costs add to that bottom line cost as well. Vendors also need to make a slight margin on the sale too, but that’s all negotiable. [laughs]

Consultants don’t come cheap either, right?

The oats that have been through the horse come somewhat cheaper. [laughs]

There is a plethora of consultants out there today, many whose ink is still wet on the business cards they got at Office Max when they got laid off and figure, “If he can be a consultant, then by damn I can too.” Unfortunately, even the societies that deal with radiology and PACS that are supposed to look out for you don’t. All you need to do to be listed on most of these sites is join their organization or pay a monthly fee to be listed. Now there are disclaimers listed, but who really reads them?

Truth be known, there are less than a dozen of us who do PACS consulting on a full-time basis, not as a sideline business when we’re not out looking for a full-time job with a steady salary. But no one knows who they really are, so we all get a bad name when someone screws up. It’s the same way with IT consulting as well. That’s why I always talk with a client at length about their needs and the project before I even consider an engagement.

Three out of five potential clients who call me get their questions answered within the first hour of a phone call. We do that for free. One out of five potential clients I find I just can’t work with. They want to show me their watch to tell them the time. The remaining one out of five I end up engaging with in a project.

Sounds hard to make money turning away 80% of your prospects.

What we lack in financial input, we make up in volume. Eight customers a day and I can almost pay the phone bill! [laughs]

More than 90% of our end-user business is doing our quick and simple PACS Sanity Check. Most places are pretty sure what vendor they want, or at least have it narrowed down to two. We look at the proposals a client has from the vendors and make sure they are indeed apples-to-apples comparisons (and if not, make sure they are by having them re-quoted). We discuss the pros and cons of each proposal with the client, and then, once the client selects their vendor of choice, we help them with contract negotiations, since the contract is unquestionably the most important part of any deal. No muss, no fuss, two weeks and $5K or less, and they and we are both done. While all PACS projects are different, most of the things you do are the same and become templated, so why charge people out the wazoo to reinvent the wheel?

Because you can?

Hey now — you calling me a vendor? [laughs]

You’ve worked on a ton of PACS projects. How did they differ?

That’s like asking me how the women I dated in my youth differed, or what makes Inga so special. Why, everything about her, of course. [laughs] The answer, obviously, is that all women are different, and while each has her own unique advantages and benefits, each also shares many common traits. The same holds true for PACS. All are different, but not necessarily from a system design standpoint. There are maybe a dozen or so templated system designs that vendors start with and then customize accordingly.

The politics of each site vary widely and are probably the most important issue to address. Frankly, designing a PAC system is like choosing from a Chinese menu: workstations from column A, servers from column B, archives from column C, etc. Put them together and you have the system design. Making it work is another story, although if you talk to vendors and customers alike, it all works together like magic, just like DICOM is magic and HL7 is magic. Poof, it all works. Plug and pray... I mean, plug and play. [laughs]

The concept of standards is advanced, but the reality is so far from that it’s not even funny. DICOM is the most non-standard standard ever developed, so much so that every vendor has to offer its own conformance statement: this is what we agree to, this is what we don’t. Two vendors can both consider themselves DICOM-compliant, but if their conformance statements don’t overlap, it won’t work.
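
To make the overlap problem concrete, here is a toy sketch: both vendors are "DICOM-compliant" in the sense that each publishes a conformance statement, but interoperability depends on the intersection of what those statements support. The transfer syntax UIDs below are real DICOM identifiers; the per-vendor support lists are invented purely for illustration:

```python
# Why two "DICOM-compliant" products can still fail to talk: compliance
# means each side documents what it supports, and communication requires
# a non-empty intersection. Vendor support sets here are hypothetical.

vendor_a_archive = {       # transfer syntaxes vendor A's archive accepts
    "1.2.840.10008.1.2",     # Implicit VR Little Endian
    "1.2.840.10008.1.2.1",   # Explicit VR Little Endian
}

vendor_b_workstation = {   # transfer syntaxes vendor B's workstation sends
    "1.2.840.10008.1.2.1",   # Explicit VR Little Endian
    "1.2.840.10008.1.2.4.70" # JPEG Lossless
}

# The only thing the two sides can actually negotiate is the overlap.
common = vendor_a_archive & vendor_b_workstation
print(sorted(common))      # only Explicit VR Little Endian survives
```

If that intersection were empty, the two "compliant" systems could not exchange a single image without a gateway in between, which is exactly the reviewer's point about comparing conformance statements before buying.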

IHE, Integrating the Healthcare Enterprise, is the same way. It’s a great concept, but the execution leaves much to be desired from an ease-of-implementation standpoint. That is why very few vendors have adopted IHE. Ask around and you’ll see. That is also why VNAs, Vendor Neutral Archives, are growing by leaps and bounds.

It all seems to work at RSNA.

You also saw perfect images at RSNA and software that won’t be available until 2012 at the earliest. Anyone who makes the trek to Chicago knows RSNA is an acronym for Real System Not Available. Seeing it working and knowing what it took to make it work are two entirely different concepts. Yes, it all did work by 10 a.m. Sunday, but rest assured, it wasn’t all plug and play. But we are getting much better.

So PACS implementation schedules aren’t real?

I never said that. Just that implementation schedules are an approximation of when you can expect it to be in, not an etched-in-stone date and time. There is so much that can go wrong that you never expect.

I recall one engagement where we lost a month over one 2” hole in a wall. The problem was that none of us knew it was a fire wall. By the time we got the 16 different approvals needed to drill that silly hole, we had lost 30 days. There are all sorts of challenges like that. Radiology information system integrations also have their own set of challenges, requiring the RIS vendor, PACS vendor, and IT department all to be on the same page timetable-wise. That almost never happens.

But it could.

When the moon aligns with Jupiter and men finally understand women, then, yes, it can happen, but you stand a better chance of winning the Lotto or Powerball than that happening anytime soon.

Managing expectations is a huge part of PACS. Unfortunately it’s also one of the biggest areas of failure that the industry has nourished.

PACS has been promoted as a cure-all for everything under the sun, but knowing what PACS can and can’t do is paramount to gaining wide-scale, facility-wide acceptance of PACS. Unfortunately there is so much misinformation about PACS that people have a hard time believing the facts.

I go crazy when people say PACS can reduce FTEs. Technically, the FTE headcount may be reduced in the film and file room after a period of time, but the overall FTE budget will ostensibly remain the same. If the money coming out of your pocket isn’t any different, then what’s the real benefit? The same holds true when people talk about time savings with computed and digital radiography (CR and DR) over analog film. Yes, the imaging process can be shortened by 40-60% compared to conventional film, but will you see a 40-60% reduction in FTEs or rooms required? No.

In generating a typical 10-12 minute chest film, you save the time associated with film processing and jacket merging, about two minutes on average. All the other processes — getting the patient in the room, positioning them, and even imaging them — remain the same whether analog, CR, or DR. Ever been with a geriatric patient? It takes longer to get them in the room and positioned than it does generating the film, and that’s after telling him three times, “Turn to the left Mr. Jones — no, your other left” and then having him ask why he has to have this %&^*% x-ray when his #%^#&^ doctor doesn’t know what the ^&*(%*$ he’s talking about. In my next life I’m going to be a gerontologist. [laughs]

You’re on a roll.

There is massive confusion about CR vs. DR as well, and “pure” digital vs. analog vs. digital conversion. In a properly designed department, the difference between using DR and CR is about 30 seconds per procedure on a bad day. From a price standpoint, however, the differences are huge.

A single, moderate-throughput CR reader costing $120K complete can be shared between two rooms, while DR is dedicated to a single room at a current cost of over $300K. $60K per room vs. $300K per room is a no-brainer. Don’t have PACS? Even better — stick with film, because your equipment cost is probably already fully amortized and we’re not Japan, where they pay a premium for images generated digitally. The reimbursement for a general radiographic exam generated in analog or digital form, CR or DR, is exactly the same here in the US, and it takes a whole lot of cases at $1.50 a case in film costs to justify either imaging modality on film costs alone.

I hear my detractors now — blasphemy! Burn him at the stake! What about tech costs? My answer: what about them? If you do two procedures per day or 30 procedures per day, one technologist should be able to handle it all. Volume increases? General radiographic procedures are generally declining in most facilities at a rate of 2-5% per year, with CTs taking their place. Even where there is growth, it’s in the low single digits. So why do you even need CR or DR?

There are a variety of arguments for CR in a PACS environment, but for DR to succeed as well, the price point needs to be comparable to CR or at the very most no more than 15-20% higher. Either that or HCFA needs to start reimbursing for digital radiographic procedures over analog, similar to what they are doing now with digital mammography.

You done yet?

Just call me Howard Beale. “I’m mad as hell and I’m not gonna take it any more!” [laughs] Ok, I’m done — for now.

What do you think are the biggest obstacles to future PACS growth?

Probably getting buy-in from all levels in a hospital. PACS is such a complex sale, yet more than 90% of the sales made today aren’t made based on what is the best technical solution for a facility, but based on political decisions. The squeaky wheel syndrome, so to speak.

Such as?

Ah, yes. Another thing no one wants to ever talk about in public, but we all know how politically correct you and I both are, so do I care? [laughs] We can spend days, weeks, months, or years doing technical assessments on a vendor, but if the chairman of the department doesn’t like the vendor you’ve selected, rest assured, it’s not going in. The same can be said for any number of key players on the “team” whose vote equals 51%. I’ve lost many a night’s sleep over situations like this one — not.

If that is the case, why even bother with an RFP?

I’ve asked that same question of my clients and was chewed out recently for asking if putting an RFP out on the streets was a CYA move for them. In hindsight, maybe I shouldn’t have e-mailed it, but … if the decision is already made for a vendor, let’s do a technical spec, send it out to the vendor of choice, and save a bunch of time and effort on our part, not to mention the vendor’s time.

People will do whatever it takes to save their jobs.

Thank you for the reminder why I’ve been on my own for the past 25 years.

Are you this blunt with clients?

Clients pay me to provide them with informed, objective information and to get them the answers they need so they can make informed objective decisions. If they elect to make a decision that is politically motivated, that is their choice. I still get paid the same amount.

All I ask them is to give me the opportunity to protect them with a fairly tight contract so that when their choice fails, they have some recourse other than pointing to the consultant. Kevin Costner took the bullet for Whitney Houston in The Bodyguard. I’m not paid that much, nor are most of my clients as hot as Whitney either. [laughs] PACSMan singing: “And eye e eye e eye will always love you.”

Isn’t it your job to help them make the right choice?

If you want “the right choice,” call AT&T — I think they own the trademark on those words. Years ago, back in 1986, at one of my last “real” jobs — where I knew I’d have a paycheck from week to week — I tried to sell an AT&T PACS product called CommView that was anything but the right choice. My job and that of my counterparts is to get them information so they can make the final choice, not I. No consultant worth his or her salt will make a vendor or product choice for the client.

Even if that choice is glaringly wrong?

Even if, in my not-so-humble opinion, it’s a wrong choice. There are no wrong decisions, just decisions whose outcomes you wish might have been different. I’ve dated enough women to know that. [laughs] The only wrong decision in PACS is not making any decision at all and playing catch up for the rest of your life.

I let my sons make most of their own decisions as long as their lives are not in danger. It’s how they learn. Many are right. A few had outcomes that we wish had been different. We then discuss afterwards what they could have or should have done differently and what the outcome might have been.

Unfortunately, if you don’t go with what the department chairman or someone in administration wants, what was ostensibly the right decision will turn out as the “wrong” decision and can be just as devastating career-wise as blowing $2 million of the hospital’s money on a dead-end PACS. End users need to take a combination of the Taco Bell and Nike approach — “Think outside the box” and then “Just do it.” [laughs]

Is there a best vendor?

I wish there was a single vendor who was the best solution for everyone. I’d be working for them now. I wouldn’t last long in a structured environment, but it might be fun to try again.

Most vendors offer fairly solid solutions to customers’ needs. Finding sales reps who can properly articulate those solutions specifically for the client is another story, however. Vendor A’s products may “bring good things to light,” but if they can’t articulate how their product meets the client’s needs better than Vendor B’s, Vendor B will no doubt get the sale — provided, of course, that Vendor B also has the political support behind them. There are no bad vendors or products — just lousy product specialists and sales reps and customers who don’t listen.

And politically correct consultants.

As Curly in the Three Stooges would say, “Why sointenly!!” I’ve had some great discussions with some of the men in my men’s group at church on how Jesus was both politically correct and politically incorrect depending on the situation He was in. I like to think I’m the same way. [laughs]

In this business, you have to believe in God because you’re always calling out His name in one way or another, thanking Him when things go right and invoking His name in so many different ways when things don’t. [laughs] Would you believe I’ve run a sports ministry in my spare time for the past ten years now? If I were to hit the lottery, I’d probably be a stay-at-home dad with my sons, run the sports ministry full time, and do PACS consulting as a sideline.

HIStalk Interviews Charlie Harp

March 5, 2010 Interviews 2 Comments

Charlie Harp is CEO of Clinical Architecture.


Tell me about the company.

I started Clinical Architecture about two and a half years ago, with our focus being primarily — I always use the term “plumbers” of healthcare information. What I mean by that is having worked in the industry, both from an end user perspective when I did clinical trials and hospital labs, to when I spent time at Hearst Business Media at First DataBank and Zynx on the content side. I’d worked with a lot of vendors that worked with a lot of organizations and I really thought that by creating a company like Clinical Architecture, we could help be a catalyst to improve the effectiveness of the implementation of content in the healthcare environment.

With Clinical Architecture, we started out doing mostly consulting, where we would work with content vendors and system vendors and end users to really focus on the problems they were having either with content design, integrating of clinical content or terminologies into their environments, or helping to manage some of the unexpected aspects of working with clinical content in a live environment. As we went through that process, we started seeing the same patterns over and over again. What we do as a solution provider is we try to provide what I call “plumbing solutions” to help with those common patterns of disconnection or dysfunction.

Most doctors do not find clinical warnings useful, even overriding most of the allergy warnings. How can that content be better used?

From my perspective — and I’m a simple country programmer, I’m not a clinical person — I’ve just been in this field for a while and I’ve seen a lot of dissatisfaction. I think a lot of it revolves around the fact that when you look at the content we have today and you look at the way that these healthcare systems have evolved, the content, when it was built and the structures that are still in use today, were built for a different audience. A lot of the clinical content that exists today, and a lot of the terminology, started out to support retail pharmacy, at least in the United States.

What happened is those content modules migrated their way into the inpatient and practice settings. It was put in front of physicians, but how a pharmacist, especially a pharmacist in a retail setting, deals with interruptions and alerts is extremely different from how a physician or a nurse does, because their time constraints are very different. A physician is right there at the point of care trying to make decisions, and so if something isn’t really relevant, it’s not going to be perceived as useful.

I think that is where we are today. There’s another aspect there, too. Just to clarify, I think part of it was the electronic medical record was really not something that people felt was comprehensive. In fact, I remember speaking to a physician once who said that, “You know, Charlie, 40-60% of what I know about the patient is not in the computer — it’s in my head.” I think that’s been true for a long time.

I think that we’re at a point now where these electronic medical record systems are evolving. Whether they’re being driven by some of these incentives coming out of the Administration, or whether they’re being driven by just the need to resolve a lot of these medication problems or medication errors we’re having in healthcare — I don’t know, but I think the industry’s evolving. EMRs are evolving, and so that creates an opportunity for the clinical decision support content to evolve to be much more relevant. I think we’re just at the beginning of that era.

Do you think software vendors were too complacent in just letting the content providers tell them what they had and then just throwing it on the screen and calling it clinical decision support?

I don’t know that I would use the word complacent, but I think you’re right in that the content providers maybe didn’t have the content that the system vendors needed. The system vendor is in a Catch-22 because they develop this next-generation, cutting-edge system that requires a certain complexity and content to drive it and there’s nobody providing that kind of content. They’re on the hook to do it.

I’ve worked for content companies for a long time and it’s a lot of work. Building content and managing terminologies is definitely non-trivial. Clients might not like your content, so they might want to have the option to switch. You’re almost forced into a least common denominator position where you have to accommodate what’s available.

I’ve worked with a lot of content provider folks over the years. They’re all very noble and they’re trying to do the right thing and they work real hard, but when you try to introduce some new content, if there’s no place for it to go because no system is advanced enough to utilize it, that’s a Catch-22 as well. Developing new content is a non-trivial investment, and developing systems that are being driven on new content that doesn’t exist is a non-trivial proposition.

I think between the two, that’s where we’ve been stuck for a little while. I’m hopeful that a lot of the changes that are happening in healthcare today around improving the medical record will be the tipping point that we need to move forward.

I’ve always argued that there’s an attitude built into clinical decision support that says physicians can’t be trusted to determine what they need or find useful. What do you think would happen if individual doctors had the option to either detune the warnings for themselves personally, or to turn them off by saying, “I don’t want to see any more like this”?

I think that there’s nothing wrong with that. Once again, I’m not a clinical person, but I think that a lot of clinical decision support is local. I think the objective of a content provider is to provide a framework and a starter set, with the ability for the local population to tune certain things out. I mean, when you look at clinical alerts, there are certain things that should never be done. Whether people should be allowed to turn those off — and then woe be unto them if it results in something negative — is another question.

I guess that’s the option, but I think that the first thing we need to do in clinical decision support is make alerts that are much more contextually aware, because I think physicians will be less likely to turn something off if every time they get an alert, it is relevant to their patient and to what they’re doing. The other thing I think happens sometimes is you’ll have people who turn alerts off — so they give the physician the ability to just turn off a particular alert — because it’s not relevant to that patient, but it might be relevant to the next patient.

I think a lot of it has to do with our ability to apply alerts in a granular and effective way. If we can get better at that, then the physicians will be less likely to turn it off because every time it fires they say, “Wow, I’m glad I got that alert.” I don’t think that is the reaction you get today.

Do you think the content providers are concerned about their own legal liability, or some FDA interpretation that maybe they’re offering what sounds like a medical device? Do you think their incentive is to just alert for everything because not doing that could get them in more trouble?

I think that whenever you’re dealing in this space, there’s concern about liability. I think some people err on the side of over-alerting in order to protect themselves. I don’t know that I would say that’s necessarily the case, but I think that part of the problem is when people create clinical content, they’re getting the content from somewhere.

For example, if I’m going to the package insert to drive all my clinical decision support, well, the package insert is really something that is designed around liability. So one could argue that if something’s purely based on the package insert, it’s going to be alerting more often than not. The issue, though, really has to do with what sources of information are available.

I remember once I was somewhere and somebody really wanted accurate pediatric dose checking, including for neonates and preemies. The problem, of course, is a lot of the data you get to drive clinical decision support comes from human trials and case studies. Not all populations have case studies, and so sometimes it’s hard to come by really good information. So then you start leaning more on expert opinion, and sometimes people are concerned about how their advice is being carried forward into the point-of-care setting. But just in general, I don’t know that I would speak for how the content providers are positioning themselves in their content.

Given that whole emphasis on some grading of the studies and weighting of the evidence and the severity and all that, do you think there are opportunities for individual providers to create their own content based on local experience? In other words, I go to a doctor and say, “Here are some things we might do. Tell me what would be important to you in your practice.” Or is there any new use of content that isn’t the same old stuff? Is anybody doing anything creative with it?

I think what’s going to happen is … I mean, even when I was in my last job at First DataBank, we were seeing a lot of people who would use our content as a starting point, but they would also have you build and localize it. We actually built tools around localizing, and I think a lot of the system vendors and content vendors are moving to accommodate that.

I think it has to do with the fact that if you’re at a large teaching institution, you might have some additional rules in clinical decision support that you want to use to augment what’s coming from your vendor. You might also want to be able to turn off some of the things that are coming from your vendor because you just don’t agree with it. I think localization is something that’s going to start to happen.

But the other thing I think is going to start to happen is, as we improve the plumbing and lower some of the barriers that terminology creates, it creates an opportunity for other folks with new content — whether a university or a particular collaborative group — to put that content out there and make it available to people.

We’re not there, I don’t think, today. But I think that’s something that could happen in the future. I do believe that clinical decision support is local. In fact, when you look at the ARRA Meaningful Use, I like the introduction of the five rules because it almost says, “You really should be thinking about local and what kind of local things you need to look for.”

There was that point, maybe ten years ago, when Eclipsys bought the BICS rules from Brigham and Women’s. That never seemed to go anyplace and neither did the idea that there’s some open standard where individual providers can exchange Arden Syntax or whatever they’ve written their rules in. Is that ever going to catch on, that people will use rules that were created elsewhere? Or share rules via some sort of exchange process?

I’d like to see it happen, but I’m not sure it’s going to happen in the public domain. The only reason I say that is, healthcare is constantly changing and building content is not easy. Public sharing of content without some way to fund it and some dollars to help cover the costs of its development — while I think it’s a great idea and I would like to see it happen, I just don’t know if it’s financially feasible. What happens is you create content, that content has a half-life, and you need to keep updating and maintaining it.

The other problem is, if there’s some kind of sharing environment that’s driven by Sponsor A or Vendor A, then Vendor B, if they’re a competitor, is not going to want to promote it and be compatible with it, because it’s their competitor’s standard. In fact, with Clinical Architecture, one of our goals was to be a neutral plumber — to maybe help bring about a time when people can have general plumbing for sharing clinical decision support content without having to worry about whether or not a competitor is behind it.

You mentioned the Meaningful Use proposed criteria. Do you think those proposed criteria and the reimbursement guidelines say enough about clinical decision support, and about the specific taxonomies and architectures needed to go forward with it?

I think they started us down a path. The way I read the direction of Meaningful Use is that the computer needs to do some of the heavy lifting. I think that there are things that computers and software are really good at when it comes to minutiae, tracking details, and correlating data, as long as they have meaningful information with which to do it. I think that moving in that direction is good.

There are a lot of different definitions for what clinical decision support is. So I’ll be kind of narrow and say, “computerized rule checking around patient context to provide advice and alerts about something going on with a patient.” I think that there’s still a ways to go to evolve clinical decision support to be meaningful, because I think if we just perpetuate what we have today, you’ll still have people who get frustrated and turn it off.

I think when it comes to terminologies, we’re a ways off from getting to the point where we don’t have multiple terminologies out there. Part of that is because the rate of change in healthcare is slow, and a lot of it has to do with the criticality of these systems. You can’t just unplug one and plug another one in, as I’m sure you well know. When you introduce something new, it takes time.

A lot of these third-party terminologies are pretty ingrained, whether they’re a local terminology that someone builds or one they bought. So getting them out, if that ever happens, will take a while. The standard terminologies promoted in ARRA, while I think they’re a good start, need to evolve to really meet the demands of what we’re going to be asking them to do as well.

Are we relying too much on content that was designed for billing, like ICD-9 and CPT codes, and trying to make it into something it was never intended to be?

Oh, I think so. I think that when you look at the general codes that are used most in healthcare today — the CPT, the ICD-9, and in the drug world, the NDC code — those were all really built around transactions and billing. But that’s kind of our roots. That’s where the US healthcare system came from: transactions and billing. What we’re trying to do is evolve into systems that are based upon a clinical framework.

If you look at other countries that didn’t come from that background, it’s very interesting to see how their systems have evolved. They look different from the systems we’re used to here in the States because they don’t have those roots. Whenever you try to take data from the States and introduce it into those other countries, they go, “Well, why does it look like this?” It’s because of where we came from.

I think that the danger is when we create new terminologies that are designed to be clinical terminologies, we need to evaluate the characteristics of those terminologies. I’ll give you an example, and I think I have a blog entry on this.

ICD-9 has a bunch of concatenated terms in it — conditions or problems that are put together in a particular order because it facilitates billing activity. Well, when you look in SNOMED, we’ve started introducing those same types of concatenated terms. The problem is the concatenated term is very difficult to leverage when you’re doing clinical decision support, because you’ve created a term with multiple meanings. Any time you have a term with multiple meanings, things get a little fuzzy.

It almost looks like, in some of these cases, the SNOMED CT codes — and I don’t know this for a fact — but it looks like they’ve shadowed ICD-9 to a certain degree. If SNOMED is going to be our clinical terminology, we shouldn’t be doing that. We should be saying, “Well, we want to treat this differently.” But it’s difficult because you can’t be precognitive and know everything that’s going to happen. There are a lot of unintended consequences when you create a terminology, but I do think that a lot of the problems we have stem from where we’ve come from with these terminologies.

You would think that the one giant test bed that’s out there that could maybe take a different direction would be the VA, since they don’t worry so much about billing and they’ve got clinical systems that are mature. Are they doing any work that would be innovative that might find its way into the average hospital?

They may be. I haven’t done a ton of work with the VA in these last couple of years, so I don’t know that I could give you a meaningful answer in that respect. They may not be encumbered by some of the issues that a lot of these other systems are.

I actually think a place where things would be interesting is some of these newer systems, especially some of these Web-based systems that don’t come from a background of being charge capture systems. That’s one place to look. I think looking at the systems that are coming from other places, just to see what patterns they’re using to capture and deal with clinical information, I think, would be interesting.

Is metadata and semantic interoperability the next hurdle?

I’ve spent a lot of time in the last year working on interoperability, and I think that there’s a lot we can do with interoperability to get us to a next evolutionary step. I think, ultimately, if you have systems that are using whatever code they’re using locally and are loosely coupling to a standard for data exchange, interoperability becomes a lot easier.

For example, say I’m using content vendor A’s codes or my own local codes, but in my system’s dictionary I also have the RxNorm RXCUI. When I go to exchange data, if RxNorm’s RXCUIs are comprehensive, then I can exchange data meaningfully. I think that between where we are today and that utopian future, there are some roles for metadata in understanding and correlating terms.
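A minimal sketch of that loose-coupling idea might look like the following. The local codes, dictionary layout, and helper functions here are hypothetical illustrations (the RXCUI value is shown only as an example of the RxNorm identifier each local entry would carry):

```python
# Hypothetical sketch: two systems each keep their own local drug codes,
# but both store the RxNorm RXCUI alongside each local entry.
# Exchange happens on the shared RXCUI, not the local code.

# System A's local dictionary (local code -> record with RXCUI)
system_a = {
    "A-00123": {"name": "lisinopril 10 mg oral tablet", "rxcui": "314076"},
}

# System B's local dictionary: different local codes, same RXCUIs
system_b = {
    "B-98765": {"name": "Lisinopril 10mg Tab", "rxcui": "314076"},
}

def export_rxcui(local_dict, local_code):
    """Translate a local code to the shared standard before sending."""
    return local_dict[local_code]["rxcui"]

def import_rxcui(local_dict, rxcui):
    """Find the receiver's local code for an incoming RXCUI, if any."""
    for code, rec in local_dict.items():
        if rec["rxcui"] == rxcui:
            return code
    return None  # unmapped: would fall back to manual review

# A sends a med; B resolves it to its own local code via the RXCUI.
sent = export_rxcui(system_a, "A-00123")
received = import_rxcui(system_b, sent)
print(received)  # B-98765
```

The point of the sketch is that neither system ever has to understand the other's local codes — only the shared identifier travels on the wire.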

I think that the UMLS Metathesaurus and RxNorm are a good start, but a lot of people go to them thinking they’re the ultimate key to fixing the problem, and they’re really not — at least, not yet. It’s one of the reasons why we built our SYMEDICAL product. We were working with a bunch of clients and seeing the same pattern over and over again where they’re trying to exchange data.

There are really two problems today. One is that mapping from one terminology to another is an extremely manual and time-consuming process. Because of that, the second problem occurs: if there’s something you don’t want to do because it’s long and painful, you put it off. So a lot of people will build a map and it’ll cover a certain percentage of the things they’re trying to exchange, and then two years later the map is severely out of date because code sets are constantly changing — much more so than people think they do.

One of the reasons we built the product we built was to let the computer do some of the heavy lifting. It really leverages metadata and contextual parsing to do that, and it actually does a fairly good job. When it comes to interoperability, having tools out there in the public domain like the UMLS Metathesaurus and RxNorm helps. Having tools that help domain experts at institutions or vendors and facilitate the mapping process also helps. Then staying on top of it — not letting things go for a year before you get it updated again — matters, especially now that we’re leveraging the electronic medical record and really pushing it more and more.
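As a toy illustration of letting the computer do some of that heavy lifting — not the actual SYMEDICAL approach; the codes, descriptions, and matching heuristic below are all invented — a candidate-suggestion pass might rank target terms by token overlap so a human reviewer only confirms proposals instead of searching from scratch:

```python
# Toy computer-assisted terminology mapping: propose candidate matches
# by token overlap (Jaccard similarity) between term descriptions.

def tokens(description):
    """Normalize a term description into a set of lowercase tokens."""
    return set(description.lower().replace(",", " ").split())

def suggest_mappings(source_terms, target_terms, threshold=0.5):
    """For each source term, propose the target term with the highest
    token overlap, keeping it only if it clears the threshold."""
    suggestions = {}
    for s_code, s_desc in source_terms.items():
        best_code, best_score = None, 0.0
        for t_code, t_desc in target_terms.items():
            s_tok, t_tok = tokens(s_desc), tokens(t_desc)
            score = len(s_tok & t_tok) / len(s_tok | t_tok)
            if score > best_score:
                best_code, best_score = t_code, score
        if best_score >= threshold:
            suggestions[s_code] = best_code
    return suggestions

# Made-up local and standard dictionaries
local = {"L1": "aspirin 81 mg tablet", "L2": "metformin 500 mg tablet"}
standard = {
    "S1": "aspirin 81 mg oral tablet",
    "S2": "metformin hydrochloride 500 mg oral tablet",
}
print(suggest_mappings(local, standard))  # {'L1': 'S1', 'L2': 'S2'}
```

A real mapping engine would weigh far richer metadata (ingredient, strength, dose form), but even this crude pass shows why automation plus human review beats purely manual mapping for keeping maps current.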

Who is your customer, and has their interest changed with the emphasis on clinical systems and Meaningful Use?

Our customer — like I said, we’re kind of across the field. We work with content vendors, we work with system vendors, and we work with end users. But when it comes to the changes I’ve seen in the last year because of the HITECH initiative, I think a lot of it has to do with driving or prioritizing certain things to the top. There are a lot of things we should do as a healthcare industry. There are things that we should do, and then there are the things that someone’s willing to potentially pay us to do. What has happened is, number one, it has raised more awareness, and it has given people an incentive to accelerate the process, which I think is good.

For example, we also help people implement content and get decision support up and running. In the last year, we’ve seen a lot of people who previously didn’t do decision support and didn’t have structured terminologies who are now really feeling the pressure to do so — which is good, because before, everything was free text, and if people wanted to do checking, they really couldn’t, at least not in a computer-assisted way.

I also think when it comes to interoperability, there are people who probably don’t want to deal with it. This forces the issue, which is also good. We’ve still got to sift through and decide about standards and formats and all those things, but putting the notion of interoperability forward is good because ultimately, for me, it drives one of Clinical Architecture’s primary goals, which is making healthcare more effective and improving patient safety. With interoperability the way it is today, there’s a lot of manual re-entry, a lot of free-text information, and a lot of potential for clerical and human error.

Whereas, if you can increase the fidelity with which systems communicate with each other, and those codes are being exchanged more accurately, you increase the efficiency of the people that are actually trying to manage the movement of data. You increase the accuracy of the data that’s in the patient record. Therefore, when you do clinical decision support, or when you’re looking at things about the patient, you’re actually going to have a better picture of what’s going on. No clinical decision support will help you if you don’t have any coded data that can drive it.

Any final thoughts?

There’s a lot of people that build really good systems, and there’s a lot of people that build a lot of really great content. What I try to do with my contacts in the industry is to push them a little bit and say, “You know, I think we could do better.” I think there are things we can do, and it’s not easy.

Another quick sidebar: I think one of the things people aren’t really ready for is that if we ratchet up the electronic medical record, there’s a lot of housekeeping. In the past, an electronic medical record for a lot of systems was probably an episodic, transitional thing where the patient comes in, they leave, and you might pitch the electronic record until they come back the next time and fill out the form again. But once you start persisting that record and relying on it over time, there’s a lot of housekeeping that goes along with it. There’s a lot of cleanup because the codes are always changing behind the scenes. Information’s always changing, rules are always changing.

I think the next big thing that’s going to happen to the industry is once they really start persisting and dealing with the reality of a medical record that’s coded and structured, they’re going to have to deal with “how do I make sure that it’s accurate as time goes by?” I think that’s going to take a village. That’s where interoperability comes in, because ultimately, your electronic medical record is not just your doctor’s office, not just your pharmacy, not just the hospital. It’s the meaningful combination of all those things put together. It’ll be interesting.

HIStalk Interviews Trey Lauderdale

February 25, 2010 Interviews 3 Comments

Trey Lauderdale is chief innovation officer of Voalté.


Tell me about Voalté.

We are a startup located in Sarasota, Florida. We develop communication software and solutions for point-of-care physicians. Our whole goal is to integrate voice over IP, alarms and notifications, and internal text messaging, all on the next generations of smart phones like the iPhone and other devices.

My background is with Emergin in the alarm management world. I worked there for a few years, primarily in sales. At Emergin, you view the world as input systems, or systems that send alarms and notifications to your middleware server, and output systems, or systems to which we can dispatch those alarms and notifications.

Just stepping back and being new to healthcare at the time, I looked at it and did an analysis of the market. What I realized was the input systems were really going through this growth period of somewhat revolutionary change where systems were becoming more and more advanced. If you looked at the infusion pumps from 5-10 years ago compared to infusion pumps that were being sold at that time, you have these smart pumps coming out that were producing more information. Physiological monitoring systems, nurse call — they were all becoming more advanced.

We received all that information, and there was more and more of it, but then I’d look at the output systems and the phones that the nurses, the docs, and everyone else were carrying. A lot of times, we’d go in and be integrating this unbelievably complex physiological monitoring system to a pager, with just a line of text coming across and no variable ringtone, or to these legacy DECT phones or voice over IP phones.

The nurses and the docs would keep complaining and saying, at that time, “It’s 2008, why do we have to carry this bulky phone, these antiquated pagers?” They would bring in their own personal PDA because they wanted to run all these different applications that were coming out on their BlackBerry or iPhone or other device.

In March of 2008, Apple released the iPhone software developer kit. It just so happened that same day I was at a meeting at Miami Children’s Hospital where the nurses asked me to come in. They were looking for a voice over IP system to purchase for their hospital. I did a quick presentation — these are all the devices Emergin can send to. There was just a look of disappointment on the clinicians’ faces and they weren’t happy with their selection.

That night, I went home and read about the iPhone SDK and a light bulb went off. Why can’t clinicians have one device, one of these smart phones, to handle all of their communication needs, built specifically for their workload model?

A few months later, I ended up leaving Emergin and creating and starting Voalté, August of 2008. At the time, it was me. I found a software developer and we started the company. We took the leap of faith and I quickly realized how unbelievably difficult it is to start a company. Of course, my luck — August 2008 was right when the economy completely tanked out.

I was talking to hospitals. I was talking to angel investors, venture capitalists, and everyone said, “We love your passion. We love the idea you have.” At the time, I was 26 years old. “We’re not going to trust you to start a company. You will not be successful. You’re not going to get this to work.” After about three months of trying to get things going, I was at a point where I didn’t think it was going to work. I was going to have to get a job. I had burned through all my savings. I was living off credit card debt.

Through a mutual contact at the Center for Entrepreneurship at University of Florida, I was put in contact with Rob Campbell. Rob is an interesting individual. He can best be described as a serial entrepreneur. Back in the day, he actually worked for Steve Jobs at Apple in the late ‘70s. He was part of the marketing group and he helped build the entire market for Apple’s software division. He then left Apple and created a company that developed a couple of products, one of which was PowerPoint. He started up a lot of companies that have been very successful.

I pitched the idea of Voalté to him. I didn’t know why at the time, but he agreed to come on board as our CEO and help guide us through this progression of the building of the company. He came on board in November of 2008. By December, we were able to raise our first round of financing. We were able to open up our office in Sarasota, Florida. It was really the start of Voalté and the creation of the company.

At that point, Rob was new to healthcare. I had two years of experience, so I was relatively new as well. We went around and — included in this was you — we pulled together a number of what we considered ‘industry influencers’, or people who had a good pulse on what was going on in healthcare. We talked to a lot of CIOs that I knew from my previous relationships at Emergin and we asked, “We’re a small startup organization. We’re building our company. Can you point us to someone that we should model our company after? Our services, our support, our software? Who’s the Southwest Airlines of healthcare? Who’s the Starbucks or the Disney of healthcare?”

Surprisingly to us, we asked about twelve CIOs this and only one of them was able to give us an answer. A lot of them would say, “Well, it used to be XYZ Company until they were acquired by someone.” Or, “It used to be this company, but not any more.” The one CIO that did name a company told us Epic. It just so happened they had just purchased $150 million worth of Epic, so I’d assume they were going to say Epic no matter what.

I didn’t realize at the time that what Rob was really doing was analyzing the market. At least among the customers we talked to, no one was claiming that vision of being the Disney of healthcare IT. So, we started building the organization. One of the first things we knew was that we wanted to provide a compelling customer experience and end user experience to our clinicians and to the people we provided our software to. Luckily, Sarasota Memorial is right here in our back yard, and we were able to meet with Denis Baker. We talked to him about our solution and what we planned on building, and he agreed to let Sarasota Memorial work with us as the first hospital in our ‘Development Partner Program’.

I know the term gets tossed around a lot, but what we really proposed to Denis and Sarasota Memorial was, as an organization, we need a hospital to work with to get feedback. Not only from an IT perspective on how we design our solution, but from a clinical perspective of what the nurses need and want for a communication solution at the point of care. From the very beginning when we were designing the user interface, the way the solution was going to work, we pulled in feedback from the nurses at Sarasota Memorial. We brought in nurses for a nurse focus group. We had a big whiteboard up on the wall and we drew out exactly how the application was going to look and we mocked up drawings. The nurses told us word for word, “This is how we want Voalté to work for the voice communication, the alarms, and the internal text messaging.”

But we also knew that a lot of times, your end users don’t really know what they don’t know. We put a button inside our application called Voalté Feedback, which lets any one of the nurses, the clinicians, the end users hit that button and send a message to us. Things like: the buttons are too close, or this feature isn’t working, or that function is bulky. We went live with the pilot in June, and within a month we had received over 90 feedback messages. These clinicians were telling us non-stop about all of these problems with our solution, from a text messaging perspective, the way things looked. We took all that feedback and completely refactored our iPhone workflow.

A great example that was told to us was, a lot of the nurses came back and said, “We love the text messaging functionality. It’s fantastic, but I can’t read the text. The font size is way too small. You have to make it bigger.” No problem — we made the font size really big. Then we got a message back the next day, “The font size is way too big. I want to see everything on the screen.” I went to the engineers and said, “Guys, we’re stuck. We’re caught in a Catch-22. There’s no way.” They said, “Well actually, we can. It’s an iPhone.”

What we created was a variable font size so nurses, on the fly, can change their font. Also, we save that to their user profile so when they pick up an iPhone, they log in to start a shift, and we load up that component of their profile. It really, at that moment, struck us that we can’t just build a vanilla-flavored application and push it on everyone. We need to take this feedback and customize it for each one of the individual end users. From Day One at Sarasota Memorial, we took that feedback to build our application.
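The on-the-fly preference idea described here can be sketched in a few lines. This is a hypothetical illustration of the pattern, not Voalté’s actual code: a setting a nurse changes mid-shift is saved to a server-side profile and reloaded the next time that user logs in on any shared device. All names here are invented for the example.

```python
# Hypothetical sketch of per-user preference persistence, assuming a
# simple server-side profile store. Not the actual Voalté implementation.

class UserProfile:
    def __init__(self, username, font_size=14):
        self.username = username
        self.font_size = font_size  # points; adjustable on the fly

profiles = {}  # stand-in for server-side profile storage

def set_font_size(username, size):
    # Save the nurse's chosen font size to her profile
    profiles.setdefault(username, UserProfile(username)).font_size = size

def login(username):
    # Load the saved preference when a nurse picks up any shared device
    return profiles.get(username, UserProfile(username))

set_font_size("nurse_jones", 22)   # "font is too small" -> bump it up
session = login("nurse_jones")     # next shift, possibly a different iPhone
print(session.font_size)           # -> 22
```

The point of the pattern is that the preference follows the user, not the device, which is what makes shared iPhones workable.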

Since then, we’ve expanded the pilot at Sarasota Memorial. We’re out to an additional four units there. We have an enterprise agreement, and we’re going to be rolling out across the whole hospital. We signed a second hospital out in California, Huntington Memorial. They went live in December with a similar solution to what Sarasota had. We’ve signed and installed a third hospital that we’re not allowed to announce yet, but it will be coming out in the next few months. We’ve recently signed hospital number four, and I’m currently waiting for a phone call this afternoon that will hopefully bring hospital number five.

From our perspective, it’s a really exciting time to be in this industry. We think we have a lot of momentum heading into HIMSS, and we’re real excited.

I did a survey when the iPhone first came out. Most readers didn’t think it would really have any healthcare impact. What did they miss?

Looking at your reader base — which obviously, I have a tremendous amount of respect for — I think they were all looking at it from a technical perspective: the technical components of the iPhone and whether it would or wouldn’t work in healthcare.

Apple has a way of making products that are unbelievably easy to use. I think that’s really what the game changer was. They reinvented the way you interact with a phone, from the swiping, the pinching, the zooming, and Apple just does a fantastic job of paying attention to very specific details. If you look at everything they build, down to your little icons on your MacBook — how they bounce up and down, the little blue dot appears. It’s just an unbelievable amount of attention to user detail and what users are going to look for, in regards to interaction with the device.

I think the compelling thing that was overlooked was just the user interface and the user experience that Apple is able to provide. I think that was probably the missing component, if I had to guess.

Do you think working on the Apple platform with Apple developers forces you to think more about usability than the average software developer working on a client server-type application?

Off the bat, in the Apple SDK, there are actually whole sections dedicated to conforming to the Apple user interface guidelines. They provide a lot of guidance and feedback on how to make all the applications look and feel somewhat the same. When you’re applying to the App Store, they actually go and review your application. I don’t believe they’ll reject you for moving away from those standards, but they greatly encourage you to move down that path, so all the applications somewhat look and feel the same.

If you look at our application, it looks great. When we go into some of the hospitals to install or go live and we’re there training, if a nurse has an iPhone of their own, they really don’t need Voalté training. Our phone application looks just like the iPhone application, and our text messaging looks exactly like the Apple text messaging, the SMS application. We actually had to stop users from accessing our application during training, because the first class we let go into it went wild using it. It’s just so intuitive.

I think it does, to a degree, force developers to kind of rethink what they can do. You always hear people complain about multitasking, and you can’t run background applications on the iPhone. Having one application open and some of the limitations that Apple puts on you are actually mixed blessings; it makes you think, “Well how do I use all the real estate on the screen, and how do I make sure my current application that’s running is making the most of what tools I have available?”

On the flip side, the iPhone developers probably don’t have much healthcare experience. Is that a problem?

I don’t think it’s a problem, because what we see happening right now is a number of large companies that have a great deal of healthcare experience are just going out and picking up either iPhone developers, or iPhone consulting groups to develop their applications for them.

Epic has released Haiku as their EMR application. Allscripts, they released an application last year. I’m sure Cerner and the rest are following suit. I doubt that they had their own internal developers build that. They probably went out and hired iPhone developers and provided the healthcare expertise. I do think that there are not a lot of startups in this space that have created iPhone applications. Specifically, iPhone healthcare-focused companies like we’ve done, but I see that changing in the next few years.

Overall, I think right now Apple claims 700-800 medical-focused applications. I think the platform in healthcare has an unbelievable amount of momentum, and with the iPad coming out, that’s going to continue to grow. Both the iPad and the iPhone run on the same development environment, so even though right now there might be a shortage of iPhone healthcare-specific developers, I think that number is going to continue to grow exponentially. You don’t need to be a healthcare expert to develop on the iPhone. You need to be an iPhone developer, and there are plenty of those out there in the field right now that can get picked up by healthcare companies to help develop the application.

How do you help solve the problem of both device and information proliferation for doctors and nurses?

I think the first component is really understanding the workflow of where that information’s being generated from and the different criticality of the information. If you look at where these pagers and devices were receiving the alarms and notifications from, you had a wide degree of things being blasted to caregivers. An example might be they’ll carry one pager, which is their code blue pager or their rapid response team pager. Then the next device will be a pager that goes off for TeleTracking if they have a patient coming into the floor.

I think the key is understanding all of the information that’s getting blasted to the nurse, and from what different systems it’s getting blasted. Then, creating a workflow model of “these are all the alarms, notifications, phone calls, text messages, that are being received,” and then building and orchestrating a plan around the three components of communication — which are really voice, alarm, and text messaging.

There are things we can do, like associating specific ringtones with different types of alarms. For example, at Sarasota Memorial Hospital, some of the devices they were carrying would go off for a nurse call — a call bell alarm — then another device would go off for a bed call. They had Stryker Smart Beds.

What we actually did on the iPhone was assign different ringtones to the different types of alarms. For example, if someone hits a nurse call alarm and it goes off on the nurse’s iPhone, we play the exact same ringtone, the .wav file that would be played at the central station. A very subtle beep, beep, beep, and they know, OK, that’s a call bell alarm. We blast it across the whole unit, and for a bed call, we play the Stryker Smart Bed .wav file of what that noise is supposed to be. We have unlimited customization and configurability to make sure the right ringtones are played for the different types of information.

Again, you want to be careful with that, because you don’t want a device with 20 ringtones. It’s that careful balance of getting the right number of ringtones and different types of notifications without overburdening the nurse with all this different information they have to memorize.
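The alarm-to-ringtone association described above is, at heart, a lookup table with a sensible fallback. Here is a minimal, hypothetical sketch of that idea; the alarm names, file names, and function are all invented for illustration and are not Voalté’s actual configuration:

```python
# Illustrative mapping of alarm types to the familiar tones a nurse
# already associates with each source system. Names are assumptions.

RINGTONES = {
    "nurse_call": "central_station_beep.wav",  # same tone the central station plays
    "bed_call": "stryker_bed_alert.wav",       # same tone the smart bed makes
    "code_blue": "code_blue_urgent.wav",
}

def ringtone_for(alarm_type, default="generic_alert.wav"):
    # Fall back to a generic tone for alarm types with no custom mapping,
    # which also keeps the total number of distinct tones small
    return RINGTONES.get(alarm_type, default)

print(ringtone_for("nurse_call"))  # -> central_station_beep.wav
print(ringtone_for("telemetry"))   # -> generic_alert.wav
```

The fallback is the design choice that addresses the “20 ringtones” worry: unmapped alarm types share one generic tone instead of each getting their own.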

Do customers perform ROI justification to buy your product?

I think there are a couple of components they can look at for ROI. Off the bat, we can replace a number of devices. The first would be what we consider a legacy voice over IP phone. We could take that price, and there are usually hardware costs associated with that: the charger cost, the external battery cost, and typically, a software license cost associated with that device.

Next are all of the different pagers that are going off. Just off the bat, it’s consolidation of devices. You take a PDA, the pagers, the voice over IP phone; combine them together into one device.

But beyond that, there are a number of studies that have been done, specifically by the University of Maryland. They showed that overall, the average 500-bed hospital loses approximately $4 million a year to wasted communication, which is the telephone tag back and forth between caregivers, people not having the right information to act, or the hunting and gathering of different caregivers.

The second area where we see a return on investment is better utilization of nurse and clinician time. Nurses can access the right information at the right time through our solution, so they can therefore be more effective at their jobs.

The third area that hospitals are interested in, and where there’s more of a long-term ROI, is improvement in Press Ganey and HCAHPS scores. In any pilot we roll out or any hospital we install in, we do an analysis of the hospital, specifically its Press Ganey and HCAHPS results. We look at noise in and around the room and response time to the call bell.

We look at those metrics before Voalté goes live. Then after Voalté goes live, we do an analysis to see if we were able to produce an improvement in those scores. In the long term, that should greatly affect the hospital’s patient satisfaction and patient safety rankings, which would make it a more prestigious hospital and hopefully bring more patients into the facility.

I would think caregivers think it’s pretty cool that they get to turn in their analog pager and be given an iPhone in return.

Definitely. There is an angle of nurse retention. You go into a hospital and you tell the nurse that they’re going to get iPhones. I’d say 90% of the nurses are thrilled, they’re excited. They can’t wait to see it, but you get the flip side of that as well. About 10% of the nurses will actually step back and say, “No way.”

I’ll never forget this. It was one of the most memorable points at Sarasota Memorial, our first installation. We went in for training and I was part of the training team. I love going in and talking with the nurses and getting that end user feedback interaction. We were doing training and there was one nurse who was kind of in the corner while we were doing training. I went and said, “Are you excited? What do you think?” She looked at me and said, “I don’t even use e-mail. There is no way I am using this iPhone.” She puts the device down and said, “I’m not going to use it.” I tried to convince her and she just said… I think she was a few years from retiring and had no interest at all in learning this new technology.

Later that night, I was there during go live. I looked and I saw her and she looked frustrated. She was picking up the phone and she kept dialing a number and she’d slam the phone. I went over and I asked her, “Is everything OK? Is there anything I can help you with?” She said, “Well, I’ve been trying to reach the floor pharmacist all night and I can’t reach him.”

For their workflow model, they have a floor pharmacist who covers a whole tower. At night, you have to send a page to him and you don’t really know if the page got out or if he’s going to be able to respond. I said, “For this pilot, we actually provided the floor pharmacist with a Voalté iPhone. Why don’t you try sending him a text message?” She looked over at me, kind of with a sly face and she grabbed the device. I walked her through, she hit the Quick Message button, and she sent it out in a couple of taps. About two minutes later, she got a response and it was the floor pharmacist saying, “I’ll be there in five minutes and I’ll drop off the meds.”

Typically, it would take her a few hours to find that caregiver. She looked at me and said, “Well, I guess it’s not that bad.” I don’t consider it a complete victory, but it’s about finding those specific users and spending the time to educate them, with concrete examples, on how to use the technology. Even though she didn’t fully embrace the solution afterwards, I think we were slowly starting to win her over. That’s one of the areas in which we’re really attempting to differentiate ourselves as a company: the whole customer experience and end user experience.

Remember at the beginning of our conversation I talked about Voalté feedback? Originally, when we built that, our whole focus was we need to get the best features, the best ideas, from our end users. We want to have unfiltered feedback, from a feature standpoint, from our end users. Then what we realized during the pilots was the nurses started sending us messages back from a support standpoint. They’d send us a message like, “I forgot how to turn myself into busy mode,” or, “How do I add a custom quick message?”

It dawned on us that this is the absolute perfect tool for end user support. The way it works is the nurse, again, hits that Voalté Feedback button and sends a message. We actually have our own Voalté server in the cloud that receives that message and dispatches it to my personal iPhone or our support team’s business iPhones. We receive that message, log in to the Voalté server remotely, and establish bidirectional support communication with that individual caregiver. So off the bat, any user of our software, at any moment, 24/7, can have instant communication, from a support standpoint, with one of the Voalté support employees.

Also, from a technology standpoint, you’re trying to drive innovation and new features. We have unfiltered feedback from every single one of our users, which is huge. It’s kind of the same concept as Twitter. Southwest Airlines, Starbucks — they all have a Twitter account, and if you complain at (@) them, they’ll respond directly to (@) you. We’ve got that same philosophy, that same methodology, but we’re applying it inside healthcare to receive feedback and also support the nurses in the field.
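The feedback-routing flow described in this exchange, a message from the in-app button relayed through a central server and fanned out to registered support phones, can be sketched roughly as follows. Every name here is an assumption made for the example, not Voalté’s actual architecture:

```python
# Hedged sketch of the feedback fan-out idea: the server logs each
# message and forwards a copy to every registered support device.

support_phones = []   # devices registered to receive feedback messages
feedback_log = []     # server-side record of everything received

def register_support_phone(phone_id):
    support_phones.append(phone_id)

def send_feedback(user, text):
    message = {"from": user, "text": text}
    feedback_log.append(message)  # persist on the server first
    # Fan the message out to every registered support phone
    return [(phone, message) for phone in support_phones]

register_support_phone("support_iphone_1")
deliveries = send_feedback("nurse_smith", "How do I add a custom quick message?")
print(len(deliveries))  # -> 1
```

Logging before fan-out is the detail that makes bidirectional follow-up possible: support can look up who sent what and reply to that individual caregiver.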

Beyond that, from a remote monitoring standpoint, we’re actually able to track not only that a message was sent on the device, but any trouble that happens, down to the device level. Remote monitoring of servers happens all the time; people monitor a server and, if something goes wrong, receive an alarm or notification. We obviously do that. We keep a VPN connection to every installed Voalté server. If any one of the adapters or components fails, we get notified.

But we can actually take that down a further level, to the iPhones. We’ve customized and designed our solution so we can look and see if there are trouble tickets or trouble logs on the iPhone. All the nurse has to do is plug it into the charging station; we can connect to that device and reset the firmware. From a remote monitoring standpoint, having these smart phones at the point of care has enabled us to get unlimited feedback from our nurses and end users, and to have really unlimited remote monitoring down to the device.
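The health-check half of this monitoring story reduces to a simple idea: poll every component and alert on whatever stops reporting healthy. A minimal, purely illustrative sketch (component names and the function are invented, not the Voalté monitoring stack):

```python
# Illustrative component health check: given a status map, return the
# names of components that should trigger an alarm or notification.

def check_components(statuses):
    """Return the names of components that need an alert."""
    return [name for name, healthy in statuses.items() if not healthy]

statuses = {
    "hl7_adapter": True,
    "nurse_call_adapter": False,  # a failed adapter triggers a notification
    "voip_gateway": True,
}
alerts = check_components(statuses)
print(alerts)  # -> ['nurse_call_adapter']
```

The same check applied to per-device status (battery, trouble logs, firmware state) is what extends server-style monitoring down to the individual iPhone.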

When we first started the company, it was all about the platform enabling the perfect trifecta of communication, which is voice, alarm, and text messaging. But as we’ve been out in the field and learning from our customers, we’ve realized it’s not just about the technology. It’s also about the customer experience, the end user experience, and we finally have the perfect platform to provide that level of end user experience that the nurses really need, such as the Voalté feedback and the remote monitoring down to the actual device.

We’re pretty excited. I guess you could consider us one of those overly aggressive startups. You don’t start a company just because you want to drive a little bit of change. You start a company because you want to make a dramatic difference with your customers and your end users. We not only want to have awesome technology that’s built specifically by our end users, we want to provide an overall amazing end user experience to our customers.

Definitely, I probably sound like a naïve, startup, 27-year-old guy, but I think that passion is really well conveyed in the way that we speak about our company, we speak about our products, and we talk about our customers. If you talk to any one of the Voalté hospitals we’ve installed at, from the end user, the nurses, to the CIO, they’ll tell you that engaging with us, as a company, is a lot different than any other organization.

I think the reason we’re able to do that is really due to the influence that Rob Campbell and our Chief Experience Officer, Oscar Callejas, bring to the table. Rob is more of a Silicon Valley startup guru, while Oscar’s background is in hospitality. He’s worked in hotels for the last 15 years, managing high-end hotels and organizations. When he came in, he brought that whole view of “it’s about the customer experience.”

When I started the company, that really wasn’t on my mind. I was all about the technology, integrating the iPhone; but they brought that flavor to the organization and I think it’s all coming together really well. I’m really happy with where we’re at as a company.

Last question, and this is the one you knew was coming. Pink pants at HIMSS?

Oh, absolutely. Not only pink pants at HIMSS, we wear pink pants at every single installation. It’s part of the customer experience. We come in and we are the pink pants company. It’s part of that whole thing I was just talking about, where we don’t want to look like other healthcare IT companies, we don’t want to talk and act and feel like them. We want the whole experience that customers and nurses and hospitals go through to be different.

When we walk in for the first day of Voalté training, it’s Voalté day. We come in wearing pink pants and we take pictures of the nurses to put in the application. During clinical training, we’re in pink pants. Go live support, wearing pink pants. People laugh at it, but the nurses know who we are. When they see the Voalté person walking by, they know exactly who that person is for help and support.

At the HIMSS conference, last year we were all kind of sitting around. Little startup, Voalté. We were looking at all the booths and we saw GE with an 800×800 foot booth and dancers and DJs and everything else. We looked at each other and said, “God, how on earth are we going to get any attention at this conference? No one’s going to look at us. No one’s going to even acknowledge that we’re there.” Someone said, “Well, why don’t you wear pink pants?” Everyone kind of laughed, but we looked around and said, “Why don’t we wear pink pants? What do we have to lose?”

We actually e-mailed Inga because I was kind of worried. I was the only one who had actually been to HIMSS before, so I knew it’s a suit and tie event. I was thinking, “Are we not going to be taken seriously?” Then Inga threw it up on your blog, so we really had no choice: we did it. Believe it or not, it’s kind of become a rallying cry for the company. The pink pants, in a way, symbolize what we’re about. We’re different. The experience is different and it’s a lot of fun. You’d better believe the pink pants will be there.

HIStalk Interviews Todd Johnson

February 24, 2010 Interviews No Comments

Todd Johnson is president of Salar, Inc. of Baltimore, MD.


Give me some background about the company and about yourself.

The company was founded in 1999. It has been split in half in terms of our corporate development. In the first half, we were really a healthcare IT consulting services firm and got involved in all sorts of very interesting things, including helping design and implement the technology surrounding the Johns Hopkins point-of-care IT solution, which was a challenger to Epocrates in terms of point-of-care clinical content and medication references, as well as building an EMR at the Centers for Disease Control.

We did a whole wide variety of things, but homed in on a series of products in 2004-2005 that are really focused on inpatient physician documentation and charge capture. Essentially, capturing H&Ps, daily notes, discharge summaries, and consults — reducing transcription costs, increasing physician charge capture, and ultimately benefiting HIM.

We migrated the entire business into the focus around acute care physician documentation and charge capture. We’ve had long success with some of the really large academic medical centers. Now we’re getting success with regional medical centers, community hospitals on the East Coast, and the Midwest.

We’re growing the company organically … the traditional garage shop story. A couple of buddies and I graduated college and sat down and said, “What do we want to do with our lives?”, built a company, and we’re still at it.  We’re growing and we’re having a heck of a good time doing it.

What’s your answer to the problem of getting physicians to document electronically?

It all boils down to physician adoption. When we started our technology solution that we now call TeamNotes, I think we were very lucky in that we were extremely naïve about physician documentation. We rounded with physicians for months and noticed a couple of things. 

We noticed that there’s a wide variety in how physicians like to document, in terms of their workflow. Some like to take notes on rounds and sit down and dictate later. Some like to do their notes while they’re on rounds and do their billing later.

We wanted a system that supported a wide variety of workflows as well as a strong user interface. I think paper is seen as a naturally crappy way to document, but I think its benefits are overlooked. It’s fast and very accessible. If you’ve got your daily notes rolled up in your pocket, as an attending, it’s a very quick thing to access and update them. It certainly falls short in terms of legibility and distribution to others.

What we tried to do was focus on current practice. What were the really good things about paper?  We built our entire platform — in fact, our entire corporate culture — around physician adoption. 

I think, traditionally, most EMR providers look at physician documentation and think that perhaps the primary incentive is payment. Payment is clearly an obvious incentive, but I really think that speed is the number one incentive. That becomes the barrier. You have to put in the hands of the physician something that is fast and effective. 

If you can do that, then the other clinical and financial outcomes occur as a result. But by focusing on speed first, that’s how you harvest physician adoption.

Most of the companies out there started with an emphasis on billing.

Yes, and we’ve been doing CPT coding and physician charge capture for ten years. It’s interesting when you look at the CPT guidelines — how do you make that into a note? You get a lot of feedback over the years if a note is designed too much by a compliance group, particularly if you go into a hospital.

Let’s just say they’re all on paper. Go into a hospital that’s been RAC’ed by the OIG. You start to see these paper templates that have been designed by billing staff that clearly have a design towards CPT guidelines and compliance with CPT guidelines.

The general sense you get from a lot of the attendings is that you're taking something that was originally intended to be a communication from provider to provider about the status of a patient and turning it into a billing process. The question is, how can you automate billing, CPT charge capture, and PQRI capture, but at the same time produce something that's a meaningful document, in terms of communication, from one provider to the next?

I think that’s why you’ve seen a low adoption rate. A structured documentation tool –  certainly in general medicine — it’s because they don’t tell the story. They need to tell the patient’s story. How do you tell the story on an admission note and simultaneously extract the location, quality, duration for a very complex case? I think that becomes the nuance of designing a documentation solution that works.

When you look at what is important to physicians, what’s the relative importance of application design versus usability versus the form factor that they use?

I think it’s the critical piece. Application design, for us — again, going back to the genesis of this software — we assumed we knew nothing about physician documentation. So rather than building a physician documentation tool, what we built was really a tool kit. What that means is our customers can create any form they want in Word or Visio or Adobe or Excel, whatever tool they’re comfortable with.

Then we essentially overlay the clinical data from the EMR onto those forms. That process of creating your H&P and your daily note and your discharge summary, as well as designing the workflow between those notes — that is the heart of the process. That is the number one reason that we’ve got happy doctors running around and we do these big bang implementations covering the majority of discharges across multiple facilities in a single week. It’s because the physicians are involved in the user interface and the user design.

In terms of form factor, I guess I interpret that to be a question, really, of devices. We see a wide variety of devices. We had originally designed TeamNotes for the tablet PC environment. We thought tablet PC was going to be the winning platform for acute care documentation. I think what we’ve learned is that in some instances, that’s correct. Some doctors like a tablet PC … like it a lot. 

Others prefer to dictate, and you just drag the dictation in on a desktop. Others use laptops on wheels. I think what we find across our different customers is different strokes for different folks, although tablet PC probably makes up less than 15% of our customer profile, which, in retrospect, makes sense. But I think I would have been shocked if you told me that five years ago.

You mentioned the difference between a form metaphor versus a screen metaphor. Why doesn’t everybody do it that way?

I don’t know. I think that if you were to survey doctors that have tried structured documentation and not been happy with it, you’d probably end up with a lot of feedback along the lines of, “Well, it took too many clicks and it was too onerous to drill down.” That’s the type of stuff you would hear.

A form metaphor works extremely well, as you can see the entire note on a single screen. You might have to scroll, you might have to jump around on it, but we provide navigational aids for it. It’s a very natural environment and it’s one that folks have been used to using for a long time. It works. 

I think the real point is that in either environment, you really have to capture structured data. A form is nice because it's easier to look at, easier to absorb, and easier to edit. But at the same time, if you can capture structured data from it, you're serving the purpose of really contributing to the electronic medical record, automating coding — all those other things.

I would assume a non-form-based physician documentation solution could work. It could work well, so long as it's designed to be very, very fast for the provider and easy to update and get into.

The thing about acute care, as you well know, is you're receiving information throughout the day. It's not like you're sitting down and just building out your notes start to finish, then signing them and moving on to the next. Certainly some providers work that way, but more often than not, we'll see providers start their notes in the morning, go on rounds, update their information while they're on rounds, and maybe sit down and complete them later. They're always jumping back in. The navigation of the application needs to support that workflow pretty well.

What about the problem of having so much documentation captured that the important stuff doesn’t stand out?

I guess we learned, with our customers, that documentation is not something you start and finish. For customers that do it right, documentation is a process of continuous improvement, in clinical terms as well as financial and administrative. I think we have seen some customers begin to design documents that become too detailed, or contain so much information that the important details get lost in translation.

I think you need good organization and a good, continuous dialogue about how to make these better, and then you provide the tools for rapid turnaround on that. I think one of the things that's really fascinating about a Salar implementation is that it's not uncommon for us, for instance, to take a service line live after spending maybe two or three weeks designing their notes with them and getting them to the point where they own that note. If they have agency over their daily note, it's better.

But you don’t get the really great feedback until after they’ve gone live. So you go live on a Monday morning, and you get this wonderful feedback from the physicians on rounds. Our process is that we modify the tablets. We put them into production on Tuesday. Then on Tuesday, the physicians now see that they can impact the solution, that they have ownership of the documentation, but that the system supports rapid cycles and rapid iterations.

By the end of the week, you’ve arrived at a place where you’ve got clean, concise, quality notes that are good for patient care, but also good for efficiency and timing and that support the billing process. That rapid turnaround time is really important. Hopefully, it’s honing towards better documentation over time, not worse.

How would you characterize the need for systems to offer that level of on-the-fly changes?

I think that’s one of the reasons why we win business. The speed of, not only the documentation itself, the physician on the unit floor using it — but the speed to provide feedback and changes. It’s absolutely critical. Physicians sometimes get a bad rap for being impatient or just tough to work with. We’ve never found that. I think our doctors have always provided good feedback and they get a good product in a timely fashion.

We’ve designed all of our form design tools for our customers to use, as well as our professional services staff to use. Literally, they are drag-and-drop tools. If you can create a form in Microsoft Word, you can create an interactive clinical note that has integrated labs, pharmacy, and allergy test results. It does CPT coding, captures PQRI, and integrates with the workflow of physician service for carry-forward data in a matter of hours. I think that’s just a huge, huge benefit, and we’ve seen that serve us very well time and time again; and serve our customers very well time and time again.

Sometimes we go into an opportunity with a customer and they set the bar low for themselves. For instance, they'll say, "You know what? We just want to start with daily notes first because we think H&Ps and discharge summaries are tougher." But they will exceed their own expectations and, very quickly after going live, tackle all the major documents that they need throughout their day, and do them in a very comprehensive fashion, because the tools to support not only the creation, but also the editing and migration of those notes, all exist and are pretty easy to use.

Can you give me an idea about what kind of technologies you used to accomplish that?

We’ve always built on the Microsoft stack. We believe very much in Microsoft as a technology provider.

The general concept is that all these structured clinical elements exist in the EMR, and I think more and more, we're seeing a refinement of those. We like the CDA specification, but more specifically, we like CDA for CDT, which is really refined around what structured elements ought to be captured on daily notes, H&Ps, etc.

We’ve got tools that allow you to take any form … let’s assume you’ve created a form in Microsoft Word and you really like the layout … and then can drag and drop CDA for CDT elements. For instance, here’s a chief complaint field, and here’s where we’re going to put some of our family history components, and this is where we want labs. Really, to drag and drop those things much like you would in Adobe Acrobat or Microsoft Visio.  We try to keep it as simple as possible for our customers.

Everything’s revolving around Meaningful Use, which has nothing to do with charging, but clearly hospitals have their own incentive to worry about that.  What are you seeing as the hot buttons for hospitals with regard to charging?

Not so much charging, but documentation, I think, we see as a real big opportunity. 

Meaningful Use really has these two components. One is using certified technology and the other is actually utilizing it. We've been able to demonstrate, time and time again — in fact, with every customer we've acquired — strong benchmarks of use. I think one of the unfortunate things with Meaningful Use, from my perspective — which is probably very different from some of your readers' — is that the bar seems to be set a little bit low in terms of the expected volume: how many notes should be captured electronically and structured, etc.

But I think achieving wide adoption of certified tools can occur. With Meaningful Use, we like some of the standards around interoperability. We hope to see CDA for CDT become, maybe, a platform for interoperability for documents within the hospital walls, which would really promote the use of this EMR overlay solution as a way to achieve physician adoption very quickly.

You mentioned that it’s an overlay solution. How do you convince a hospital that’s already paid to implement Cerner or Meditech or Eclipsys to bring another vendor into the mix?

I think what we’ve found is that many of our customers have tried and failed to use those tools. They’ve failed to achieve real physician adoption. I think a lot of hospitals believe, probably rightly so, that they can get their employee physicians on board, or there’s a subset of doctors that they can get engaged. But the speed of those tools has generally been frustrating to a lot of physicians out there.

What’s the cost of not having it online? What’s the cost of not having a comprehensive electronic medical record? A lot of hospitals invest in a core HIS, and then they struggle with the fact that, “Oh, you know what? I’ve got to purchase an entire silo for my emergency department because they’ve got a much better documentation tool set.”

What if you can use a product like Salar to fill all those gaps, but ultimately contribute to your core EMR? So every time I sign a note in Salar, it’s using all the same interfaces and the notes are ending up in Cerner, Eclipsys, etc. and really contributing to a comprehensive electronic medical record?

I think a lot of our customers had been through that and they see that there are better tools in the market to achieve physician adoption. They see Salar as a vehicle to do that very well. At the end of the day, they’re reaching their goals having a comprehensive enterprise electronic medical record.

How will you take what you’ve learned at sites like Hopkins and George Washington to create an off-the-shelf product and a sustainable business?

The way we’ve designed our application, we’ve got a standard code set of all our customers. The variation is forms. What do the University of Massachusetts forms look like compared to the forms at the University of Pittsburgh Medical Center? We’ve now worked with so many different specialties, so we’ve built this really nice library of content and expertise.

So the question, I think, really, is do you package up content for distribution on a wider scale? It's actually a very interesting question, because on one hand, I think content can accelerate the process. If you look at a company like T-System, they've done exceptionally well at developing expert content for the emergency department setting. I believe they've monetized that very well. I believe, simultaneously, though, that the process of designing documentation and designing templates is what achieves physician adoption.

By boilerplating content for distribution, you miss an opportunity to really engage the physician and get them on board. I think that's something we're working on. While we don't see ourselves today dropping in plug-and-play content — you know, here's your trauma content, or your nephrology content, or cardiology, or internal medicine — we see more of a dialogue with our customers that says, "Here are internal medicine notes from four different hospitals. What do you think? Pick and choose the pieces that you think are going to be good for you." We may see more of a content distribution model downstream as we grow, but I don't think the barrier is packaging up the solution so much as getting the right channels to market.

Any concluding thoughts?

We’re seeing a really exciting time not only in our direct business, but we’re now seeing EMR companies come to Salar to OEM our products. It begs the question of what’s the long-term strategy for hospitals that have a single-vendor solution. 

We want Salar to be inside every single vendor out there. We’ve announced four different OEM distribution deals where our partners are taking our core intellectual property and embedding it into their EMRs and making that the core platform for their physician documentation moving ahead. But both in our direct sales and our OEM sales, we’re seeing a lot of growth.

I think it’s really fascinating looking back from 2005 forward. When we first created this technology, I think we were way ahead of the curve. Most of the hospital marketplace was scratching their heads and say, “Geez, physician documentation isn’t on our radar until 2010 or 2011.”  Well, the combination of time passing, as well as the government stepping in and increasing incentives to move quicker, is creating a lot of urgency in this marketplace. 

It’s really an exciting time for us. We’re seeing a lot of growth.  We’re already seeing 30% revenue growth over the last year and it’s only a month and a half in. It’s an exciting time for us and we’re just happy to be a part of it.

HIStalk Interviews Doug Arrington

February 23, 2010 Interviews No Comments

Doug Arrington, PhD, FNP is director of the Office of Billing Compliance at UT Southwestern Medical Center in Dallas, TX.

Tell me about your work.

I am the director of billing compliance here at UT Southwestern, which means I am responsible for all the professional billing that's done by our faculty and other healthcare providers, as well as the hospital billing done by our two hospitals and all the research billing. That's what keeps me busy.

It was interesting to me that your background is as a nurse. Does that help you deal with billing and compliance issues?

Absolutely. It helps me in understanding the clinical situation that the providers are in, and the hospital is in, and the researchers as well. It helps me understand what they’re dealing with. It also helps me translate the compliance language, if you will, into an understandable clinical language that they can understand and apply. It makes that leap a whole lot easier for me to do with the providers.

Can you tell me about your team, how it’s set up, and how it reports?

I have a group of individuals that report to me who are compliance auditors. They are certified compliance individuals and certified coders who use the MDaudit tool from Hayes Management Consulting in reviewing the providers in the professional practice. They conduct audits on a quarterly basis of selected providers in their clinical departments that we have here at UT Southwestern. They share their findings with our providers.

I have another group on the hospital side that does basic audits of the UB-04 claim forms to ensure that the claims have gone out correctly. Then I have another group on the research side that we're just starting up right now, and they are in process. We're developing our research compliance tool, which looks to make sure that we have billed a sponsor when we say that it's a research item, and when it's standard of care, that we bill it out correctly to the third party, be it Medicare or Blue Cross/Blue Shield, or whoever it might be.

How would you say your operation compares to that of comparable facilities?

That’s a good question. I would probably say, on average I’m staffed about what most compatible large teaching organizations. I have about 1,550 active healthcare providers on the faculty side. Our hospitals are 100-and-some beds, and the other one is like 235 beds. So on the hospital side; I think for the type of audits that we’re doing on staff for the insurance side, as well.

Then for the research side, we're just bringing that up. I'm just starting on the risk assessment and such, so I think I'm appropriately staffed for the startup phase. Then, as we move further into a more active auditing program, I'll be adding additional staff. I think I'm pretty well staffed for an average organization of our size.

Can you give a high-level overview of the audits that you deal with — the RAC and the OIG audits — and what those mean for hospitals?

On an annual basis, I do what is called a risk assessment, which takes a look at all the different risk areas that we face here in compliance. For example, I do some data mining looking at, basically, my top 15% in volume and cost by payer. I look both at federal payers and managed care payers. Then I also look at some data mining issues that are identified by our Medicare administrative contractor here. Then we have the recovery audit contractors, our comprehensive error rate testing, and our payment error rate measurement. Then we have the Medicare integrity group.
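To illustrate the data-mining step he describes (this is a hypothetical sketch; the claim fields and threshold handling are illustrative, not UT Southwestern's actual process), flagging the top 15% of procedure codes by total charge per payer might look like:

```python
# Illustrative sketch of the annual risk-assessment data mining step:
# rank procedure codes per payer by total charge, keep the top fraction.
# The claim records and field names are hypothetical.
from collections import defaultdict

def top_codes(claims, fraction=0.15):
    """Return, per payer, the codes in the top `fraction` by total charge."""
    totals = defaultdict(lambda: defaultdict(float))
    for c in claims:
        totals[c["payer"]][c["cpt"]] += c["charge"]
    result = {}
    for payer, by_code in totals.items():
        ranked = sorted(by_code.items(), key=lambda kv: kv[1], reverse=True)
        keep = max(1, round(len(ranked) * fraction))  # always keep at least one
        result[payer] = [code for code, _ in ranked[:keep]]
    return result

claims = [
    {"payer": "Medicare", "cpt": "99214", "charge": 120.0},
    {"payer": "Medicare", "cpt": "99214", "charge": 120.0},
    {"payer": "Medicare", "cpt": "93000", "charge": 45.0},
]
print(top_codes(claims))  # {'Medicare': ['99214']}
```

The flagged codes would then feed the audit priorities he describes, alongside the externally identified risk areas.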

So we have a series of audits that are being conducted by external groups that we need to make sure we’re in compliance with. I follow them on a daily basis; go out to their sites — the CERTs, the RACs, the PERMs, and the Medicaid integrity — to make sure that there aren’t any issues.

Then obviously, every October the OIG releases their work plan that I need to be focused on. Throughout the year, they also release opinions and audit results that I need to be tuned in to and take a look at. This applies not only to the professional practice, but to the hospital side as well.

Then there are just general things, like the national coverage decisions that are released by CMS, and the local coverage decisions. I need to make sure that those are programmed into our claims management system.

Every institution has a hotline, and what we encourage our employees to do, any time they identify something that they may be concerned about, is to call us. They can be anonymous on that hotline and let us know that they are concerned about something so we can go in and do a complete investigation.

In a nutshell, that's what I look at in building my audit plan. Those are some of the things that I take into consideration in setting my priorities on an annual basis.

There’s a lot of activity out there by whistleblowers who get a percentage of the proceeds on claims that are eventually proven to be true. Did that change the way, or the scope, of what you have to do?

It certainly changes the way that I do education here at Southwestern. I make sure that in new employee orientation that we place a very high value on compliance and being compliant with federal rules and regulations. Then, for our key billing staff, we make sure that they receive at least 15 hours of compliance education on an annual basis.

We make sure that we provide ongoing education to our general population as well. We try to do everything we can to ensure that our staff who deal with billing and coding, and our faculty members who are actually providing the service, have the necessary tools to be in compliance with federal rules and regulations and that they're following the rules we're supposed to follow. Then we do audits on the back end to ensure that the claims that go out the door are going out correctly.

When the whistleblower-type activity started, it certainly changed the environment within the compliance area and made what we did, or do, on a daily basis much more visible to an organization, once they see some of these large settlements occurring out there. It also helps me in providing education: I can use those settlements as examples of why we place such a high value on compliance here at UT Southwestern.

If you came in cold to a hospital and were asked, "Tell us what we're doing wrong," what kinds of things do you think you would find?

That’s a real good question. Probably, I think the hardest thing is keeping on top of the ever-changing federal rules and regulations that impact payment on a day-to-day basis, because the rules change frequently. Just about the time you think you understand the rules, we have new ICD-9 codes that come out, and we have new CPT codes that come out. CMS releases another National Coverage decision or a local Medicare releases a local coverage decision that impacts what we’re doing on a day-to-day basis. Then I have to make sure that information gets communicated down to the healthcare providers, to our claim payment systems.

That’s what I would look at in a hospital, is to make sure that they have someone who’s monitoring those things on a day-to-day basis to make sure that they have that plugged in and they’re following the rules — the CPT codes and ICD-9 regulations and stuff along that line. That would be the first thing – that I would make sure that they’ve got all that stuffed programmed into their claim payment system. That they can only bill out one of these on a daily basis and they don’t have somebody that has a keystroke error and they enter in 114 of them, versus 14 of them. You’ve got to make sure that you’ve got the appropriate fail safe on the back end to catch those types of errors. That’s what I would be looking for when I walk through the door of an institution.

You mentioned education. How much of what you have to operationalize involves having someone else do something, versus what you can do centrally?

From an auditing perspective, we certainly do a little bit of that centrally. But what we try to do is empower the clinical departments to provide education to their providers through the lens of their particular clinical specialty. For example, in orthopedics, I want them to be able to provide education that is specific to compliance issues through the lens, if you will, of orthopedics; and in pediatrics, through the lens of pediatrics.

Being a healthcare provider myself, a nurse practitioner, I’ve learned that if somebody’s talking to me about a surgical procedure, I really have a hard time relating to that because I’m not that type of a healthcare provider. But if I’m dealing with something that I understand and I can apply it in my mind in a clinical setting to the type of patient I just saw this morning, that has a whole lot more relevance to me. That’s the reason why I try to make sure that when we provide compliance education, we’re putting it through the lens of that particular healthcare provider.

So in maternal fetal medicine, they see it through the lens of being a maternal fetal specialist. Or if it's a urologist, they see it through the lens of being a urologist. They can understand that concept, but they understand it as it applies to them. The beauty of MDaudit is that I can build a case profile based on the risks that we talked about earlier. So I can assess risk in urology that is specific to the urologist, and I can provide specific feedback out of MDaudit that is specific to their practice in urology.

Can you tell me the toolbox of tools that you use and how they fit together?

One of the most important tools, I think, is the case profile that I talked about earlier, which I build for each one of my clinical departments. What it basically does is take that risk assessment that I do on an annual basis and make it very specific to each one of my clinical departments.

The MDaudit tool allows me to make one just for pediatrics, and one for internal medicine, and one for OB/GYN. It allows me to take that clinical lens that I was talking about earlier, and then build an audit tool around that so I can identify a specific area that they may not understand, or is a particular risk area that’s been identified by the OIG so I can make sure that we’re doing it correctly.

If I identify a problem, I can catch it as soon as possible and go in and intervene and educate before it becomes a big problem and we end up having to give back lots of money. That's the absolute beauty of the MDaudit tool: it allows me to take this risk profile, make a case profile that's unique to the individual provider in their clinical specialty, and then audit against that case profile.
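Conceptually, a department-specific case profile works like a filter over encounters (this is an illustrative sketch of the idea, not MDaudit's actual data model; the codes and field names are hypothetical):

```python
# Sketch of the "case profile" concept: per-department risk areas used
# to select encounters for quarterly audit. Codes are hypothetical.

CASE_PROFILES = {
    "urology":    {"codes": {"52000", "51798"}, "sample_size": 10},
    "pediatrics": {"codes": {"99393", "90460"}, "sample_size": 10},
}

def select_for_audit(encounters, department):
    """Pick encounters matching the department's risk codes, up to sample_size."""
    profile = CASE_PROFILES[department]
    hits = [e for e in encounters if e["cpt"] in profile["codes"]]
    return hits[: profile["sample_size"]]

encounters = [
    {"provider": "Dr. A", "cpt": "52000"},
    {"provider": "Dr. B", "cpt": "99213"},
]
print(select_for_audit(encounters, "urology"))
```

The audit findings for each selected encounter would then flow back as the specialty-specific feedback described above.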

In general, what advice would you have for hospitals and practices, related to what you do?

Keeping on top of the ever-changing regulatory environment. Make sure that you are hooked into the listservs that go out, and review the federal publications and what's going on in the courts on a regular basis. There are a number of listservs from the compliance associations and other organizations that will actually provide that information for you, so you don't have to go out and review the Federal Register every day.

Make sure that you have that information at your fingertips, because the one day that you miss may contain the absolute most important piece of information, the one that makes the difference in your organization between doing it right and doing it wrong. If you end up doing it wrong and somebody comes back later and asks, "Why didn't you know about it?", it becomes pretty hard to defend when everybody's looking to you to be the compliance specialist. Keeping on top of those rules and regulations is the absolute most important thing; I cannot emphasize that enough.

Any concluding thoughts?

You know, I think that the compliance arena is an ever-changing environment. Education — my own personal education, as well as the education of our providers — is absolutely critical. Tools that we have, such as MDaudit and MDaudit Hospital, help us communicate specific, filtered compliance education back to those providers.

I think that that’s the most important thing that we be able to do, is to provide feedback that is meaningful to that particular clinical provider. Be it a healthcare commission, or be it a healthcare institution such as a hospital or home health agency or whatever, that they can understand it through their particular lens.

HIStalk Interviews John Santmann

February 22, 2010 Interviews 3 Comments

John Santmann, MD, FACEP is president of Wellsoft.


What are the key issues in the ED that you’re dealing with for your clients?

There’s lots of them. Operational efficiency is probably the single most important thing that we do. When we go into an emergency department, we really take a look at the whole department from the top down — not only the ED, but outside interfaces with other departments, registration, and so on — and provide a lot of essentially consultative services to improve the overall efficiency of how the department functions.

Of course, a lot of that involves folding in the software as a tool, but it really goes well beyond that. That’s kind of a comprehensive thing, but it’s very, very important.

ED patient satisfaction is always a key metric for hospitals. What are their typical problems?

Probably one of the biggest issues with patient satisfaction is length of stay. That’s a metric that we pay a lot of attention to. We have specialized reports that analyze it and break it down into different steps and so on.

After hospitals implement Wellsoft, the length of stay typically falls dramatically, which not only improves patient satisfaction, but also increases efficiency and allows you to see more patients in a smaller space. In essence, it also helps with overcrowding.
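The kind of step-by-step length-of-stay breakdown he mentions can be sketched from visit timestamps (the step names and fields here are illustrative, not Wellsoft's actual schema):

```python
# Sketch of breaking ED length of stay into intervals between the
# milestone timestamps a visit record might carry. Names are hypothetical.
from datetime import datetime

STEPS = ["arrival", "triage", "provider_seen", "disposition", "departure"]

def los_breakdown(visit):
    """Minutes spent in each interval between consecutive milestones."""
    times = [datetime.fromisoformat(visit[s]) for s in STEPS]
    return {
        f"{a}->{b}": (t2 - t1).total_seconds() / 60
        for (a, t1), (b, t2) in zip(zip(STEPS, times), zip(STEPS[1:], times[1:]))
    }

visit = {
    "arrival":       "2010-02-22T10:00",
    "triage":        "2010-02-22T10:12",
    "provider_seen": "2010-02-22T10:40",
    "disposition":   "2010-02-22T12:05",
    "departure":     "2010-02-22T12:30",
}
print(los_breakdown(visit))
```

Reporting each interval separately, rather than one total, is what lets a department see which step (door-to-triage, triage-to-provider, and so on) is driving the overall length of stay.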

What should CIOs know about emergency department workflows that may not be obvious?

Boy, where do I begin? Emergency department workflow is not typically something CIOs spend a lot of time with. I think if anything, I would say it would be great if there was a heightened sense of awareness of the issues with workflow in an emergency department.

Everybody works in their own world. CIOs have a lot of demands and pressures and they get pulled in all directions. What we find is the details of emergency department workflow are not something that they have time, really, to address. They have a lot of other demands placed on their time, so I think it’s beneficial that they simply recognize the importance of the ED workflow.

We’re seeing more of that now. ED workflow, and also workflow in other parts of the hospital, are becoming recognized as an important issue. In fact, a couple HIMSS ago, the whole word ‘workflow’ — we’d been using the word workflow for as long as I can remember, but it’s really become a lot more popular in the last few years. I’m glad to see that.

What are hospitals doing with ED patient kiosks?

Mostly experimenting. We have several hospitals that use patient kiosks for registration. Not really full registration; it's simply to get yourself into the Wellsoft system so everybody knows you're here. There are certain advantages to it. In general, in my opinion, if you have a good, quick-functioning registration process, it really obviates the need for a kiosk to get the patient into the system in the waiting room.

So, you don’t see it taking hold across the board?

I don’t think — at least in an emergency department waiting room — the idea of using a patient kiosk as a way of getting them into the system initially; I’ve got mixed feelings on. I think that there are certain, select situations where it can be beneficial, but I think for the majority of facilities that have the resources to do what we call a quick registration process, I think it’s really not helpful.

A quick registration is basically this: a patient walks into the department and is met by a human being who quickly takes down their name, chief complaint, and date of birth. Usually, that's about it. So it's three or four quick pieces of information, and that gets them into the system, which is really, for the most part, all you're going to capture with a kiosk anyway.

I believe that the personal touch of having a human being meet you, rather than a security guard pointing over at a computer in the corner of the waiting room, is a much more people-friendly way of doing it and probably safer too.

Some surveys claim that emergency departments are choosing to move toward the EDIS of their primary systems vendor. Are you finding that true, or is best-of-breed still alive and well in the ED?

I think best-of-breed is very much still alive and well. We’ve seen this pendulum swing back and forth several times now. I think the pendulum’s definitely swung more towards the pursuit of a single-vendor solution. I fully expect that pendulum … well, I think it’s already starting to swing back a little bit, and I think we’ll see it swinging the other way.

If you look at the KLAS ratings on EDIS vendors, for example, virtually all of the top-rated EDIS systems are niche systems, and all the lowest-rated systems are hospital-wide. You hear a lot of talk about the expense of integrating multiple systems, but what you don’t hear quite so much about are the inefficiencies of a single-vendor solution. In terms of productivity in the emergency department, I think the niche vendors have it, hands down.

When you look at the dollars associated with that, they’re huge. You’re talking about very large sums of money essentially being saved by using an efficient system like Wellsoft. It’s a relatively low cost for integration. I think when you take a real close look at the true cost benefit of a niche system like Wellsoft versus a hospital-wide system, that it’s a pretty clear decision.

There’s a huge marketing engine behind the larger vendors, so it’s a battle that’s been going on for the 15 years I’ve been involved with it. To me, it’s really nothing new; it’s more of the same all over again. I see the pendulum going back and forth.

What is the role of ED physician users in demonstrating meaningful use for hospitals? Have you figured that out?

I don’t think anybody’s figured that out. It’s still very ambiguous. The whole role of emergency medicine in meaningful use, in my opinion, is not clearly defined. They came out with the interim final rule, same as last year, and it really doesn’t specifically address anything about the emergency department.

For that matter, it doesn’t even really address what individual vendors are supposed to be able to do. It really addresses what the hospital is supposed to do, and then what the private physicians are supposed to do.

While we have some general idea of what the hospitals are required to do as an institution, I don’t really see much direction in terms of how they have to accomplish it, which is really a good thing, because at least so far, it seems like they’re free to accomplish those overall goals in whatever manner they see fit. They could accomplish those goals with multiple niche vendors, or with a single-vendor solution; they can go at it either way, provided that the vendors they choose are ultimately certified.

Of course, Wellsoft has been CCHIT certified since 2008, but even the certification process still needs to be clarified further. Right now, the only thing that’s clear is that CCHIT will be a certifying agency, but there may or may not be others approved over the next few years.

The typical cases cited in interoperability discussions always involve an unconscious ED patient. Is that a common real-life occurrence?

I’m not sure that the question is correct. An unconscious ED patient certainly is a common scenario that we’re asked to walk through as a demo process. There are certainly a large number of unconscious ED patients that show up, so I think it’s imperative that any EDIS be able to handle that situation.

But in terms of interoperability, the only thing that immediately comes to my mind is the idea of getting their previous meds and allergy list from an outside system. That’s one thing, but there’s a lot more to interoperability than that.

So to answer your question a little more directly, we have a multitude of mechanisms by which we can enter a patient into the system who has either no identifying information, as is typical with a trauma patient, or a limited amount of identifying information, perhaps from a driver’s license.

The idea is you get them into the system and then you start working on them and doing what you need to do. As for getting prior lists of meds and allergies, I can tell you as an ER physician that typically someone with the patient knows something, usually a spouse or a relative. But in case there’s not, and with a trauma patient you happen to have a driver’s license if nothing else, then you would of course want to pull the meds and allergies from any other system that might have them.

There are two ways to do that currently. One is to pull it up in Wellsoft, assuming you have a driver’s license with at least the name. You can pull up any previous information that’s been entered into Wellsoft, and all of our sites support that currently. Secondarily, you can pull medications from other systems.

For example, we have a site at CentraState where we went live last month with a medication reconciliation process by which the outside system sends Wellsoft a medication list; it is modified inside Wellsoft; and at the end of the visit, we send it back to the central repository. That’s a very rapid development in integration. We are spending a lot of time and energy in R&D right now expanding those kinds of functionalities.
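
The round trip described here can be sketched roughly like this. All of the function and field names are illustrative assumptions, not the actual Wellsoft or CentraState interface; the sketch just shows the three steps: import the outside list, reconcile it during the visit, and send the current list back.

```python
# Hypothetical sketch of a medication reconciliation round trip:
# 1. an outside system sends a medication list into the ED system,
# 2. the list is modified during the ED visit, and
# 3. the updated list goes back to the central repository at discharge.

def receive_med_list(inbound: list[dict]) -> list[dict]:
    """Import the outside system's medication list into the ED record."""
    return [dict(med, source="outside_system", active=True) for med in inbound]

def reconcile(med_list: list[dict], discontinued: set[str],
              added: list[dict]) -> list[dict]:
    """Apply ED changes: discontinue some meds, add new ones."""
    for med in med_list:
        if med["name"] in discontinued:
            med["active"] = False
    return med_list + [dict(m, source="ed", active=True) for m in added]

def send_back(med_list: list[dict]) -> list[dict]:
    """Return only the currently active list for the central repository."""
    return [m for m in med_list if m["active"]]

meds = receive_med_list([{"name": "lisinopril"}, {"name": "warfarin"}])
meds = reconcile(meds, discontinued={"warfarin"}, added=[{"name": "heparin"}])
final = send_back(meds)
```

In a real deployment the lists would travel as standardized messages between systems; the value of the round trip is that the repository ends up with the post-visit list rather than the stale pre-visit one.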

Is that clear as mud?

Clear as mud, yes. I may have misunderstood what you said on this, but you said if the patient has been seen in Wellsoft. Is that any Wellsoft system or just the one in that particular facility?

Good question. That would be any Wellsoft system that is part of that hospital enterprise if it’s a hospital system, but it certainly wouldn’t be a Wellsoft system on the other side of the country.

Got it. You’re involved with several HIT standards groups. What developments are you seeing there and what remains to be done?

As you may or may not know, we have one of our VPs actually on the CCHIT EDIS group that helps define the CCHIT standards for EDIS certification. I don’t know how much you want me to go into it, but currently things are a little … let’s stay positive here. It appears that the whole certification process is at a stage of sort of reorganization and I don’t think it’s especially clear right now exactly how the details are going to fall out.

It’s a very challenging task, both technically and politically. CCHIT is, in my opinion, scrambling to reconfigure their certification process to conform with the interim final regs released at the end of last month. There are a lot of decisions that haven’t been made yet, let’s just put it that way.

I don’t think anybody really has a good handle on how all that’s going to fall out. But one thing you can say with a fair degree of certainty is that CCHIT will remain as, if not “the” certification agency, certainly one of the main certification agencies.

What’s it like going from being a practicing physician to being a CEO of a software company?

It’s different. I shifted gears for a couple reasons. One is I love technology and I love gadgets and software. The main reason is I love to take something and make it work better.

Practicing in emergency rooms, I just saw a lot of opportunity to help improve patient care and improve the practice of medicine. That was a very strong motivator for me to want to be involved with software development that takes a good functioning system, in this case the emergency department, and makes it a really great one.

The challenges are certainly different. When you’re running a business, you’re never off duty; you’re always on. When you’re working in an emergency department, at least at the end of the shift, when you finish all your paperwork and go home, you have a reasonable degree of certainty that you’re off duty. I’m a bit of a workaholic. I work a lot of weekends and am very engaged in the process of product development here at the company.

It’s a lot of fun, a lot of great people here, a lot of challenging work. I think we’re blessed with the opportunity to make a really significant impact in the emergency departments that we work in. Both jobs are very exciting. Working as an ER physician is an exciting job. You get a real hands-on, minute-to-minute feeling like you’re making a difference. Working in a company like Wellsoft, I get a lot of the same feelings; they’re just bigger. I have an opportunity to impact more places more of the time.

What’s the average ED going to look like in ten years?

That’s a great question. I’ve been answering it for 15 years now. I think in ten years, every ED is going to have some kind of an EDIS. Or, I would say, 80-90% of EDs will have some kind of an EDIS. I think care will be delivered more efficiently, systems will be better connected, and overall, patients will get better care. I’m very optimistic about the future of healthcare.

Notwithstanding all the political wrangling that’s going on now, I think that the actual administration of care in the emergency department will continue to improve. The emergency department is a place that, as a rule, is filled with really hard-working, dedicated people who honestly want to see the best outcome for the patients.

I think if they’re given the tools that enable them to do that effectively, that they will recognize and grab onto those tools and do the job as best they can. They don’t always have the tools. There are often obstacles and problems and politics that get in the way, but if the political obstacles can be improved at both a local level and a national level, then I think the future looks very bright for emergency medicine.

HIStalk Interviews Cameron Powell

February 20, 2010 Interviews 1 Comment

William Cameron Powell, MD is president, chief medical officer, and co-founder of AirStrip Technologies of San Antonio, TX.


Tell me about yourself and about the company.

My name is Cameron Powell. I’m actually an OB/GYN physician by training. I don’t practice any more; I haven’t for about two years. I currently serve as the president and chief medical officer of AirStrip Technologies.

We are a medical software development company completely focused on remote patient monitoring and telehealth, with an emphasis on mobility. Our niche is capabilities and technologies that deliver real-time and historical waveform information to physicians and nurses anytime, anywhere, on mobile devices like the iPhone, BlackBerry, and Google Android.

The company was actually founded about six years ago, but we think of our real start as this past June, when Apple chose to feature AirStrip in the keynote address at its Worldwide Developers Conference. Things really changed for us at that time.

Six years ago, we had a focus on trying to develop a technology that would clearly work to mitigate risk and improve patient safety and improve communication between physicians and nurses when physicians are temporarily away from the caregiver environment. Given my background in obstetrics, we started with the AirStrip OB product.

Tell me about the components of AirStrip Observer.

The AirStrip Observer suite is really built off of a platform referred to as AirStrip RPM or Remote Patient Monitoring. AirStrip OB was the first product that was built off of that platform. That platform is basically a completely reusable and scalable software platform that we spent many, many years developing, which allows us to very rapidly roll out additional mobility solutions.

AirStrip OB is actually the first FDA-cleared solution built off of the RPM platform, but we have additional solutions that have already been submitted to the FDA and are awaiting clearance: the AirStrip Critical Care and AirStrip Cardiology products.

We have several other products that are currently in our pipeline that are being built off of that RPM or Remote Patient Monitoring platform that we developed.

How hard is it to get FDA approval?

It’s challenging. We certainly don’t mind that challenge from a competitive standpoint.

The thing that we like about FDA clearance is it really forces us to maintain a level of quality and control around our software designs that ensures that our hospitals and our physicians, as our end users, benefit from just a great solution that has a great user interface, is HIPAA compliant, and is very secure. But to get FDA clearance, you do have to know what you’re doing. You have to have the right people involved. So it’s challenging, but I will say the FDA’s been a very good group to work with.

Can you tell me more about the actual technology and what kind of folks you have to maintain and develop on it?

We do all of our development in-house. My senior partner, Trey Moore, is actually our CTO, and he is the lead architect behind the entire platform. He is supported by a team of in-house software developers that have really built out the rest of our platform and help us to support all the different mobile devices and the interfaces to various HIS vendors or CIS vendors that are required to operate the solution.

Our application works by interfacing to various vendors or device manufacturers. There are several different architectural formats, but essentially, there’s a system in the hospital that’s pulling that data in real time and then securely exposing it through the Internet to our mobile client. I think our real uniqueness is in how we handle the presentation and the user experience behind the waveform data: the ability to see and interact dynamically with virtual, real-time waveforms, to scroll back over time, and to pinch and zoom and analyze those waveforms.
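
The server-side half of that pattern can be sketched in miniature. The names below are hypothetical and this is not AirStrip's actual API; the sketch only illustrates the idea that a hospital-side process buffers timestamped waveform samples and can serve any requested time window, which is what makes scroll-back and pinch-to-zoom possible on the client.

```python
from bisect import bisect_left, bisect_right

class WaveformBuffer:
    """Hypothetical hospital-side buffer of a real-time waveform stream."""

    def __init__(self) -> None:
        self.timestamps: list[float] = []  # seconds, monotonically increasing
        self.samples: list[float] = []

    def append(self, t: float, value: float) -> None:
        """Ingest one sample as it arrives from the monitoring device."""
        self.timestamps.append(t)
        self.samples.append(value)

    def window(self, start: float, end: float) -> list[float]:
        """Return samples in [start, end]: one scrollable/zoomable view."""
        lo = bisect_left(self.timestamps, start)
        hi = bisect_right(self.timestamps, end)
        return self.samples[lo:hi]

buf = WaveformBuffer()
for i in range(10):
    buf.append(i * 0.25, float(i))   # a 4 Hz sample stream
recent = buf.window(1.0, 2.0)        # the client "pinches" to this window
```

Because the window bounds are arbitrary, the same buffer serves a live view (a window ending now) and a historical scroll-back; the client never stores the data, it only requests views.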

One thing that’s important to realize in healthcare, especially with the problems that we’re trying to solve, is that so many decisions are made based off of visual interpretation of data, especially with obstetrics. For example, a vast majority of adverse outcomes in labor and delivery are directly related to communication errors involving the fetal strip, or the fetal heart tracing. So the ability to close that communication gap and deliver that real-time historic data to the physician anytime, anywhere, we think will have a significant impact on patient safety.

The reality is we live in a world where there’s a relatively decreasing number of physicians and an increasing number of patients that need to be monitored. Anything we can do from a technological standpoint to allow physicians to be able to adequately monitor these patients makes a huge difference. We’re in nearly 150 hospitals right now across the U.S. with AirStrip OB and are beginning our international efforts with several large partners.

It’s great, in the field of obstetrics, to go to trade shows and to hospitals, because the physicians and the risk managers and the executives all know about AirStrip OB and they’re asking about it. That’s been very rewarding for us. If you look on our Web site, I think one other thing that’s really rewarding is just the enormous volume of unsolicited emails and stories we get from doctors telling us how AirStrip OB is making a significant difference in their lives, and especially in the lives of the patients they care for.

We’re seeing large hospital systems actually create their own videos about AirStrip OB and promote them on YouTube and through other social networking efforts in their markets, with doctors talking to patients about how great the technology is. It’s also quite rewarding for us to see that take off in sort of a viral way.

Do you see the boundary of your product being those applications that involve waveform data, or do you see yourself advancing beyond that at some point?

Oh no, not at all. Currently, if you look at the AirStrip OB product even just at its base technology, when a physician logs on (first of all, no data’s ever stored on the device; it’s just available during the view session), they’re able to see the labor and delivery census: the patient name, the cervical exam status, the most recent blood pressures, the admitting diagnosis, and vital signs. They can then drill in further and review all the nursing notes, look at medications, look at trended data, and then all the waveform data.

Currently, we present a voluminous but focused amount of data to the obstetrician. When you get into the Critical Care and Cardiology applications, we also provide a whole host of patient monitoring data beyond the waveforms.

Now with the platform, the platform also allows us to pretty rapidly extend this technology to encompass imaging solutions, solutions outside of the hospital. For example, there’s a lot of interest right now in AirStrip with regards to what we can deliver on the ambulatory cardiology front, and in the home health monitoring front. 

We built our solution to truly be data independent. We don’t really care what the data is. As long as we have access to the data through our partners, vendors, or device manufacturers, we’re able to effectively AirStrip that data on the back end and expose it to the mobile client in a way that really hasn’t been done before.

Do you think it will be competitively important to be the one-size-fits-all single solution for doctors, or do you think there can be several niche applications that doctors run separately?

I think there’ll be niche applications, but from the broader remote patient monitoring standpoint, I think a single solution that applies to everybody is very likely. Our idea is that our client changes dynamically depending on which physician is logging onto the system. We eventually envision the obstetrician logging on and being presented with what they have access to in labor and delivery, whereas the intensivist or the neurosurgeon logs on and is presented with the information they want to see in the ICU.

In the L&D market where you started, there probably wasn’t much competition when you started it. Do you think once you get into the cardiology and critical care modules that you’ll be competing against a broader array of competitors and also have to figure out how to transition the company into a whole different target market?

Certainly we’re not naïve enough to think that we’re not going to have legitimate competition, but the reality is we’re really focused on being first to market and continuing to advance our first-mover advantage, from both a software standpoint and a UI standpoint, trying to stay several years ahead of the curve. I think we’ve done a good job at that. Our focus is to stay out in front, continually iterate, continually innovate, and listen to our customers and our physicians.

One thing that’s nice about our development team and our development platform is that we can very rapidly iterate and make changes and dynamically adjust to what the market’s demanding, rather than going through traditional software development life cycles that require extensive rewrites. We have some proprietary technology that allows us to do that and adapt.

You’ve also got an advantage in that you have a big footprint in a small segment of healthcare, which I assume then can fund the development and also provide the experience to move outward as opposed to trying to develop the whole package and then sell it to the world.

Yes, sir. Our focus was to deliver a solution to the market that works really well, that is fast, that is secure, and that the doctor is able to use with relative ease. For example, even just delivering a solution that can be installed quickly: a lot of our installations take a day or two at the most, and most of them are done remotely, so it’s not like installing an entire HIS system in a hospital.

We knew if we could deliver something like that to the market from end to end, from the requirements of the hospital IT staff to the CIO, to how hard is it for a doctor to get logged on, to managing all that — if we could deliver all that and do a really good job of it with AirStrip OB, that we would be 80% done with every other solution that we ever wanted to create. Reusing and repurposing what we developed, that’s how it was architected from the very beginning.

Was the plan up front to do more than just L&D?

Yes. We had some very good senior executive guidance that forced us to put the blinders on and really focus on delivering AirStrip OB to the market first, and doing a really good job.

I think where some people fail is that they’re tempted to go down every rabbit trail that’s presented to them. It’s really hard to maintain focus to get that last 5-10% done and to really do it right. We had some really good guidance and help along the way that coached us, philosophically, in how to do this. Probably one of the best decisions we ever made was to make sure we did AirStrip OB, did it right, and made it available to anybody who wanted it.

I have seen the throughput from our company as we roll out these additional applications, and it’s just been incredible to watch. I’m so proud of my team and my developers and everybody I get to work with, and to see them have the success they’re having now. Really, they’re standing on the shoulders of a giant, Trey Moore, who knew from the very beginning, learning from mistakes he had seen other companies make in his previous career, that if this was architected in the right way and done correctly, we would be able to do this. I’m now seeing that come to fruition, and it’s really humbling, actually, to work with such a great team.

How hard is the integration piece for hospitals to accomplish?

From the OB standpoint, fairly easy, because once we go to the hospital, we’ve already had that integration done with the perinatal vendor.

We have good relationships with almost all the perinatal vendors in the U.S. So if a hospital has any perinatal system — let’s just say it’s the Hill-Rom NaviCare WatchChild system — we can go and tell the hospital, “You know what? We have an interface to NaviCare WatchChild, and it will handle it all for you. We’ll install the server, or we’ll virtualize it, or we’ll host part of it. The vendor will remotely install their piece, and we will remotely install our piece, and very little is required from your IT staff.” That’s one thing that the hospitals, I think, really, really like.

You definitely run into different environments, but from the OB standpoint, it’s pretty straightforward. For the Critical Care/Cardiology solutions, of course we’re not installed anywhere yet, but as those roll out of the FDA we have our beta site that’s already lined up and we will try and replicate the success that we’ve had with AirStrip OB.

Certainly, I think we’ll learn along the way, but we have some really strong partnerships with some great vendors and device manufacturers. They’ve been really great to work with. We think that makes it a lot easier on the hospitals if you can go in and present to them a solution that works, and it’s a breath of fresh air for them to install an AirStrip system.

How is the product licensed and hosted?

Currently, it’s a Software-as-a-Service model: a hybrid software and service model. The application server resides on site at the hospital, though there have been some very large IDNs that host the Web server component at a central location, where it serves all the hospitals in that IDN around the country. We also virtualize, so the hospitals are installed in a virtual environment.

As far as a fully hosted solution, that is definitely something we’re looking to move towards; with some of our partners, that’s how it’s being designed from the beginning. But it is a subscription model. A hospital will pay a certain amount per physician per month or per bed per month, depending on the product, the size of the hospital, the number of physicians, and whether or not they belong to a GPO. There are a lot of different variables.

I think you mentioned earlier that you have applications for other caregivers, like nurses.

We currently have a lot of interest from nurses using AirStrip OB, but using it inside the hospital. For example, a charge nurse who’s responsible for all of her nurses may be in the middle of a C-section, or in a meeting, and she wants to keep track of what’s happening in labor and delivery. She can use AirStrip OB even though she’s actually in the hospital.

But yes, we see broader remote patient monitoring-based solutions being used by a variety of healthcare givers in a variety of settings. Right now, the focus is really on physicians and nurses, but I could clearly see applications beyond that scope as we expand. Some of those markets and needs are already making themselves known to us, so we’re certainly interested in providing the technology wherever it’s useful.

I saw on the Web page that the application supports a ton of mobile devices. Which ones are the most popular?

Well, the most popular right now is the iPhone, but we also see markets where there’s a lot of strong demand from BlackBerry users, and some strong demand from Windows Mobile users. Our goal is not to be focused on any particular device, but to remain device agnostic. The reality is that market demands change, and at this point in time, a large majority of our users are iPhone users.

Mobile applications, in general, improve the quality of life for providers. What’s the impact been for your users, and what opportunities do you see there in the future?

Honestly, because of our regulatory requirements and the nature of our application, we’re not really focused so much on the quality of life of a physician. The reality is that AirStrip becomes most useful when the demands of a physician’s day necessitate their periodic absence from the bedside. We’re never trying to keep a physician from the bedside.

However, the reality is that there are many times when a physician has to be away from the bedside. They may be at another hospital, they may be at the surgery center, they may be on call. In those instances, currently they’re limited to listening to an interpretation of what is going on over the phone. If they’re away from the hospital, we just want to be able to provide them with this data virtually in real time so they can better assess the situation.

I think, from a quality of life standpoint, it mainly helps them have peace of mind knowing that they’re looking at the same data the nurse is looking at; therefore, until they can get back to the hospital, they can more clearly understand the situation and hopefully provide meaningful advice in the interim.

Now, do doctors tell us this does dramatically improve their overall quality of life having this access to this information? Yes, absolutely.

Where do you see the company going, strategically, over the next few years?

We really want to set the standard of care, both domestically and internationally, for remote surveillance from a mobility standpoint — for remote surveillance in healthcare. We currently are relatively agnostic to the market. We want to raise the bar as far as remote surveillance goes. We see ourselves helping to establish that standard of care.

Do you see that happening under the current business form, or do you see either being acquired or acquiring someone else?

I don’t really want to speculate on those types of events. Currently, we’re in a high-growth mode; really growing the company to make sure that we deliver the best technology that we can possibly deliver to both our doctors, who are the end users; and the patients, who quite frankly, deserve the technology. In that effort towards growth, certainly there are a lot of different things that could happen to a company like ours. We remain focused on growing the company, but also keep an open mind as to what might come.

HIStalk Interviews Kipp Lassetter and Robert Connely

February 19, 2010 Interviews 3 Comments

James K. Lassetter, MD is chairman and CEO and Robert Connely is senior vice president of Medicity of Salt Lake City, UT.


What caused the dissolution of CalRHIO, and what are the prospects for the new group?

(KL) It’s really pretty straightforward. A lot has been made of it, but the reality is that CalRHIO was formed at a time when there wasn’t any federal funding. The mission that CalRHIO was working on was how to create a fully connected California. A big part of that was how they could build a sustainability model.

They woke up one morning in the spring to the announcement that the states were going to get funding to do HIE activities. CalRHIO’s business model was such that it would have had to be the SDE, State Designated Entity. They competed in that process. You can imagine that there was a lot of politics involved.

The state decided that they were going to form a new entity. At that point, it was a simple decision that CalRHIO would fold itself into that new entity. That’s the reality of what happened.

Technically, there was no purpose in keeping a staff on for an HIE model that wasn’t going to be deployed. The board remains intact and continues to meet. In fact, CalRHIO recently won the second largest Social Security Administration grant and has retained consultants to roll that out. We’re the participating vendor in that grant.

The whole concept that CalRHIO folded and went away makes for good blog content — not referring to your blog, of course. It really doesn’t reflect reality at all.

What’s Medicity’s role going forward with the new group?

(KL) The new group is moving forward. It’s essentially in the formation phase. We expect some announcements out of the state fairly soon. We anticipate that they will go to vendor selection. We hope to have a very good shot at that relationship.

It must be frustrating to have to win the business all over again.

(KL) When the funding came out for the state, we knew it was going to be good news/bad news. You go to bed playing football and you wake up and realize it’s now baseball and you look a little funny out on the field in your football uniform. [laughs]

Because there was no funding out there, if you wanted to be a sustainable HIE, you had to build a business model. CalRHIO had developed what I thought was a very innovative and substantive business model that was endorsed by CalPERS. Many of the largest health plans in California had looked at it and validated it. They were the ones being asked to fund the HIE because it was speculated that they would derive the biggest benefit from reduced utilization, thereby lowering medical cost.

We had RAND involved, Mercer, Watson Wyatt, and CalPERS as one of the largest purchasers of healthcare services in the country, many of the largest payers that have a national footprint. All were engaged and supportive of the model. However, when the federal government came out with the funding, then the game changed.

The business model as laid out by CalRHIO had a focus on bringing information as a starting point to the emergency room, where you have the highest acuity meeting the lowest amount of information. If you were to look at the one point in the healthcare ecosystem where information will have the biggest impact on both cost and quality of care, I think everyone would agree that the ED is ground zero. That was picked as the first point because the information could have the highest impact in lowering the cost of care. That was certainly something the health plans were very interested in as being a starting point.

With the funding, it was much more about pushing HIE for broad physician meaningful use adoption. There’s a significant shift in the first phase. The big problem right now is the states are going to go out and pick a vendor and deploy. Unless they’ve thought through a sustainability model, these are going to be a lot of bridges to nowhere.

Are business models still important or are people forgetting that fact?

(KL) Business models are absolutely still important because the federal government has given no indication that there is going to be a continuity of funding. What’s happened is that it’s taking a back seat. Before the funding became available, all the entities had to focus on how they would get started. Now they know how they can get started, but what many of them have not yet figured out is how they can remain functional.

Is it easy to get the money but then have to figure it out later?

(KL) Everyone is in different phases. There are people in the planning phase and people in an operational phase. The ones we work with emphasize building a sustainability model. I can't speak for the ones we haven't seen, but I know that some are very focused on that.

If you look at the big picture of where the federal government is spending money on interoperability and the Nationwide Health Information Network, how would you describe where the money is going and what it means to the industry and your business?

(KL) There’s a broad picture. If you look at the first phase of meaningful use, there’s a real emphasis on getting physicians prepared to exchange data. Simultaneously, they’re trying to get the infrastructure in place so they can exchange data.

While the meaningful use requirements for information exchange come later in the process, the government knows that these infrastructures can’t be built overnight. They need to begin in earnest right now to build infrastructure capable of exchanging that data.

There’s a lot of innovation going on — and I think we’re in the middle of a lot of it — that should have an impact on changing that paradigm. The money specifically is being distributed to the states for both planning and operation of all different flavors of HIE. When you talk to the states, it’s a lot like the fable of the six blind men and the elephant — depending on whether they felt the trunk or the tail or the side, it was a tree, a rope, or a wall. There’s still a lack of consensus around the concept of what an HIE really is. A lot of that came through in the recent KLAS report.

How would you characterize the difference between an HIE and a RHIO in contemporary terms?

(KL) There’s a verb HIE, which is the act of exchanging clinical information. There’s a noun HIE where you are an entity that is trying to create the functionality or the action of exchanging information. A lot of people do health information exchange without an HIE or RHIO.

It really is the difference between talking about the noun, an entity that’s called a health information exchange, or the actual action of health information exchange where you move clinical data between one provider and another.

Obviously, you don’t necessarily need a third party. Many of our clients are hospitals doing health information exchange with their affiliated physicians. Many of these are rural areas, where there are not competing hospitals. De facto, it becomes a full community exchange without the need for a third party body to mediate that.

Where does Epic’s private exchange among its users fit in?

(KL) It is a special class or consideration. If two entities share the same technology platform, then it would seem fairly straightforward to exchange data. Unless those facilities are geographically close to each other, I don’t see the real value in it, but if two facilities are across the street from each other and both are full Epic shops, then clearly it makes sense for them to exchange data.

(RC) One of our bigger efforts across the country is to integrate HIEs and sub-HIEs, small HIEs, and it’s amazing how many of them we are connecting to. Epic’s going to be another life form that is out there. I think we’ll be interconnecting them.

It will kind of resemble the Internet in the end — networks are not nice and pure and harmonious, but they do move data back and forth. I think you can only go so far. Once they reach the edge, even they are getting involved in standards-based exchange. So, they’re part of it. It’s not a model that scales.

Are you seeing new market entrants with all this money flowing in?

(KL) We have a joke — the HIE costumes are flying off the Halloween store shelves. Before there was federal funding, there were few true HIE vendors out there. Now, anyone who’s ever moved a lab result electronically is an HIE vendor. Or for that matter, anyone who has exposed eligibility or done claims processing is now an HIE vendor.

Tell me about the patent for the Medicity Novo Grid.

(RC) It was a patent for moving data in a different way, in a distributed fashion: how we create these linked objects that can move data from Point A to Point B while keeping everything synchronized. It’s actually the core technology behind a new platform that we’re introducing at HIMSS called iNexx.

It gave us an architectural underpinning that we can build a massive amount of business on without having to worry about patent trolls coming along and taking it away. We have that core architecture that we’re building the next-generation product set on: how we can share information and build systems on top of that open platform.

Where does iNexx fit?

(KL) We believe we have the largest HIE platform deployed in the US and I think the KLAS report substantiates that. We decided to open it up to third-party development. We think this is a really bold move because it allows many different applications to share the same connectivity into a physician’s office, whether it’s demographics or connectivity to the practice management system or clinical CCD connectivity to the EMR system, and it’s bi-directional.

Typically those connections serve one application. Typically they’ve served our infrastructure, but by opening it up, it becomes plug-and-play. We’re taking a page out of the iPhone playbook and exposing an application store, but these applications are certified to run safely and securely on the platform and can be downloaded. There may be three or four e-prescribing solutions. There may be many different components working together.

One of the big victories we felt we won was when the meaningful use criteria came out and allowed for a modular approach as opposed to a monolithic application. Different vendors can participate and create the modules that in totality allow the eligible provider to qualify for meaningful use, which we believe is a big paradigm shift. We’re calling it, for a lot of reasons, the first Health 4.0 platform.

(RC) Health 1.0 is about content, Health 2.0 about community, Health 3.0 about commerce, and then Health 4.0 about coherence. This is where we tie it back around the patent we got.

What that patent allows us to do is create a different type of record, a linked object, that we can distribute across the community and tie together. Everybody has a copy of it and can see what others do. The patient can even be involved in this exchange. In fact, we’re about to undertake some projects in California that bring patients into this shared record that the care team can maintain and conduct business across. It’s a private social network, if you will.

That’s the new thing where we’re tying everything together, as Kipp described earlier: bringing in data from the PM systems and the EMRs, from the hospitals, from reference labs … all this data coming together and then shared in a coherent fashion among the various care team members, with the patient at the center of this universe. Because this whole platform is structured that way, it’s not a relational database. It’s really designed for a distributed object community.

It’s a new approach. The whole concept behind the platform is the way it manages information exchange, making that a natural part of the data structure, not an add-on interface. It’s built into the core. We believe we can bring it to market at a very low price, which makes it a disruptive innovation.

(KL) The iNexx platform creates a virtual, Kaiser-type infrastructure. Historically, to build that full bricks-and-mortar IDN, everything needed to be under the same ownership: the practices, the insurance companies, the hospitals, and so on. This model allows organizations and affiliated physicians that are not part of the same entity, but that work together under related health plans, to collaborate on a platform and, in effect, deliver a high level of collaborative, coordinated care, much like what happens in the top IDNs across America.

We think the technology has gotten to the point where you don’t necessarily require single equity ownership across an entire IDN. Unrelated entities can collaborate around a single patient and achieve similar results.

From a physician’s perspective, one of the very powerful things about the iNexx platform is that when I bring a patient into view, I have a lot of different applications, potentially exposed by many different vendors, that I can use to perform actions such as e-prescribing. Running in real time, next to that view of the patient in focus, is a real-time view of activities by that patient’s care team, all on the grid. Whatever lab results or pending actions exist for that patient from other practitioners and specialists, I have that view. Speaking as a physician who used to practice, that is some of the most useful and helpful information you could put in front of a doctor.

We’re in the middle of healthcare reform and real pressure to lower the cost of care. The average cost of care per capita in the US right now is around $8,000. The US looks outside the country for models to emulate, but when you look inside the US, there’s a massive disparity in spending and quality indicators among the different regions. If you were to look at California, most people would intuitively say that California’s cost per capita is higher than the national average. The reality is that it’s much lower.

One of the contributing factors is what some people call the Kaiser Effect: that highly coordinated, collaborative-care IDN. To compete against Kaiser, California developed a delivery model mostly unique to the state, where IPAs do risk contracting and function at a higher level of care coordination, contributing to the net effect of a much lower cost of healthcare per capita than the national average.

If you look at the state with the lowest per capita cost and the highest quality indicators, it’s Utah. Beyond other contributing social factors, that’s also the effect of the IHC network and its dominance in Utah. We believe this infrastructure can create virtual IDNs that allow that same collaboration and coordination of care. That’s a high-level macroeconomic look at what we’re trying to achieve within the company.

What was your reaction to the KLAS HIE report?

(KL) I think it’s a reasonable start. I don’t think any one vendor is completely satisfied with how they were presented.

Obviously we feel we came out of the KLAS report very well. We’re very happy with how we came out. We obviously don’t feel the full scope of what we’re doing or how we’re doing it was represented in that report, but I don’t think that’s unique to us.

The market is moving so quickly that they probably ought to generate one of those reports every other month. With all the federal funding coming in, there’s so much changing. By the time that report was printed, in our opinion, it was already outdated.

What will interoperability look like in five years?

(RC) I personally think it will change significantly. The technologies that we, and I’m sure others, are working on will negate some of the higher-cost elements of putting together HIEs. EMRs will evolve from simply recording everything a physician does to getting more involved in the collaborative areas.

I think HIEs will evolve into a Google-like search engine. I think the technology, along with the focus on making nursing and staffing more coordinated, is going to have a big impact. It’s not really coming from the government; I think it’s the private sector that’s putting the pieces together. The money’s just going to chum up the water for a while.

I believe we’re going to evolve to another state and it’s going to be much simpler, much more distributed, and much more organized. I think the future will be much more Internet-like than the large, monolithic architecture they’re trying to assemble today.

HIStalk Interviews Tom Yackel

February 18, 2010 Interviews 9 Comments

Thomas R. Yackel, MD, MPH, MS is chief health information officer at Oregon Health & Science University in Portland, OR.


Tell me about your background and what you do.

I’m a general internist by training. I continue to practice outpatient and inpatient medicine about 30% of my time, but 70% is in a relatively new position here at OHSU called chief health information officer.

I started out in informatics. I actually came out to Oregon to do a fellowship with Bill Hersh in medical informatics, one of the National Library of Medicine fellowships. I did that for two years, got a master’s degree, and then was lucky enough to stay on at OHSU.

At the time, we really weren’t doing too much in health IT. We had Siemens Lifetime Clinical Record, we had a scanning system, and so we had a pretty good repository, but we weren’t doing any CPOE or anything really challenging or interactive.

After I was here for about two years, the medical group got interested in EMRs when they were building a new building and realized that record rooms would cost too much per square foot. That really kicked off our adventure into enterprise electronic health records.

I guess six years or so later, here we are with an almost fully deployed enterprise electronic health record and the full suite of Epic applications, including e-prescribing and MyChart. We’re rolling this out to affiliates, along with all the billing and scheduling and good stuff that goes with it. Reporting, too. It’s just been kind of a neat and fun ride.

How important do you think it is for your credibility that you continue to practice medicine?

I think it’s important for a lot of reasons. Credibility is one thing, but just making contacts with people outside the context of the EHR is super helpful to me. Having some of the people I work with actually come to me as their patients is kind of an honor and something neat.

I don’t know how other people do this if they don’t actually use the system that they work with, but I would have to spend a lot of time learning a lot of details that you just kind of learn as a user. So I find it immensely helpful and fun to continue practicing.

What are the most important lessons you learned from the Epic rollout?

You pretty much have to do everything right. Health IT is not fault-tolerant when it comes to big projects. You really have to get all the ducks in a row in order to be successful. There are some exceptions, places where you do better at some things than others, but I think you really have to cover all your bases to keep the thing moving forward. It’s an uphill battle to do it.

Truthfully, a lot of it is attention to detail. Details are critically important in this. It’s about keeping an eye on those details, making sure all the ducks line up, and trying to acquire the best talent that you can: people who appreciate those details and have a passion for doing informatics-type work. It also means pairing up with a vendor that has smart people and shares that same attention to detail and the understanding that you have to get everything right in order to be successful.

Having leadership and ownership buy-in to everything that you do is crucial. We don’t implement health IT for health IT’s sake; we implement it for the health system’s sake. Getting executives behind that matters, and they need to understand a lot of the details too, because sometimes you look under the hood in health IT and it’s a little bit frightening what you see under there. They’ve got to be comfortable with that and be there to back you up when things get tough.

How is your project structured, in terms of ownership, and how did IT fit in the mix?

In terms of the rollout, IT was the project. I don’t want to say ‘owner’, but maybe we’ll say ‘steward’. We organized everything around IT. Once the project was done, we delivered things back to operations. In places where we didn’t have an operational owner, we created one.

The interesting part of this whole project was that initial kickoff. It was our medical group that actually wanted to do this and put up the money to do it. It was the physicians actually paying more than half of the cost. That was instant ownership for them.

Then we organized around IT for the project and for getting it rolled out. When we were done, we really wanted to turn the keys back over to the owners and say, “This is your tool. Now it’s yours to use, and IT is here to help you.”

As chief health information officer, what I oversee now is a new department called the Department of Clinical Informatics. That group was created because as we sat around the table figuring out OK, now that we’re done, what goes where, we realized there was no owner for all the workflows that we had created in the EHR. There was no group that fronted the customer to IT, or owned the institutional organizational issues that basically came to light as a result of the EHR. So, we created a new department for that.

We also created the Department of Learning and Change Management, because we didn’t have an operational institutional owner for projects of this magnitude or for the ongoing training and change management they require. That was kind of neat, because all of that bleeds out beyond just the EHR and you realize, “Wow, having an informatics department is helpful not just for the EHR, but for anything you want to accomplish with electronic systems, or when you need to organize people together around an electronic system to make something happen.”

Likewise, in the learning and change management department, there are operational changes that may be somewhat enabled by IT. But really, now you’re teaching people how to do their jobs differently: not just how to use the new tool, but how to do what they’re doing differently, and then how to use the tool to do that. To achieve quality objectives, for example. That’s been kind of neat to watch.

The Department of Clinical Informatics, does that cover just the practice side or the whole facility? Also, what’s the structure and composition of that group?

It’s the whole of what we call “OHSU Healthcare”: both ambulatory and inpatient. It’s multidisciplinary. My title, chief health information officer, was chosen deliberately; we didn’t want to make it chief medical information officer. We didn’t want to create separate silos of medical informatics, nursing informatics, etc. We put it all under one umbrella.

I have two roles. One is this operational person who has this department that I oversee; but then also, I chair one of the four subcommittees of our professional board, our governing structure. We’ve got four subcommittees: safety, quality, operations, and the new one, informatics. People really recognized how important informatics was, and that it really stood up against all those other things that we needed to work on.

In the informatics department, we’ve got a director. Underneath that, we’ve got three main groups. One is our clinical champions: the physicians, nurses, pharmacists, and others who work on the project. Our entire HIM department, including coding, was brought in as well.

Then we’ve got a group that came from IT. The systems experts — people who were involved with workflow, design, clinical content creation, and reporting — all came in as well. We created this team to make sure we had people covering the entire lifecycle: project changes and implementation, delivering things to users, and reviewing the quality of the record’s contents, which is obviously an important task for HIM.

That’s probably a bigger scope than the average CMIO, or even IT department, to have all of HIM plus the functional IT people. Was that difficult to sell, clinically?

I should point out that we also have about a dozen people who came from IT, yet we’ve still got our whole IT department, which is separate from us. We’ve divided up the responsibilities: we’re more content- and workflow-oriented, and we front the customer. So, we’re the ones who run all the subcommittees of the professional informatics board to figure out, OK, what are the requirements that people need? How do we prioritize projects?

Then the idea is that we hand off to IT well-spec’d out details of, “Here’s what we need the system to do”, or “here’s what we need built”, or “here’s what we need you to work on with the vendor so they can do their IT role and not get too bogged down in trying to figure out what does the customer really mean when they say, we want this.”

But I think you’re right in terms of the HIM part of it and really seeing HIM as now part of informatics. I don’t know that everybody’s doing that, but we thought it was crucial. I think HIM is the glue that holds your record together. They’re the ones charged with doing quality reviews of the record.

People complain all the time, “I don’t like the record. I don’t like the notes. People cut and paste too much.” HIM oversees that. They have a huge role in scanning, and scanning’s another piece of glue that keeps an electronic system together because we’re still in a paper world and we interface a lot with paper systems.

Then coding, too — we create clinical content in informatics. The doctors use it, and then the coders read every single thing that they create. It would be a missed opportunity if we didn’t have the coders able to talk to the people that created the content in the first place and say, “Hey, I’m noticing people are using this well” or “They’re not using it well.” Or, “We could do a better job in our templating to accomplish our documentation requirements.” That’s how we thought about it when we put it together.

When you started the project, I’m sure you had some metrics in mind to measure before and after. What kind of measurements have you done, and have you seen the results that you had hoped to?

Looking back, I always feel like we could have done a better job with metrics. I also recognize that a lot of the things you’d love to know when you do this were never measured beforehand. We looked at some of the standard things and, a lot of times, just the data that we already had.

I think one of the most easily available metrics that we had was our dictation. We were dictating pretty much 100% for all outpatient visits, all H&Ps on inpatient, all discharge summaries, all operative notes.

We watched each clinic as we went live and saw what happened to their transcription. It was so interesting. In primary care, it went from 100% to about 2% within a calendar month. In specialty care, it dropped down to more like 10% of what it was previously and then just kind of hung out there. It was an interesting marker of use of the system for me. To think, “Wow, people went from 100% dictation to 2% dictation. They must really be using the system.”

Although I learned that wasn’t really a statistic I should really share with my physician colleagues, because when they looked at that, they said, “Yeah, now we’re typing all our notes. We’re doing all this work. See that? We’re busting our chops to get this done. We don’t like that number.” So, I stopped showing them that. But to look at it as a measure of adoption, I thought it was pretty dramatic.

We saw that happen on inpatient, too. The same thing. We left transcription on. We didn’t take it away. Providers don’t suffer a penalty for using it, other than the workflow penalty of “now I’ve got to read this and authenticate it later.” But they were naturally drawn to the system in just about every case. The one area where dictation has fallen only about 50% has been procedure documentation. Surgeons are still dictating a fair number of their procedures, but everything else fell pretty quickly.

Obviously, the financial people watched all those metrics very carefully. I’m probably not as versed in them as maybe I should be, but my gestalt of that is they’re all extremely pleased and happy with what happened. Then a lot of the other things that folks look at, I think, are more subjective and we’re still trying to actually figure out how to measure.

One of my major projects this year has been developing what we’re calling the Informatics Dashboard. There was this great article a couple of years ago that looked at how you measure the success of an informatics project. They looked to the management information systems literature and came up with these six dimensions.

So we looked at them and said, “It would be great to have a couple of metrics that we could describe, relating to each of these dimensions of system success.” Things like system quality — how good is it? Does it turn on when you turn it on? How’s the up time? How’s the response time? Information quality — sure it turns on, but is there information in there that you want and is accessible and you can use? The third one is usability, and how much usage does the system actually get? If it’s a really great system, people use it a lot, right?

Then there’s metrics for organizational impact and individual impact. Organizational impact like quality and how are you impacting that? And then individual impact, which is the thing I think physicians get very concerned about with an EHR, and it’s also the hardest one to measure. How much time am I spending documenting? Is this taking away from teaching or research? What about all this time doing notes at night when I go home?

We’re still struggling a little bit to figure out how to measure that type of stuff and make it objective. When people complain about it, can we say, “Yeah, we really have a problem”? Or is it a problem of one instead of a problem of many, and how do we prioritize all those?

When you look at that, in context of the proposed Meaningful Use criteria, do you feel good about where you are?

Oh, yeah. I’m thrilled with where we are for Meaningful Use. In some ways, we got lucky. In some ways, it was vision. But for us, I think achieving Meaningful Use is going to be about crossing some Ts and dotting some Is. It’s very, very attainable for us, and so for that part, I’m really happy.

What are you doing with form factor stuff like mobile computing, or anything creative with nurses?

I don’t know how creative we are. We’ve got our devices on wheels. Pretty standard, like other folks have. We committed to having fixed devices in every patient care room, both inpatient and outpatient.

Being an academic center, we shied away from devices that could walk. Anything that wasn’t tethered. When you’ve got students and residents and people rotating through, our experience is if it’s not tied to the wall, it won’t be in the room for too much longer. It’s the same reason we have ophthalmoscopes tethered to the wall. Because after a year, if we handed out a bunch, they’d all be gone and nobody would know where they are; they wouldn’t be charged. So we focused a lot on fixed devices and trying to have them ergonomic so you can move around and stuff, but you couldn’t walk with them.

I think that’s been pretty successful. We’ve had some good luck with that, although there is always a lot of interest in the latest hand-held stuff. We had a lot of people who were interested in tablets when we started out. Of course that died because tablets weren’t really usable. Now it’s the iPhone and the iPad. I don’t know, maybe Apple will crack that nut a little bit better than some of the early PC tablet people did. We’ll have to see.

The industry is struggling a little bit to digest a couple of recent studies that tried to prove that the clinical information systems don’t improve outcomes or save money. Do you believe that those conclusions are accurate?

Yes, but I think we’re asking the wrong question. When we ask a question like, “Do EHRs work?” it’s kind of like asking, “Does surgery work?” What surgery? For what problem? In whose hands? With what training? All those details are the things that determine whether or not surgery works, you know?

It’s the same thing with EHRs. Do they work? Well, they can work if you do the right things. The other problem with it is we wrap everything up and call it the EHR, but it’s really not. It’s not the software; it’s a process that we’ve developed. It’s a way of taking care of patients that we’ve codified, to some extent, in an electronic system. But when we look at all the studies that show effectiveness — or lack of effectiveness — what I try to look at is, OK, but why? What was it that made this one place really effective at doing this and not another?

I think informatics, as a science, is still pretty much learning those things. What are the necessary and sufficient conditions for success? It’s obviously not just about having a piece of software that does a certain thing. Otherwise, everybody’s experience would be the same with it. I’m not sure we fully understand … I know we don’t fully understand all the things that make it successful or make it not successful; such that we could develop a checklist and say, “Okay, as long as you do these 50 things, or maybe it’s these 500 things, you’ll be 100% successful.” I don’t think we have that yet.

What would you say your goals are for the next five years?

Oh boy, five years? I seem so focused on today. I think for us, it’s to build out the house that we’re now ready to create. We’ve laid a great foundation here to do some really amazing things in medicine with the technology that we have. Over the next five years, I’m really excited to see how we will build that. What will it look like? Who will need to be involved? How will we fully engage caregivers and operational departments like quality and safety, so they really see this as their tool to use and operate and manipulate to achieve the ends they want to see? I think that’s the most exciting part.

The other is to continue to refine the system, such that my colleagues who are nose to the grindstone, incredibly busy, by and large see this as a positive thing that enhances their ability to do a good job. Right now we see a lot of variability in people’s opinions along that line and we still don’t fully understand what the factors are that result in that variety of opinion.

I tend to think it’s that we still have a somewhat coarse tool that needs to be refined before people say, “Aha, this just works the way I expect it to. It works like Google, or it works like my iPhone.” I don’t know if we’ll get there in five years, but I’m sure we’ll be a lot closer than we are today.

HIStalk Interviews Phyllis Gotlib

February 17, 2010 Interviews 1 Comment

Phyllis Gotlib is CEO and co-founder of iMDsoft.


Tell me about the company and your products.

iMDsoft was founded in 1996 after a few years of development. We started with an alpha site in Tel Aviv and had our beta site at Mass General and Brigham and Women’s in 1997. Once we got clearance on our products, we decided to move to Europe and also to validate our implementation methodology.

Our first product was for ICUs. We went to Europe and decided on four different languages and four different countries. We received rave reviews.

Our first commercial ICU installation was in Lausanne, Switzerland in 1999. We went into the Netherlands in Dutch, Norway in Norwegian, and of course to the UK in English. Since then, we grew all over Europe and came out with a new product in 2001 for the entire perioperative environment: pre-op, intra-op, and the PACU. In 2000, we formed a partnership with Fukuda Denshi, the second-largest medical device manufacturer in Japan. We went back to the US in 2002 and set up our headquarters in Needham, MA, where I spend most of my time.

Since then, we have close to 150 hospitals world-wide with more than 9,000 beds under license. We continue to grow beyond the walls of the ICU and OR as we expand outside of critical and acute care. We have a new product called MVgeneral that goes to the general floor.

We map the entire inpatient workflow in all these departments. Every type of ICU — all adult ICUs such as neuro, CCU, med-surg, NICUs, PICUs, and the entire perioperative environment, step-down, and general wards. We have supporting products that include MVmobile for ambulances and MVcentral, a tele-intensivist product, and others. All of our products share one database and provide a true continuum of care.

Most US healthcare IT vendors have customers outside the US, but most of their business is domestic. Is it an advantage or disadvantage to have a more balanced international footprint?

I see that definitely as a hedge. We started in Europe because, at the time, the R&D was in Tel Aviv and there was a blend of a lot of languages and people that came from the European countries to Tel Aviv. That was an easier way to start the company.

You can see similarities between territories. You can see similarities between the European market and Canadian market and between the UK market and Australia.

The US is different, but when we do user groups, the US customers are really happy to mingle with the European customers and vice versa. We believe in a sharing philosophy. Our US installed base is really high-visibility and very impressive, with Johns Hopkins, Mass General, Partners, Barnes-Jewish, Henry Ford, and so on. In Europe, we also have high-end academic hospitals, community hospitals, and smaller institutions.

They all like to mingle, to exchange protocols, and to share information. For us as a company, it’s definitely a hedge and allows us to lower the risks and to be able to answer the needs of the different regulations and initiatives in different countries.

You’ve described iMDsoft as a disruptive innovator, but I don’t know that many US healthcare CIOs are familiar with the company. Who are your competitors and what are your competitive advantages?

You will hear me quite often say that it depends on the segment and the territory. I would put them in buckets. The competition can be the old medical device companies like Philips and GE. Another bucket would include the bigger guys, like McKesson, Eclipsys, maybe Cerner. The others would be smaller, software-only companies like Picis. In Europe, in every country you can find a local vendor that is really specific.

You can differentiate the competition into OR competition, perioperative competition, and the ICU competition. But of course, I would tell you that we have very little competition [laughs].

Regarding differentiation, definitely I would talk about clinical data granularity. Secondly, I would say decision support. After that, our ability to customize — the flexibility of our products.

One of our fortes is interoperability. A good example is Barnes-Jewish Hospital. We are integrating and interfacing with eight different vendors. Giving you only US examples, at Lehigh Valley Hospital and Health Network, we have a full integration with, at the time, IDX Lastword CPOE, which became the GE product.

Another key differentiator for iMDsoft has always been our ability to impact not just the quality of care and clinical decision-making for our customers, but also to contribute meaningfully to their level of operational efficiency and resource deployment, and ultimately, to make a positive impact on the financial performance of their critical care department.

When I talk about customer impact, it can come from a number of different perspectives that cut across clinical quality, operational efficiency, and cost savings.

When I talk about clinical data granularity, every data item in our system is a user-defined and controlled parameter. Parameters are stored hierarchically in the database, which allows sophisticated relationships between them. They can be time-related or not, can be of any type, and carry attributes. For instance, a formula can be a parameter, a drug can be a parameter, a change in position can be a text parameter, and so on.

A good example is saline solution, where the granularity goes down into water, chloride, and sodium. Every time a user gives one cc of such a solution, you can see the trace elements in our system minute by minute. For instance, you can check the patient’s potassium minute by minute. These things are very important in critical care, where patients are not eating or drinking; they get intravenous or enteral nutrition, and all the volume that they get from drugs is documented as well.
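The hierarchical parameter model described above can be sketched in code. This is an illustrative toy, not iMDsoft’s actual schema; the class, field names, and the per-cc constituent figures for 0.9% saline are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Parameter:
    """A user-defined parameter; compound parameters decompose into constituents."""
    name: str
    unit: str
    # constituent name -> amount delivered per cc of the parent parameter
    constituents: dict = field(default_factory=dict)

    def decompose(self, volume_cc: float) -> dict:
        """Return constituent amounts for a given administered volume."""
        if not self.constituents:
            return {self.name: volume_cc}
        return {name: per_cc * volume_cc
                for name, per_cc in self.constituents.items()}

# 0.9% saline carries ~9 mg NaCl per cc: roughly 3.54 mg sodium, 5.46 mg chloride
saline = Parameter("normal saline", "cc",
                   constituents={"sodium_mg": 3.54, "chloride_mg": 5.46})

print(saline.decompose(100))  # constituents delivered by 100 cc of saline
```

The point of the hierarchy is that every administered volume is traceable at the constituent level, which is what makes minute-by-minute electrolyte tracking possible.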

A 2005 study I read described the use of MetaVision Event Manager to deliver alerts that are based on physiologic and order information in the ICU and the OR. What are the opportunities there?

It’s a huge opportunity. We hear that from all our customers. The Event Manager was endorsed by Harvard Medical School and by most of our hospitals. It’s a real-time decision support tool, a rules-based engine that provides alerts that can be clinical, administrative, or financial in nature. They can be delivered to the appropriate person and place as needed via screen, telephony, pager, and so on.

I can give you an example. First, we collect all the data. Once the granular data is in our database, you can then put rules on the data. You can write statements, like if-then statements.

One of our hospitals in the United States — I cannot say the name — conducted a study that showed that a certain generic anesthetic was as good as the brand name anesthetic for longer surgeries. The hospital gets reimbursed for the procedure at a set amount, paying for the anesthetic themselves. They did not have a reliable mechanism to remind anesthesiologists to use the generic drug in longer surgeries.

They programmed the alert to remind the anesthesiologist to consider switching to the generic if the surgery has already been more than X minutes. The statement was very easy. The alert took one day to produce, it took them a few days to test it, and in less than a week it was in production. Over a year, it saved them more than $500,000.
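The if-then rule described above can be sketched as follows. This is a toy illustration in the spirit of the statements Gotlib describes, not iMDsoft’s actual rule syntax; the threshold, field names, and drug labels are hypothetical.

```python
# Toy rules-based alert engine: each rule inspects the current case
# state and returns an alert message, or None if it does not fire.

def generic_anesthetic_rule(case):
    """Fire when a brand-name anesthetic has run past the time threshold."""
    THRESHOLD_MIN = 120  # hypothetical stand-in for the "X minutes" in the study
    if (case["anesthetic"] == "brand_name"
            and case["elapsed_min"] > THRESHOLD_MIN):
        return "Consider switching to the generic anesthetic."
    return None

def evaluate(rules, case):
    """Run every rule; collect fired alerts for routing to screen/pager/etc."""
    return [msg for rule in rules if (msg := rule(case)) is not None]

alerts = evaluate([generic_anesthetic_rule],
                  {"anesthetic": "brand_name", "elapsed_min": 150})
print(alerts)
```

Because every data item in the system is already a parameter, rules like this can be layered on any combination of clinical, administrative, or financial data without new data collection.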

That’s an example of ROI using the Event Manager, but since every data item is a parameter, you can also use it to drive clinical improvements. Another hospital, in the UK, managed to reduce its drug costs per patient from $197 to $149 just by increasing generic usage from 61% to 81% with MetaVision’s Event Manager.

I could go on and on with examples like that, but it’s actually using all the granularity, all the elements, all the parameters that we have in our very rich database and putting rules on top of them.

What about the use of reminder checklists and dashboards?

Our dashboard actually allows you to see the data in a global view, not only for the patient, but also for the unit. Along the way, we’ve also started implementing entire regions. We recently began implementations for an entire province in Canada, another in Australia, and also in Norway. Our dashboard provides a global view of a unit, a hospital, a region, or something larger like a network or province.

The checklist is something quite easy. It’s done all over the system. The entire system is rule-based and you can add alerts and mandatory fields. It’s really comprehensive and has all the functionality that is required to provide best practices and to give guidance.

Most of what I’ve written about iMDsoft involved the lawsuits with Cerner and Visicu over intellectual property involving remote monitoring technology. Did that turn out the way you hoped?

We are actually in the midst of our litigation. However, I can tell you that Visicu recently lost against Cerner for the same complaints and Cerner used our prior art to defend itself. So, I believe we are in very good shape.

A recent study, perhaps not very well done, concluded that remote ICU monitoring did not do much to improve outcomes or reduce costs. What was your reaction to that?

It was ambiguous. I’m never happy to see that the competition is doing a lousy job. If you look at the entire market and you see that we have only 10% penetration, we are beyond the early adopters. I need everyone to do a good job because if not, it will put up additional barriers. I know that we have ARRA, the stimulus, other regulations around the world helping us, but still, we need to do a good job.

So, there was something in my heart where I was glad to see that our competitors didn’t do a good job, but on the other hand, overall, that’s not the right thing.

It is interesting because, from our end, we have a study that shows that in our tele-intensivist program, a customer was able to reduce the mortality rate by 30% by using MVcentral in their remote ICU.

You have some of the best hospitals in the US as your customers. Is the US market key to your strategy and if so, how will you get the word out?

Absolutely. I think our customers are our best advocates. We are investing in enlarging our channel distributors in the US and I hope that by the end of 2010, we will be able to have a balance between the rest of the world and the US revenues.

Do you think the stimulus incentives will affect your business here?

I think hospitals in the US will have no choice. The government, the payors, and the regulatory agencies have all begun to link clinical performance to reimbursement. It’s a first in the modern history of medicine. US government initiatives, such as PQRI, the various pay-for-performance initiatives launched by large payors, and European government initiatives have all been in the headlines.

Elected officials see these initiatives as crucial to contain health costs and improve quality of care. We at iMDsoft definitely believe the recent trend will continue and the amount of reimbursement at risk for hospitals will grow.

We also see that clinical data management and protocol enforcement now have important financial repercussions, making clinical information systems for critical care an even higher priority. There is not one CIO that doesn’t have this on his radar. They just need to prioritize it, whether it will be in this year’s budget or next year’s, but it’s definitely on the radar.

The start that everyone was hoping for over the last 20 years is actually happening now, not so much because of the carrot but because of the stick, the penalties, and because it is impossible to manage so much data and information coming from so many different sources without a clinical information system.

Final thoughts?

We are excited about what we are doing. We have a vision and a passion here. We are in 21 countries, supporting 18 languages, and hope to expand. We would like to continue to be an innovation leader and to maintain the quality of our products and services as we continue to grow.
