
HIStalk Interviews Harjinder Sandhu, CEO, Saykara

October 12, 2020

Harjinder Sandhu, PhD, is founder and CEO of Saykara of Seattle, WA.

Tell me about yourself and the company.

I’ve been working on artificial intelligence in healthcare for about 20 years, starting before it was cool to call it artificial intelligence. I transitioned from a role as a professor of computer science into entrepreneurship. A friend and I co-founded a company doing speech recognition and natural language processing in healthcare. We sold that company to Nuance Communications and I spent several years as the VP and chief technologist in Nuance’s healthcare division.

I founded Saykara a few years ago with the idea that doctors should be able to focus on seeing patients and that we can build AI systems that can capture what they say and automatically enter the pertinent data into the chart. That’s what Kara, our virtual assistant, does.

Healthcare encounters involve a complex, two-way conversation with minimal guidelines or structure and no fixed length. What technology advances allow turning that conversation into encounter documentation whose accuracy is high enough to avoid manual cleanup afterward?

Two things. One is that we started out using a “human in the loop” model, which means that behind the AI is a person who will make sure that the system gets it right. Doctors get a good experience from Day One because the AI picks up a lot, but then humans not only help correct it, but also teach it.

The second thing we do is on the AI side. AI is advancing at a very rapid rate, and our goal is to get to a solution that is purely autonomous, without any human in the loop. We are doing that by teaching our system to recognize the specific clinical pathways that reflect what doctors are actually doing with their patients and to interpret along those pathways. That helps a lot in figuring out what the system should key in on in any given encounter.
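
As a loose illustration of that loop (a minimal sketch with hypothetical names, not Saykara’s actual pipeline), a human-in-the-loop system has roughly this shape: the model drafts a note, a reviewer corrects it, and the corrections become training data.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class DraftNote:
    encounter_id: str
    text: str          # note text drafted by the speech/NLP model
    confidence: float  # model's own estimate of its accuracy, 0 to 1

def review_note(draft: DraftNote,
                reviewer: Callable[[str], str],
                training_queue: List[Tuple[str, str]],
                autonomy_threshold: float = 1.01) -> str:
    """Route a machine-drafted note through a human reviewer.

    Corrections are queued as training examples so the model improves
    over time. Lowering `autonomy_threshold` below 1.0 (unreachable at
    the default) is one way the loop could eventually close on pathways
    where the model has earned full autonomy.
    """
    if draft.confidence >= autonomy_threshold:
        return draft.text  # fully autonomous path, no human involved
    corrected = reviewer(draft.text)
    if corrected != draft.text:
        # The disagreement is the teaching signal for the next model version.
        training_queue.append((draft.text, corrected))
    return corrected
```

Either way, the physician sees a finished, accurate note; only the routing behind the scenes changes as the model earns autonomy on specific pathways.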

How is that use of behind-the-scenes humans to correct and teach the system different from just hiring scribes?

In the short term, it may look very similar to the end user, where they get a clinical note or an order put into the EHR. In the longer term, as the system gets better and better, we can provide that same service at a much lower cost, but also go well beyond what a scribe would be able to do. Our system is learning to predict what’s happening in an encounter, to put specific nudges in front of physicians, and then along the way, we capture everything in the form of discrete data. We are able to populate and construct data in a way that is virtually impossible for people to do without a lot of effort and cost.

What does a typical patient encounter look like to a provider using your system?

There is no “one size fits all” across providers. Different providers use the system very differently. But a typical experience would be that during the encounter, the physician opens the Kara app on their iPhone. They walk in, turn it on to start listening, and then they just interact with their patient.

A lot of providers like doing what we call reflective summarization to make sure that the system captures the right things. They will speak, either during the encounter or afterward, to tell Kara, “Here are the key points that came up in this conversation, or the things I did in the physical exam or in the assessment plan.” They let the system key in on all of those things and make sure those are the core of what gets documented.

How does EHR integration work to get the information into the chart?

That varies a lot by EHR. Some EHRs are not geared towards capturing anything more than a blob of text as if it were from a clinical note. Others have granular APIs that allow you to take specific parts of what is being communicated and populate it, uploading diagnoses or other information that needs to go into registries. We find that the integration experience varies a lot, but we capture on our side as much detailed data as we can, then push into the EHR as much as the EHR is able to consume in the form of APIs.
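
As a concrete illustration of the granular-API case, consider an EHR that exposes a FHIR R4 endpoint. This is a hedged sketch against the public FHIR standard, not a description of Saykara’s actual integrations; the base URL and token are hypothetical. A narrative note can be uploaded as a DocumentReference:

```python
import base64
import requests

FHIR_BASE = "https://ehr.example.com/fhir"  # hypothetical FHIR R4 endpoint

def push_progress_note(patient_id: str, note_text: str, token: str) -> str:
    """Upload a narrative note as a FHIR DocumentReference; return its ID."""
    resource = {
        "resourceType": "DocumentReference",
        "status": "current",
        "type": {"coding": [{"system": "http://loinc.org",
                             "code": "11506-3",  # LOINC code for a progress note
                             "display": "Progress note"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "content": [{"attachment": {
            "contentType": "text/plain",
            "data": base64.b64encode(note_text.encode("utf-8")).decode("ascii"),
        }}],
    }
    resp = requests.post(f"{FHIR_BASE}/DocumentReference",
                         json=resource,
                         headers={"Authorization": f"Bearer {token}",
                                  "Content-Type": "application/fhir+json"})
    resp.raise_for_status()
    return resp.json()["id"]
```

Discrete data, such as a diagnosis bound for a registry, would follow the same pattern with a Condition resource instead of a text attachment, which is what makes granular APIs more useful than a single blob of text.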

What do users cite as the biggest benefit?

The biggest thing our physicians cite is that it eliminates pajama time, and that is what users most want. Physicians are spending hours in the evenings trying to close their charts. We eliminate that almost across the board for all of our users.

Physicians like the idea that whatever they’ve done in that encounter, they can rely on the system to create a very accurate rendition of it. Because we have humans behind the scenes helping the system and making sure it gets it right, physicians become accustomed to the system creating very accurate information. They can mentally offload what they are doing and move on to the next patient.

How long does it take from the first time a physician turns on the system until they feel that it is benefiting them?

Most of the time, it’s on the first day. A provider either types during the encounter, which draws their attention away from the patient, or they spend their evening trying to close that chart. Their first note on the first day they start using the system will be highly accurate. Providers literally tell us, “This changed my life on Day One,” largely because, all of a sudden, they weren’t sitting there typing during the encounter, and when they went home that evening, they didn’t have charts to do.

The value is very, very fast. And of course, behind the scenes, the AI is learning and getting better and more autonomous over time. That part takes time, but the immediate value for that physician is on Day One.

Having spent time at Nuance, how would you compare Kara to their ambient intelligence product?

Ultimately, we are trying to solve the same problem. The proof is in what is happening behind the scenes and how intelligent the systems are getting, because Nuance also uses human scribes behind the scenes. We started four years ago at Saykara trying to solve the hard NLP problems to get the systems to be fully autonomous. We are on the cusp of releasing models that will be fully autonomous for specific pathways. The real distinctions are coming in the near future.

Otherwise, doctors are oblivious to what happens behind the scenes. They just see a note that comes back to them.

We are training our system to do a lot more than clinical notes: clinical guidelines, coding, providing nudges, and predicting what is about to happen in that encounter. We are starting to put some of that in front of physicians, and you’ll start seeing those differences.

Since the clinician isn’t aware of how much of the final result was delivered by the AI or the scribe, is it the company rather than the user that will get the benefit of moving toward better-trained AI?

It’s a bit of both, actually. Certainly we benefit as the system becomes more autonomous, but there’s a huge benefit for the providers. I look forward to a point where they can see what the system is doing in real time, and we are starting to put some of those things in front of the physician. They can see guidelines and what information they need to capture during this particular encounter to cover it. Physicians are asking about those kinds of things.

The system is learning to interpret these encounters. We can teach it, for the subjective part, that when the patient says “shoulder pain,” it should consider what questions the physician would typically ask a patient about shoulder pain, or the kinds of responses that a patient might give. The system is gearing up to be able to communicate directly with the patient to collect the subjective information before the encounter begins, which will offload work from the physician. Ultimately, that subjective information is really the patient’s voice, and it’s coming from them anyway.
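
As a toy illustration of that idea (hypothetical complaints and questions, not Saykara’s models), the underlying structure might be a mapping from a recognized chief complaint to the subjective questions the system expects the conversation, or a pre-visit patient interview, to cover:

```python
# Hypothetical pathway table: chief complaint -> expected subjective questions.
SUBJECTIVE_PATHWAYS = {
    "shoulder pain": [
        "When did the pain start?",
        "Did it follow an injury or a specific activity?",
        "Does the pain radiate down the arm?",
        "What makes it better or worse?",
    ],
    "headache": [
        "How long have the headaches been occurring?",
        "Where is the pain located?",
        "Any visual changes, nausea, or light sensitivity?",
    ],
}

def expected_questions(transcript: str) -> list[str]:
    """Return the subjective questions implied by complaints in a transcript."""
    text = transcript.lower()
    return [question
            for complaint, questions in SUBJECTIVE_PATHWAYS.items()
            if complaint in text
            for question in questions]
```

A production system would recognize complaints with learned models rather than substring matching, but the shape is the same: recognize the pathway, then know what to listen for.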

Sometimes companies that offer a physician-targeted product struggle with creating a marketing and sales organization that can reach out to an endless number of practices to make sales. Who is your target customer and how will you reach them?

We get users across all tiers of the healthcare ecosystem, from large health systems all the way down to small group practices. I would say the sweet spot for us today is really large specialty groups. That’s where we find rapid uptake and a great deal of success. Within the large health systems, we find specific physician groups reaching out, particularly in primary care, for example, where burnout is a big issue. And then of course the small group practices.

From a marketing perspective, we’ve focused our efforts on reaching out with a message of, “We solve the problem of burnout.” A lot of the sales effort ends up being directed at the large specialty groups, but we get a lot of the health systems and the small groups coming along just because they feel that message and they want solutions for their physicians.

I appreciate your transparency in describing how humans are involved in your offering since some companies, especially those who yearn for a tech company valuation, market a proprietary black box that performs magic. Are companies trying too hard to get AI to do everything instead of accepting that it could be brought to market faster and less expensively by just shooting for 90% and letting humans lend a helping hand?

It depends on the area that AI is being applied in. When it comes to conversational AI, by which I mean listening to and interpreting conversation, that’s an extraordinarily difficult AI challenge. We are making pretty substantial strides in that right now. There are other areas where you can apply AI and the systems can do a pretty good job without needing any human help. But certainly in this space today, NLP is still in its infancy.

NLP has been around for a long time. I’ve been working on it for 20 years. But just in the last year, we’ve seen so many gains within our own system and across what’s happening in the industry outside of healthcare. I can see that over the next couple of years, a lot of these solutions, including ours, are going to be completely autonomous. But right now, the human-in-the-loop approach is the right fit for this space. For other industries and other applications of AI, it may or may not be. You have to pick and choose the strategy for what you’re trying to do.

Where does the technology and the company go over the next 3-5 years?

I often use the analogy of driverless vehicles. Ten years ago, people thought autonomous vehicles were a distant future, and nobody gave them much thought. Suddenly we wake up one day and there are autonomous vehicles on the road. They have drivers behind the wheel, but the vehicles are starting to drive themselves. Now you can go a pretty long distance without actually touching the wheel.

I look at AI in healthcare in that same kind of way, where we have the human in the loop. The AI is learning from what those humans behind the scenes are doing, but what is more interesting is that it is learning from what the doctors themselves are doing. If you put a camera on a doctor’s shoulder, connect it to a really intelligent system, and tell it to watch what the doctor is doing — how they’re interacting with the patient, what kinds of questions they are asking, what they do in their physical exam — and connect this to the EHR whose data the physician is using to make their decisions, you are building, over the long term, an intelligent system that can actually understand medicine. 

The scribing part of what we’re doing is just the tip of the iceberg. The more important and more interesting trend is that, over the next 3-5 years, these systems will actually start understanding the process of providing care to patients. We will be able to supplement and assist doctors in ways that we haven’t really thought about today. That’s the part that I get excited about.

Do you have any final thoughts?

We are extremely early in the AI revolution in healthcare. Really, it hasn’t been a revolution yet. We are augmenting processes in healthcare, making them more efficient, and making physicians happy. Not just us, but other companies in this space as well. But what we’ve seen with AI technology in other industries is that it reaches an inflection point, where the AI begins evolving much faster and starts being able to do more in a short span of time than people would have imagined possible. I think we are almost at that inflection point in a lot of processes within healthcare. We will see, over the next couple of years, incredible disruption to the business of healthcare, and in a good way.

A core part of that is natural language processing. So much of healthcare, so much of medicine, is communicated by voice. When you can do a really great job of interpreting and understanding what’s being communicated, including what never actually makes it into the medical record or doesn’t make it in there in a systematic, discrete way, you’re able to understand how to communicate with doctors on their own terms. Not in the way that you as an interface designer want doctors to interface with your system, but the way doctors would naturally interact with other doctors or with a patient. You can interact with them in those terms, and you can interact with patients on their own terms as well. That revolution is going to create a new platform and new capabilities that we can only start dreaming of today.


