
HIStalk Interviews Charles Corfield, CEO, nVoq

July 24, 2017

Charles Corfield is president and CEO of nVoq of Boulder, CO.


Tell me about yourself and about the company.

I grew up in England. I came over to America as a graduate student, and like many immigrants, I stayed. I have been in the high tech world for the last few decades, doing a mixture of early-stage companies and then later-stage buyouts and spinouts. My current day job is CEO of a company called nVoq in Boulder, Colorado.

I forgot to ask you last time we talked – did you ever finish your PhD in astrophysics?

No, I did not. I was starting to write the dissertation and then I got distracted by startup land. The irony is that some years ago, on a visit to Cambridge University in England, the then-department chairman said that if I ever got bored with the commercial sector, he would have no problem finding a slot for me as a post-doc there in spite of the missing dissertation. Maybe we could find it behind a hot water pipe or something like that. [laughs]

I’ve tried and failed previously to get you to admit that you are the father of Siri, so I’ll ask you this question instead. Are you surprised that speech recognition has reached the consumer appliance level?

Not really. Although speech recognition per se has been around for a few decades, it is nice to see that it has matured to the point where companies are willing to take the risk of putting it into a consumer environment, where of course you have no idea what’s going to come at you. It’s nice to see that.

By and large, it works. It also affords the consumers of it a certain amount of humorous surprises at some of the results they get, which are no secret to people who’ve been messing with speech recognition. But it’s great to see how far it’s come.

It is also interesting to see how the thought leadership in speech recognition has very much been picked up by the major platform vendors such as Microsoft, Google, Apple, Baidu, and others — even Facebook, which we have seen publishing papers on speech recognition. It has definitely come a long way.

Which consumer speech recognition technologies do you personally use?

Actually, very few. For most of my life as a consumer, I’m extremely old-fashioned. I try to avoid talking to these systems because I’m usually very transactionally minded. Sorry to disappoint you by not taking sides. [laughs]

That just added to your legend. Where do you see speech recognition going next, especially as new human interface technologies such as virtual reality ramp up?

I think the ability to do some command and control is still largely an unworked area in the enterprise sector. If we take your example of virtual reality, you can imagine that surgeons and other healthcare professionals will find themselves in this sort of virtual reality zone. It may turn out to be an interesting hands-free zone. The ability to speak to the environment around you may be a more natural interface. This will be one where we’ll see a lot of experimentation.

We’ve also seen some speculation out there about whether, say in a hospital environment, you might find something like an Alexa device come into the point of care, with physicians able to interact with it in some fashion. We might see somebody like IBM, which has been working hard on Watson, come up with something like that.

What action items or analysis did nVoq undertake following Nuance’s malware-caused extended cloud services outage?

If I can step back to before that incident, malware has been around quite a long time. As a company, in terms of our information security practices, we’ve liked the discipline of the PCI Data Security Standard — PCI stands for Payment Card Industry. Before healthcare was worried about HIPAA, HITECH, and so forth, the payment card industry was very worried about fraud. It evolved a set of 12 practices, and you can get yourself audited for your adherence to them. As a company, we’ve had PCI audits performed on us for years.

As to the more current outage at Nuance, in terms of lessons people might want to take from that, it is important to stay up to date with patches that are released by the system vendor, such as Microsoft and others. It is quite possible that they were behind on that and somebody clicked on the wrong thing in an email and then, what do you know, you’re having a very bad day at the office.

From our perspective, you do want to stay up to date on whatever the latest patches are being released by people. You also want to have what you might call defense in depth. You should always operate from the presumption that somebody, somewhere is going to click on something in an email and you’re going to be infected by something. What are the obstacles that you’re putting in the way of that malware so that it can’t propagate and wreak the havoc that we’ve seen in that incident?

We do things like having, if you will, air gaps between systems, segregating networks, priming systems to shut down or cut off access immediately if they detect something fishy, and various other what you might call low-tech methods. All designed to make it much harder for malware to spread and wreak havoc.

Defense against malware is not necessarily a matter of becoming an expert in the rocket science or the black arts of whatever these hackers get up to. A lot of it is just a discipline around daily housekeeping. For readers of your column, start with the simple things. Don’t over-engineer. Consider the social engineering ways by which things come in. The best way of getting malware into an organization is through an email which looks like it comes from a highly trusted individual about an extremely plausible subject. The email that seems totally innocuous — that’s the one you’re going to click on. Then you’re going to have a really bad day.

The other challenge for Nuance is trying to keep millions of customers updated about their downtime. Any lessons learned there?

Goodness, that’s a large question there in terms of the impact on the users. [laughs] I was a little surprised that they didn’t seem to have fail-over systems. In other words, if you have a major outage in one data center, you should be able to continue providing service for the entire customer base from isolated, separate data centers. That was a little surprising.

In terms of communication, an additional problem they faced was that their own email system was infected. There was a risk there that their customers were actually being sent emails with malware in them as well, which is a difficult problem for them to have.

But the take-home point for everyone else is that you need redundancy in systems so that, even if you have to shut down your primary production site, you can continue serving your customers from backup centers without loss of service.

Are clinicians interested in going beyond dictation to use their voices to navigate systems?

Oh, yes. If you take users of a laboratory information system like pathologists, there’s a great case there for when they are dealing with sample specimens and what have you, they really want to operate hands-free. What their hands have been on, they don’t want to get that anywhere near their keyboards. [laughs] There’s a reason it’s called the grossing station. That’s a great example of voice-powered command and control.

We also find that there’s a lot of usage in things which are not necessarily voice-based, though you can use your voice to drive them. We’ve just found that, with EHRs and other similarly complicated systems, the very lightweight automations we bring — sometimes people call them robotic process automations — are a real life-saver. In a recent customer survey we did, something like two-thirds of the respondents said we were saving them an hour or two a day. That’s not just speech — that’s around the automations.

Everybody’s talking about artificial intelligence and now we’ve got this idea of chatbots having some application in healthcare. Where do you think that part of human-computer interaction is going?

I think in general, we’re at something like the top of the Gartner hype curve on artificial intelligence. It’s a very attractive narrative. The rise of the GPU — the graphics processing unit, the prime case being Nvidia — has been an enormous success at the moment. There’s a lot there for artificial intelligence to tackle.

But if I might put a pin in the bubble here, these neural networks are essentially nothing other than brute-force programming. You just have a computer carry out a zillion steps, throwing everything you can at a problem. It’s a very tedious, iterative process. It’s not quite as glamorous or rocket-science as you might think.

That being said, there are clearly problems which lend themselves to just throwing a lot of computing power at them. You can get some pretty good results. You’ve seen a lot of progress in things like image recognition and classification. We ourselves are using neural nets as the basis of speech recognition. But I think some of the more exotic applications people have talked about will be a while coming, because there’s still a long way for these neural nets to go before they can really cover the gamut of human behavior and cultural assumptions.

Remember, the human brain has typically been on the planet for a few decades, busy acquiring experience, whereas the neural net is something we’re trying to train up in a matter of days or weeks. It has nothing like the range of experience that a human being has. A child by the age of three or four has already heard tens of millions of words in all sorts of different contexts. That child, in some sense, is light years ahead of the best speech recognition neural net.

It’s a very promising area and we’ll see a lot of good things come out of it, but I would urge people not to get too carried away by the hype. Because after the hype comes the trough of reality.

When we spoke three years ago, you predicted that the most attractive health IT investment would be workflow tools running on top of EHRs. Did that pan out and what do you see happening next?

Yes, I think that is very much panning out. The big iron has gone in, and now the next question is, how are you going to get your return on it? We saw this with enterprise resource planning software and CRM software. There is a lot of opportunity for innovation here, to really hone particular work cycles or delivery methodology. We’ve really just scratched the surface there in healthcare.

You’re a pretty fascinating guy. You’re a centi-millionaire, you’ve climbed Mount Everest, you run 100-mile races, and you’ve started tech companies that developed technology that is used all over the world. You also bake your own bread and study Yiddish. What are your lessons learned on living a full life?

[laughs] Always be curious about things. Never lose that sense of curiosity. When I look at new areas to try my hand at, the most important thing is to get stuck. It’s when you get stuck that you make progress.
