I had occasion recently to talk with a personal liability attorney, fortunately just socially and not professionally. He had some questions for me about the role of artificial intelligence in healthcare. I was able to point him towards a recent editorial in the Journal of the American Medical Association.
The article has a nice summary of the concerns that many in practice have about AI: communicating recommendations without the underlying rationale; poor training data sets used in the development process; and failure to reach an accurate result or recommendation. The JAMA article notes that case law on AI-related liability is lacking, but existing law can be extrapolated to cover these situations.
The authors’ examples support the use of AI as an adjunct to the existing decision-making process in order to prevent additional liability. However, as AI becomes ingrained in the standard of care, physicians may need to place more trust in AI systems at the point of care to avoid the opposite error of underutilizing technology that could benefit patients. It’s a complicated equation, for sure.
The VA recently announced planned steps to increase data sharing with non-VA providers using the Veterans Health Information Exchange. They’re shifting the current opt-in protocol to one where opt-out is the norm, so patients no longer have to provide a written release for the VA to share their data electronically. A VA quote in one of the articles I read states that community providers and organizations must have partnership agreements and be part of the VA’s trusted network to receive VA health information. I hope they meant that you have to be part of the network to receive information electronically — unless the VA isn’t covered by HIPAA, which allows providers to share information for Treatment, Payment, and Operations without a specific release.
The HIE plans to share information including: problem list, allergies, medications, vital signs, immunizations, laboratory reports, discharge summaries, medical history, records of physicals, procedure results such as radiology reports, and progress notes. Veterans who don’t want their data shared can still opt out, but they will have to be either all in or all out — previous mechanisms that allowed some data types to be shared but not others will no longer be available.
Speaking of veterans, telehealth middleware provider Medici has launched “Operation 11/11” to provide no-cost virtual consults to all US veterans on Veterans Day, November 11. Proof of military service is required and participants can pre-register for services from 8 a.m. to 8 p.m. in their time zone on November 11.
Medici is welcoming four military advisors for the initiative and has also partnered with 2nd.MD to provide virtual second opinions for veterans with complex conditions. Medici has an interesting model where providers pay to be on the platform and set their own rates for virtual visits. I can imagine it might be compelling for independent physicians, but I struggle to see how it plays for the majority of physicians who are in employed situations.
I was intrigued to hear about Black + Decker’s new automated medication management and home health care assistant device, Pria (first covered on HIStalk nearly a year ago). It’s the first foray into healthcare from the people who brought us the Dustbuster. The voice-activated device tracks and schedules up to 28 medication doses, with reminders and timely dispensing. It lets patients connect with family members or caregivers through a built-in camera for video calls, and can also deliver reminders for drinking water or other key health-related activities. The product is pricey at $600 plus a $10 monthly subscription.
I recently became aware of a club I have no desire to be a member of: telehealth providers who have licenses in all 50 states. Becoming licensed in a handful of states is enough work, so I can’t imagine wanting to have dozens of applications in process. A CNBC piece profiles a couple of telehealth providers who advocate for the approach as a way to treat patients more effectively, particularly those in underserved areas.
Data from the Federation of State Medical Boards indicates the club is pretty small, with only 14 physicians licensed everywhere as of 2018 data, up from six in 2016. The number will likely be higher for 2020 given the overall growth in telehealth. One interviewee notes the cost of procuring 50 licenses is around $90,000, and there are annual fees to maintain them. If providers ever surrender a license, there’s also a process to explain that in future license renewals in other states, so if you’re going to do it, you had better be ready to maintain it. I’ve found telehealth compensation for physicians to be lower than pay rates in brick-and-mortar situations. Unless you have the temperament to conduct, complete, and document visits every couple of minutes, I don’t see a lot of physicians opting for this type of practice.
An interesting potential use of artificial intelligence was detailed this week in The Wall Street Journal: prediction of marital arguments. Engineers and psychologists are using speech patterns, physiological data, and acoustic/linguistic information to detect potential conflict. One described use case is sending a text message to a highly stressed individual, warning them of an imminent conflict so they can take action.
The original 2017 study followed 19 Los Angeles couples and tracked data such as heart rate, perspiration, and activity levels. A phone app prompted them to document hourly reports on their feelings and also captured a three-minute recording every 12 minutes, from which researchers analyzed speech content, pitch, and frequency. Researchers were able to detect conflict with nearly 80% accuracy. The original data was gathered during a one-day period, which is a significant limitation, along with the small sample size.
A more recent investigation by the same researchers looked at 87 couples, using speed of speech and intonation to detect conflict. The research sounds promising. I hope they consider the next logical investigation, which would be parent-teenager interactions. I’m sure that would be a target-rich environment for conflict identification. Or, we could install such systems in healthcare IT conference rooms across the country – certainly there’s some conflict there!
What do you think about AI identification of conflict? Leave a comment or email me.
Email Dr. Jayne.