"Still, there’s often confusion about who is caring for the patient ... " Playing off of Jimmy the Greek's comment,…
Filling the Healthcare Data Glass: The Glass Doesn’t Need to Stay Half Empty
By Alex MacLeod
Alex MacLeod is director of healthcare commercial initiatives for InterSystems of Cambridge, MA.
In recent years, there has been a lot of talk about the unfulfilled promises of artificial intelligence (AI) in healthcare and concerns about how to effectively incorporate it into practice and realize immediate value. There is a real “glass half empty” mentality at play due to false starts and over-ambitious expectations for AI adoption and commercialization. But that doesn’t need to, and shouldn’t, be the case.
Google’s partnership with hospitals to collaborate on algorithm development using patient records is a strong sign of healthcare AI’s imminent proliferation. Gone is the barrier of highly fragmented patient data. This is a significant market shift, and other giants in tech and healthcare will follow Google’s lead. The question now is, what can and should the healthcare IT industry do to prepare? We will answer that by looking at three core areas – data, patterns, and areas of caution.
AI in healthcare has seen strong growth in recent years, but the meaningful application of FDA-approved AI products and the widespread use of data in decision-making have lagged, according to a recent study published by The Medical Futurist Institute. There have been major recent advances in sensor technology, allowing for a broad range of devices that help inform patients about their health or fitness and warn about risks. The sensors generate raw data, but the interpretation of that data depends on AI analysis, which hasn’t developed at the same rapid pace.
IT departments, payers, providers, and patients are overwhelmed with the high volume of data generated on a daily basis and need to better articulate their end goal for its use. To do so, they need to pay close attention to their current processes and determine what can be done differently and what needs to change in order to be able to analyze data and apply it to future decisions.
The biggest questions those in healthcare face in regard to health information are:
- What do we do with all this data?
- What is most important to analyze?
- How can it be made actionable? (e.g., can it be used to support regulatory compliance?)
To answer those questions, we need to start by understanding what the data represents and asking a few more questions. Is the data set composed of lab results, physician-collected observations, or patient-submitted data? Why was it generated and collected in the first place?
The answers are typically more straightforward in other industries than in healthcare. That’s why it is important to take a close look at the data and identify patterns and similarities. Analysis in healthcare AI also differs from that behind other consumer-facing algorithms.
Healthcare AI works with less algorithm-friendly base data than, for example, social media or online shopping. Healthcare algorithms must handle complicated inputs such as clinical notes, medical imaging, and sensor readings. Outcomes in non-healthcare AI settings are relatively well defined, most commonly in terms of attention or purchase. In healthcare, outcomes have time and severity dimensions, on top of the potential for confounding with other effects, not all of which can be stratified through raw statistics.
Current effective applications of AI in healthcare include the use of ML tools in triage and administration. In triage, for example, AI is effective because it adds nuance to a health system’s basic risk scoring, identifying patients who need immediate attention or who require higher-acuity resources and pathways.
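As a minimal sketch of the idea, consider a rule-based triage score refined by a learned adjustment. The vital-sign thresholds, weights, and features below are illustrative assumptions, not a clinically validated scoring system:

```python
import math

def baseline_risk_score(heart_rate, resp_rate, spo2):
    """Simple rule-based triage score: one point per abnormal vital.

    Thresholds are hypothetical; real early-warning scores use
    clinically validated bands.
    """
    score = 0
    if heart_rate > 110 or heart_rate < 50:
        score += 1
    if resp_rate > 24:
        score += 1
    if spo2 < 92:
        score += 1
    return score

# Illustrative coefficients; in practice these would be learned
# from historical outcomes data.
WEIGHTS = {"bias": -3.0, "score": 1.2, "age": 0.03}

def refined_risk(heart_rate, resp_rate, spo2, age):
    """Logistic adjustment that nuances the rule-based score with age."""
    s = baseline_risk_score(heart_rate, resp_rate, spo2)
    z = WEIGHTS["bias"] + WEIGHTS["score"] * s + WEIGHTS["age"] * age
    return 1.0 / (1.0 + math.exp(-z))  # risk estimate in (0, 1)

# Two patients with identical vitals but different ages now receive
# different risk estimates, unlike under the flat rule-based score.
print(refined_risk(120, 26, 90, 80) > refined_risk(120, 26, 90, 30))  # True
```

The point of the sketch is the division of labor: simple rules capture clinical intuition, while the learned layer differentiates among patients the rules treat identically.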
That said, patients must consent to their data being used in healthcare AI algorithms, and to provide value, the data must be made actionable. It must be clean, comprehensive, and normalized, with no duplicate records, formatting errors, incorrect information, or mismatched terminology. This gives those analyzing the data complete confidence in how and why it was curated.
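A minimal sketch of that cleaning step might look like the following. The field names and terminology map are illustrative assumptions; real pipelines would map onto standard vocabularies such as SNOMED CT or ICD-10:

```python
# Map free-text condition strings onto a shared vocabulary
# (illustrative entries only).
TERM_MAP = {
    "htn": "hypertension",
    "high blood pressure": "hypertension",
    "dm2": "type 2 diabetes",
    "t2dm": "type 2 diabetes",
}

def normalize_record(rec):
    """Fix formatting and unify terminology for one record."""
    cond = rec["condition"].strip().lower()
    return {
        "patient_id": rec["patient_id"].strip().upper(),
        "condition": TERM_MAP.get(cond, cond),
    }

def deduplicate(records):
    """Keep one copy of each normalized (patient, condition) pair."""
    seen, clean = set(), []
    for rec in map(normalize_record, records):
        key = (rec["patient_id"], rec["condition"])
        if key not in seen:
            seen.add(key)
            clean.append(rec)
    return clean

raw = [
    {"patient_id": "a123 ", "condition": "HTN"},
    {"patient_id": "A123", "condition": "high blood pressure"},
    {"patient_id": "B456", "condition": "T2DM"},
]
# The first two rows are the same patient and condition written two
# ways; normalization lets deduplication collapse them into one.
print(deduplicate(raw))
```

Without the normalization pass, the duplicate would slip through, which is exactly the kind of silent error that undermines confidence in downstream analysis.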
Collecting data always introduces the risk of the information being “repurposed,” a possibility spotlighted when fitness tracking app Strava released a dataset of 3 trillion GPS points that inadvertently exposed US military bases in Afghanistan. Modern bots, and to some extent even legitimate social media marketing tools, are making efficient use of analytics and AI to game platforms’ algorithms in order to attract more views, clicks, and likes. But when such technology ends up in the wrong hands, the focus may shift to spreading misinformation rather than the intended use.
As with most technology, discretion is key. Collect and analyze only the minimum necessary. Don’t invite scrutiny over private data or enable access to it. Remain diligent in your data practices.
It’s understandable why people see the glass as half empty, but we have reached an inflection point in healthcare AI, a point at which we can add water to the glass.
To add to the glass and fully benefit from the anticipated results, we should embrace incoming regulation and think hard about self-regulation measures. Healthcare IT practitioners should closely monitor how laws and oversight adapt in real time, as we have seen with the FDA Digital Health Innovation Action Plan. As Google’s big step forward in healthcare AI development signals a new level of digitization of health, we can expect changing attitudes towards healthcare AI, including an uptick in trustworthiness and increasing differentiation from other categories of consumer AI.
AI in healthcare has strong potential if we harness it correctly. In the right scenarios, AI augments the work of healthcare providers rather than replacing them, as long as we maintain a little bit of human intelligence to complement the artificial.