Readers Write: HTI-1: A Step Towards Demystifying AI in Healthcare
By Ryan Parker
Ryan Parker is a consultant and healthcare informatics graduate student.
The integration of artificial intelligence (AI) in healthcare has been a double-edged sword, offering the potential for revolutionary advancements in patient care while simultaneously posing significant challenges related to data transparency, algorithmic bias, and the elusive nature of AI decision-making processes. The latest regulatory effort from the Office of the National Coordinator for Health Information Technology (ONC), the Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing final rule (HTI-1), attempts to address these challenges head-on. However, the question remains: Is it enough to solve AI’s most pressing issues in healthcare?
The challenges of implementing AI in healthcare extend beyond the technical sphere, often rooted in what I call “the human problem with artificial intelligence.” This encompasses issues like incomplete or biased datasets, unconscious biases leading to unintended outcomes, and the notorious “black box” problem where the reasoning behind AI decisions remains opaque. While the HTI-1 final rule aims to tackle these issues, its scope and impact merit a closer examination.
A significant hurdle in AI implementation is the siloed nature of patient data across the US healthcare system. This fragmentation hampers the development of AI models by limiting access to the comprehensive datasets necessary for training. Although HTI-1 does not directly address data silos, the final rule aligns with broader ONC initiatives like the Trusted Exchange Framework and Common Agreement (TEFCA), which aim to foster a more interconnected health information landscape.
The problem of algorithmic bias and unintended outcomes, often referred to as the “alignment problem,” is acknowledged only tangentially in HTI-1. While the rule doesn’t mandate specific measures to eradicate biases, it underscores the importance of transparency and accountability in AI development and deployment. This approach suggests a recognition of the systemic nature of biases within AI algorithms, but it leaves the responsibility for addressing these issues largely in the hands of developers and implementers.
Perhaps the most significant contribution of HTI-1 is its attempt to illuminate the black box of AI decision-making. By identifying 31 source attributes that must be accessible to end users, ranging from input variables and the purpose of the intervention to external validation processes, the ONC aims to increase transparency. This initiative is crucial, as studies have shown that healthcare providers are often reluctant to trust AI systems that lack explainability, regardless of the potential benefits.
The emphasis on transparency aligns with the sentiment expressed by Christian (2020), who noted that “the most powerful models on the whole are the least intelligible,” and Ehsani (2022), who highlighted the significant risk perceived by healthcare providers when faced with unexplainable systems.
It’s important to note that HTI-1 does not create a requirement for health systems to implement any specific technology related to decision support interventions (DSIs). Instead, it sets a framework for how AI should be integrated and evaluated within certified healthcare solutions. This approach allows for flexibility and innovation but also places the onus on health systems and developers to navigate the complexities of AI integration responsibly.
As Borgstadt et al. (2022) aptly put it, the implementation of machine learning algorithms supporting clinician workflow can enhance the quality of care and the provider experience, ultimately leading to improved patient outcomes. However, the journey toward fully harnessing AI’s potential in healthcare is fraught with challenges that require ongoing attention, intention, and effort. At the end of the day, HTI-1 offers healthcare providers a new tool, and only they can determine its real impact.