
Readers Write: Lessons from the ChatGPT Health Debate

February 23, 2026

By Robert Stewart

Robert Stewart is CTO of Arbital Health.


A recent column by Geoffrey Fowler in The Washington Post that describes his disappointing experience with ChatGPT Health sparked discussion in the health IT community. While many remain optimistic about the long-term potential of platforms such as ChatGPT Health and Claude for Healthcare, Fowler’s piece highlights issues that healthcare leaders, clinicians, and technologists should examine carefully.

Variability and inaccuracy are not unique to large language model (LLM)-based systems. Many clinical diagnostics have known false-positive rates, and repeat testing is routine when results are unexpected. Clinicians themselves may reach different conclusions when presented with the same clinical information months later. Medicine has always operated within a probabilistic framework.

What is different with LLM-driven systems is their non-deterministic behavior when given the same input repeatedly. Identical prompts can generate materially different responses. Fowler demonstrated this when ChatGPT assigned his cardiac health scores ranging from a B to an F using the same underlying data. That level of variability can cause confusion or anxiety when applied to personal health interpretation.

Many consumer health AI tools are built on retrieval-augmented generation (RAG) architectures, in which the model is grounded using user-specific information such as medical records or wearable device data. Even when anchored to structured inputs, however, the LLM’s narrative interpretation can still vary, reinforcing the need for clinician oversight and appropriate guardrails when deploying these tools in consumer health settings.
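The grounding step in a RAG pipeline can be sketched in a few lines. This is a minimal illustration only: it assumes a toy word-overlap retriever in place of a real embedding index, leaves the actual LLM call out entirely, and the record strings and function names are hypothetical.

```python
# Minimal RAG grounding sketch: retrieve user-specific records, then build a
# prompt that anchors the model to them. Toy retriever; no real LLM is called.

def retrieve(query: str, records: list[str], k: int = 2) -> list[str]:
    """Rank records by naive word overlap with the query (stand-in for embeddings)."""
    q = set(query.lower().split())
    scored = sorted(records, key=lambda r: len(q & set(r.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model: retrieved records become the prompt's context block."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only these records:\n{ctx}\n\nQuestion: {query}"

# Hypothetical patient data (medical records plus wearable output).
records = [
    "2024-03 lipid panel: LDL 160 mg/dL, HDL 42 mg/dL",
    "2024-05 wearable summary: resting heart rate 58 bpm",
    "2023-11 visit note: patient reports seasonal allergies",
]
prompt = build_prompt("What was my LDL cholesterol?",
                      retrieve("LDL cholesterol lipid", records))
```

Even with the prompt anchored this tightly, the model's narrative interpretation of the retrieved records can still vary from run to run, which is exactly why the oversight described above matters.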

It’s also important to recognize the potential psychological impact of these tools. Researchers such as Eric Topol caution against indiscriminate screening of asymptomatic individuals because it often produces “incidentalomas,” findings that lead to unnecessary follow-up testing or treatment without improving outcomes. Consumer AI health scoring systems risk amplifying this phenomenon by continuously surfacing probabilistic interpretations in the absence of appropriate clinical context.

Wearable Data Challenges

Wearable device data introduces another layer of complexity. Anyone who works with longitudinal wearable datasets understands that the signal-to-noise ratio is inconsistent. Devices are removed for charging, replaced every few years, or switched across vendors that have different calibration baselines. Environmental and behavioral factors such as travel, altitude changes, illness, stress, or sleep disruption can produce statistically significant physiological changes that an AI system may misinterpret without broader context.

Jessilyn Dunn, PhD, and her lab at Duke University have conducted extensive research that uses machine learning and statistics to extract valuable insights from consumer wearables, but the work remains challenging. Even highly targeted machine learning applications, such as arrhythmia detection platforms developed by companies like AliveCor, still operate with non-trivial false-positive rates. Wrapping a general-purpose LLM around wearable data without similarly rigorous modeling layers is unlikely to deliver clinically reliable outputs.

Security and Privacy Considerations

As consumer AI health tools evolve, security becomes increasingly important. Anyone who uses ChatGPT, particularly those who are sharing sensitive health information, should enable multi-factor authentication (MFA), which is one of the most effective controls for reducing account compromise risk.

Users should also recognize an important regulatory distinction. Information that is entered into consumer AI services is generally not protected under HIPAA. OpenAI’s enterprise offering, ChatGPT for Healthcare, is designed for HIPAA-covered environments and supports Business Associate Agreements (BAAs), but consumer versions operate under different legal frameworks.

The Takeaway for Health IT Leaders

The lesson from Fowler’s experience is not that consumer health AI lacks value, but that context, governance, and clinical integration matter. Non-deterministic systems that interpret noisy consumer data can easily generate variable outputs that users may misunderstand as clinical conclusions rather than probabilistic insights.

For health systems, payers, and digital health innovators, the near-term opportunity lies in combining LLM interfaces with validated predictive models, strong clinical workflow integration, and transparent communication about uncertainty. Without those guardrails, even well-intentioned consumer health AI tools risk creating confusion rather than clarity.

Readers Write: Doing Everything For the Patient, Not To the Patient

February 23, 2026

By Nassib Chamoun

Nassib Chamoun, MS is founder, president, and CEO of Health Data Analytics Institute.


“Do as much as possible for the patient and as little as possible to the patient.”

That single sentence, written by Bernard Lown, MD in “The Lost Art of Healing,” should serve as a universal guide to thinking about medicine, caregiving, and what it truly means to heal. Dr. Lown was my mentor beginning in my early 20s and remained a close friend until his death in 2021 at age 99. He was decades ahead of his time. He believed that medicine should integrate scientific rigor with moral imagination, and that clinical excellence without compassion is incomplete care.

Today, his words feel less like a reflection and more like a challenge. Our population is aging rapidly. Older adults are the fastest-growing consumers of healthcare services.

As more patients approach the later stages of life, the central question facing clinicians, health systems, and policymakers is not whether we can do more, but whether doing more truly serves the patient. Increasingly, the evidence suggests that quality of life, not simply quantity of life, must be the defining outcome.

This is not a new conversation. In 1974, Balfour Mount, MD, who is widely regarded as the father of palliative care in North America, established the first hospital-based palliative care unit at Montreal’s Royal Victoria Hospital. Since then, the field has grown steadily. Decades of research demonstrate improvements in symptom control, patient and family satisfaction, alignment of care with patient goals, and, in many cases, lower healthcare utilization and costs.

More recently, the World Health Organization issued a call to action urging health systems to expand palliative care access, not only for humanitarian reasons, but also as a sustainable response to the strain on our healthcare resources.

Organizations such as the Center to Advance Palliative Care (CAPC) have worked to standardize best practices and train clinicians to deliver high-quality, interdisciplinary palliative care across settings. Leading physician researchers and ethicists have published extensively in peer-reviewed journals, academic texts, and mainstream media.

Despite this robust evidence base, many patients and families still experience end-of-life care as a stark binary: aggressive inpatient interventions on one side, or hospice and “giving up” on the other. Why does this false choice persist?

For me, this question is no longer theoretical. It is deeply personal. As my parents age, I have watched them navigate serious illness, both at home and in the hospital. Again and again, I have seen a system that is reflexively oriented toward intervention — more procedures, more monitoring, and more escalation.

The intent is usually good. But too often the outcome is suffering, including physical discomfort, emotional distress, and a loss of agency at precisely the moment when patients need it most. Where is palliative care in these situations?

End-of-life care should not be an either-or proposition. It should not require patients to choose between life-prolonging treatment that may diminish quality of life or dying at home without support.

Palliative care belongs alongside disease-directed treatment, especially during hospitalizations, where it can provide expert symptom management, clarify goals of care, support families, and guide thoughtful transitions home when appropriate.

I have seen the power of this model first hand. Palliative-focused hospitalizations can be transformative, not only for patients who experience relief from pain and fear, but also for caregivers who gain reassurance, guidance, and partnership. This approach preserves dignity, respects patient values, expands hospital capacity and access, and makes more responsible use of limited healthcare resources. Most importantly, it restores humanity to care.

For me, the conclusion is clear. When possible, our loved ones should not die in hospitals. They also should not have to forgo care, comfort, or hope.

To palliative care clinicians, healthcare leaders, policymakers, advocates, and anyone who has walked this path with someone they love, let us build a healthcare system that truly does everything for the patient, not to the patient. Compassion and evidence are not competing priorities. Together, they form the highest standard of care.

Readers Write: What a Modern Application Managed Services Model Should Deliver

February 23, 2026

By Scott Gildea

Scott Gildea, MBA is EVP of client delivery for Optimum Healthcare IT.


For years, application managed services in healthcare has been treated as a singular staffing solution. When teams were short-handed or roles went unfilled, organizations added overseas resources to keep systems running. That approach worked until the environment changed.

Today’s healthcare landscape is more complex than ever. EHRs, ERPs, and enterprise platforms are deeply connected to patient care, revenue, and operations. Downtime is no longer just an inconvenience; it is a risk. At the same time, IT teams are burned out and being asked to support transformation while maintaining stability.

In this environment, application managed services cannot be about coverage alone. They must deliver accountability, consistency, and operational confidence.

This is the Moment for Application Managed Services

Healthcare organizations are at a dramatic inflection point in IT. Some of the biggest reasons for this include:

  • Mounting pressure surrounding increasing costs, stagnant budgets, and fluctuating reimbursement rates.
  • Socioeconomic pressures, such as increasing prices.
  • Downward pressure from health system executives to be more efficient and forward-thinking.

Application managed services must keep pace with the expedited evolution of technology in healthcare. Change is here for most organizations, whether it takes the shape of AI, mergers and acquisitions, or increasing socioeconomic pressures.

Health systems are no longer asking whether they need managed services. They are asking which models will actually support their organizations over the long term. The answer lies in delivery models that are built specifically for healthcare, designed for accountability, and focused on the people who keep these systems running every day.

What a Modern Application Managed Services Model Should Deliver

Health systems are not looking for another vendor. They are looking for a delivery model that they can rely on every day, not just during go-lives or major initiatives. Traditional approaches often fall short.

What organizations need now is a managed services model that is explicitly built for healthcare enterprise applications, operates as a true extension of the internal team, and has clear ownership and shared accountability.

A modern application managed services solution should answer a few basic questions:

  • Who owns the day-to-day operations?
  • How are issues identified before they become incidents?
  • How is performance measured and improved over time?
  • How does the model scale without disrupting internal teams?
  • Will this allow us to keep up with the ever-changing landscape of health IT, including EHR updates, AI advancements, and more?

When managed services are designed well, they reduce operational noise. Leaders spend less time reacting and more time planning. Internal teams stay focused on strategy and improvement instead of constant firefighting. That does not happen by accident. It requires healthcare-specific experience, disciplined delivery, and a model that is built for complex enterprise environments.

Readers Write: Medicare Goes All In on Value-Based Care

February 16, 2026

By Eugene Gonsiorek, PhD

Eugene Gonsiorek, PhD is VP of clinical regulatory standards for PointClickCare.


If there were any doubts about Medicare’s commitment to value-based care, there shouldn’t be any longer.

Abandoning its former model of rolling out value-based care (VBC) programs one at a time, the Centers for Medicare and Medicaid Services (CMS) announced nine new or proposed programs and modifications to five existing programs between March and December 2025 – an unprecedented pace.

The rush of new programs and their concentrated timing signal that CMS is aligning Medicare around VBC to a greater degree than ever before. This is good news for organizations that have been working toward this end and a prompt for those who haven’t made as much progress.

The New Medicare Programs

Let’s take a closer look at the new and proposed programs. 

  • ACCESS (Advancing Chronic Care with Effective, Scalable Solutions). A voluntary, 10-year model testing outcome-aligned payments for measurable clinical improvements using technology-supported care for chronic conditions such as hypertension, diabetes, musculoskeletal pain, and behavioral health.
  • WISeR (Wasteful and Inappropriate Service Reduction). Launched in mid-2025, this model tests ways to reduce unnecessary services and accelerate prior authorization while safeguarding patients and taxpayers against low-value care.
  • GUARD (Global/Universal Accountability in Drug Pricing) and GLOBE (Global Outcomes in Benchmarking and Equity). Proposed mandatory models that aim to test international benchmark-based adjustments to Medicare Part D and Part B drug rebate and pricing systems to help address high drug costs.
  • Ambulatory Specialty Model (ASM). Finalized as a mandatory model beginning in 2027 that holds certain specialists accountable for quality, cost, and care coordination outcomes.
  • LEAD (Long-term Enhanced ACO Design). Announced as the next generation of accountable care organization models, a 10-year design intended to better support small, independent, and rural providers following ACO REACH (Accountable Care Organization Realizing Equity, Access, and Community Health).
  • BALANCE (Better Approaches to Lifestyle & Nutrition). Announced alongside GUARD and GLOBE, this voluntary model is intended to align manufacturers, state Medicaid agencies, and Part D plans to improve metabolic health through GLP-1 access plus lifestyle support, with testing concluded by 2031.

Across these models, several common design features stand out. Time horizons are longer, often extending eight to 10 years. Payment is increasingly tied to measurable outcomes rather than process compliance. Accountability extends beyond primary care into specialty care and pharmaceuticals. In select areas, CMS is requiring mandatory participation to achieve broad system impact.

The ACCESS model illustrates how CMS expectations are evolving. A voluntary 10-year initiative, ACCESS ties payment to demonstrable improvement in chronic conditions such as hypertension, diabetes, musculoskeletal pain, and behavioral health. The focus is no longer service volume or short-term utilization metrics, but sustained clinical outcomes.

Similarly, the WISeR model reframes inappropriate utilization as both a quality failure and a fiscal risk. By targeting low-value services and streamlining prior authorization, WISeR signals CMS’s growing willingness to intervene earlier in care decisions. The goal is not simply to manage spending after it occurs, but to prevent waste before it happens.

Together, these models reflect a clear shift from utilization-based proxies toward explicit accountability for results.

Specialty Care and Pharmaceuticals Move to the Center

Perhaps the clearest departure from earlier value-based care efforts is CMS’s expansion of accountability into specialty care and drug pricing, areas historically insulated from performance-based payment.

The finalized ASM, set to begin in 2027, makes participation mandatory for selected specialists and holds them accountable for quality, total cost of care, and care coordination. This challenges the long-held assumption that VBC is fundamentally a primary care endeavor. It also elevates downstream utilization, including post-acute care, from a secondary concern to a central performance variable.

At the same time, the proposed GUARD and GLOBE models are CMS’s most direct effort to apply value-based principles to pharmaceutical spending. By testing international benchmarking approaches in Medicare Parts B and D, CMS is extending accountability into pricing structures that have traditionally been governed by statute rather than performance expectations.

Long-Term Accountable Care and Prevention as Structural Bets

The LEAD model underscores CMS’s recognition that accountable care requires stability, not churn. By extending participation horizons to 10 years and focusing on small, independent, and rural providers, LEAD acknowledges that organizational transformation and sustained downside risk cannot be achieved on short timelines.

In parallel, the BALANCE model reflects CMS’s growing emphasis on prevention and upstream investment. By aligning manufacturers, state Medicaid agencies, and Part D plans around GLP-1 access combined with lifestyle and nutrition support, BALANCE tests whether earlier intervention in metabolic disease can produce durable improvements in outcomes and spending. By pairing pharmaceutical access with behavioral support, CMS is testing integrated solutions rather than isolated interventions.

The Effects on Patients and Providers

These models collectively raise the bar for providers. Financial accountability is more robust. Timelines are longer. Expectations for care coordination and performance improvement are higher. Independent practices, rural providers, and specialists, groups historically less exposed to mandatory value-based arrangements, are now central to CMS’s policy design.

For patients, CMS’s stated objectives are clear: earlier intervention, fewer unnecessary services, better chronic disease control, and lower drug costs. Whether these outcomes are realized will depend less on policy intent than on execution, particularly provider engagement and the ability to manage care across settings.

From Experimentation to System Design

Taken together, the new model announcements signal that CMS is moving beyond experimentation toward system design. The concentration of releases, the expanded mandatory participation, and the consistent emphasis on outcomes and cost containment all point in the same direction.

CMS is no longer asking whether VBC works. It is redesigning Medicare on the assumption that it must.

As these models move from proposal to implementation, they will shape payment policy, care delivery structures, and provider participation in Medicare well into the next decade. Organizations should prepare themselves for a system in which value-based accountability is no longer optional, but the norm.

Readers Write: Open Access in Healthcare: What TEFCA Got Right, Where It’s Stuck, and What Comes Next

February 16, 2026

By Robin Monks

Robin Monks is CTO at Praia Health.


If you’ve ever moved to a new city and tried to get your medical records transferred to a new provider, you already understand the problem that the Trusted Exchange Framework and Common Agreement (TEFCA) is trying to solve. In theory, health data should follow you. In practice, it often doesn’t.

TEFCA is the federal government’s most ambitious attempt to date at fixing nationwide health information exchange. Mandated by the 21st Century Cures Act and formally launched in late 2023 when the first Qualified Health Information Networks (QHINs) were designated, TEFCA aims to be the “interstate highway system” for health data, allowing providers, payers, and patients to share information regardless of which network they are on.

After two years of operation, there’s a lot to like about what TEFCA has accomplished. More than 70,000 healthcare locations are now connected through TEFCA, and Epic reported that 1,000 hospital customers have transitioned to TEFCA. Carequality, a framework connecting over 50 networks, 600,000 care providers, and 4,200 hospitals, is actively aligning its policies with TEFCA.

The framework has also expanded beyond its initial treatment-focused exchange. TEFCA now supports data exchange for payment, healthcare operations, government benefits determination, individual access, and public health purposes.

Perhaps most importantly, TEFCA is creating a universal floor for interoperability. Before TEFCA, a health system that wanted to exchange data nationally had to join multiple networks and maintain dozens of point-to-point connections. TEFCA simplifies that into a single participation model. For smaller practices and rural hospitals that couldn’t afford the overhead of managing multiple network memberships, this is a meaningful reduction in cost and complexity.

But TEFCA’s scale means that providers are now responding to queries from organizations they’ve never interacted with before. When a requester says they’re querying for treatment purposes and the responder disagrees that the request qualifies as “treatment” under HIPAA, you get what the ASTP calls an “information exchange impasse.”

This lack of trust means that providers can be persuaded not to reply automatically to TEFCA requests, even to an individual access request with a verified identity attached. Information blocking remains a persistent and thorny issue. TEFCA participants who interfere with QHIN choice now risk violating the federal information blocking rule, with potential Medicare payment disincentives, but the cultural shift from “default deny” to “default share” is slow.

Then there’s the FHIR question. TEFCA launched using IHE-based document exchange, a 1990s-era architecture that predates smartphones and modern web standards. This was a pragmatic choice to minimize disruption and build on the existing exchange infrastructure (IHE-based exchange still represents enormous transaction volume annually).

But it means that the initial TEFCA experience is document-centric, returning C-CDA documents rather than discrete, FHIR-based data. The HTI-5 proposed rule from December 2025 signals a strong push toward FHIR-based APIs, but the gap between where TEFCA is today and where modern application developers need it to be is real. Companies that build on FHIR and OIDC are watching this closely.
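The difference matters in practice: a FHIR resource exposes discrete, directly addressable fields, while a C-CDA exchange returns a whole document that must be mined. A minimal sketch, assuming a hypothetical Observation payload (the field layout follows the FHIR R4 Observation resource, but this is illustrative only, not a client implementation):

```python
import json

# Hypothetical FHIR R4 Observation for a heart rate reading (LOINC 8867-4).
fhir_observation = json.loads("""
{
  "resourceType": "Observation",
  "status": "final",
  "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4",
                       "display": "Heart rate"}]},
  "valueQuantity": {"value": 72, "unit": "beats/minute"}
}
""")

# Discrete data: an application reads the value directly, no document mining.
heart_rate = fhir_observation["valueQuantity"]["value"]
display = fhir_observation["code"]["coding"][0]["display"]

# A document-centric C-CDA exchange, by contrast, returns full XML that the
# consuming application must parse and search for the same single data point.
```

This gap between network-level exchange and application-level expectations is what the FHIR push is meant to close.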

The regulatory environment is also in flux. That same HTI-5 proposed rule would remove the TEFCA manner exception, a provision that allowed TEFCA participants to limit data exchange to only through TEFCA. The administration is signaling that using information blocking exceptions to incentivize TEFCA participation may be unnecessary, which is an interesting stance that simultaneously shows confidence in TEFCA’s trajectory and a desire to not disadvantage non-TEFCA exchange networks.

TEFCA has achieved enough adoption to be taken seriously, but not enough to be taken for granted. Here’s what needs to happen for it to reach its potential:

  • FHIR needs to be a first-class citizen, not a roadmap item. The healthcare technology ecosystem has moved to FHIR. App developers, patient-facing platforms, and clinical decision support tools all expect FHIR APIs. Until TEFCA’s QHIN-to-QHIN exchange natively supports FHIR alongside IHE, there will be a gap between what TEFCA enables at the network level and what the market needs at the application level.
  • Trust needs to be engineered, not assumed. The interpretive disagreements around treatment definitions and provider qualifications aren’t going to resolve themselves through goodwill alone. TEFCA’s governance needs to produce clear, specific guidance that participating organizations can implement without extensive legal review. The SOP updates from January 2026 are a step in the right direction, but there’s more work to be done.
  • Patient transparency and choice must be central. Individual Access Services (IAS), the mechanism by which patients can access their own data through TEFCA, is likely to be one of the fastest-growing use cases. The patient access market is forecast to reach $4.16 billion by 2032. But IAS also carries the highest risk of information blocking complaints, because patients have the right to choose any IAS provider, regardless of their provider’s QHIN. Making this work requires a level of patient-facing transparency that healthcare hasn’t historically been great at. We also need to expand this beyond reading data to performing actions within target EHRs.
  • Enforcement has to be real. TEFCA operated for its first year as an entirely voluntary framework. The increasing enforcement posture around information blocking and the integration of TEFCA obligations into Medicare compliance programs is changing the calculus. But voluntary frameworks succeed when the incentives to participate outweigh the friction. Right now, the friction is still high for many organizations, particularly smaller ones. Last year we were promised that we would start seeing strict enforcement on information blocking, but so far we’re not seeing examples of enforcement from CMS.

TEFCA is doing something genuinely important. It is establishing the principle that health data should be exchangeable at a national scale, with a common set of rules, as a baseline expectation rather than a special achievement. For health systems that are thinking about their consumer experience strategy, and all should be, the ability to access data from across a patient’s entire care journey is critical.

The dream of open access in healthcare is within reach, but getting from well-intentioned definitions to a system that runs and works where patients need it is slow.

Readers Write: AI in Healthcare Revenue Cycle: Linking Automation to Financial Stability

February 16, 2026 Readers Write No Comments

By Inger Sivanthi

Inger Sivanthi, MBA is founder and CEO of Droidal.


Five or six years ago, revenue cycle performance was discussed mostly in operational terms. Leaders reviewed denial rates, days in accounts receivable, and staffing productivity. If those indicators were steady, the assumption was that the organization was financially sound. The work was seen as administrative execution rather than financial strategy.

That framing feels incomplete now. Reimbursement patterns have become less predictable. Payer interpretations vary, even within the same plan category. Documentation standards evolve quietly, and what cleared last quarter may stall this quarter. Nothing feels catastrophic, yet the margin for error has narrowed.

When timing becomes inconsistent, finance feels it quickly. Forecast models widen. Cash flow conversations become more cautious. Growth initiatives are evaluated with an extra layer of scrutiny. Revenue cycle management is no longer operating in the background. It is influencing financial confidence.

Automation Solved the Obvious Friction

Healthcare organizations did not stand still over the past decade. Eligibility workflows were automated. Coding tools became more sophisticated. Electronic remittance reduced manual posting errors. These investments improved speed and removed visible inefficiencies.

Yet the deeper issue remained. Denials continued for reasons that were not always procedural. Appeals absorbed experienced staff time. Forecasting models leaned on historical trends that assumed payer behavior would remain relatively stable. That assumption is harder to defend today.

Automation follows instructions. It does not interpret shifts. It executes rules consistently, but does not recognize when those rules are interacting differently in a changing environment.

Earlier Pattern Recognition Is Changing the Dynamic

Artificial intelligence brings a different capability. By reviewing documentation details, coding sequences, authorization timing, and payer response history together, it begins to surface combinations that tend to struggle. Those combinations are not always obvious. They emerge through repetition.

When risk is identified before submission, teams can intervene before delay becomes inevitable. Preventing a denial is financially different from correcting one. The time saved compounds quietly. Over several quarters, even modest improvements in first-pass acceptance begin to influence working capital stability.
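As a toy illustration of this kind of pre-submission flagging, a claim can be scored against historical outcomes for its payer and procedure code combination. The data, threshold, and field names here are hypothetical, and a real system would model far richer features (documentation details, coding sequences, authorization timing), but the shape of the intervention is the same: flag before submission, not after denial.

```python
from collections import defaultdict

# Hypothetical historical claim outcomes: (payer, procedure code, outcome).
history = [
    ("PayerA", "99214", "denied"), ("PayerA", "99214", "denied"),
    ("PayerA", "99214", "paid"),   ("PayerB", "99214", "paid"),
]

# Tally denials and totals per (payer, code) combination.
stats = defaultdict(lambda: [0, 0])  # combo -> [denials, total]
for payer, code, outcome in history:
    stats[(payer, code)][1] += 1
    if outcome == "denied":
        stats[(payer, code)][0] += 1

def denial_risk(payer: str, code: str) -> float:
    """Historical denial rate for this combination (0.0 if unseen)."""
    denials, total = stats[(payer, code)]
    return denials / total if total else 0.0

# Flag the claim for review before submission if the combo tends to struggle.
flagged = denial_risk("PayerA", "99214") > 0.5
```

The point of the sketch is the routing decision: a flagged claim gets experienced eyes before it goes out the door, which is where the prevented-versus-corrected economics come from.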

The benefit is not perfection. Healthcare reimbursement will never be perfectly predictable. The benefit is fewer unexpected swings and tighter confidence intervals around cash timing.

The Small Variations That Shape Margin

Revenue loss is rarely dramatic. It builds slowly. A modifier applied differently between departments. A service level coded conservatively out of habit. Contract language interpreted with slight variation across facilities. Individually, these instances appear manageable. In aggregate, they influence performance more than most teams realize.

AI systems reviewing documentation and billing data together can detect these repeated inconsistencies more consistently than manual review alone. This does not remove the need for experienced revenue leaders. It simply directs their attention toward areas where exposure is concentrated.

That shift in focus strengthens margin discipline without creating additional administrative layers.

From Reporting History to Informing Strategy

Traditional dashboards tell organizations what has already happened. They summarize billed charges, denials, and collections. That information is necessary, but it is reactive by design. By the time a pattern appears clearly in retrospective reporting, the financial impact has already occurred.

Predictive modeling changes that posture. When internal performance data is combined with payer response behavior, reimbursement timing becomes easier to estimate within a reasonable range. Forecasts still require judgment, but the range narrows. Leadership discussions feel less defensive and more deliberate.
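A simplified sketch of that narrowing, assuming historical days-to-payment samples per payer (all numbers hypothetical): instead of a single average, the forecast becomes a band, and the band tightens as payer behavior data accumulates.

```python
import statistics

# Hypothetical historical payment lags, in days, by payer.
lags = {"PayerA": [28, 30, 33, 35, 41], "PayerB": [45, 52, 60, 63, 70]}

def timing_band(payer: str) -> tuple[int, int]:
    """Drop the fastest and slowest observations to get a central range."""
    xs = sorted(lags[payer])
    return xs[1], xs[-2]

def midpoint(payer: str) -> float:
    """Median lag: a single planning number when one is required."""
    return statistics.median(lags[payer])

lo, hi = timing_band("PayerA")  # a band of days, not a point estimate
```

Forecasts built this way still require judgment, as noted above, but finance can plan against a range rather than a hope.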

Revenue cycle management begins influencing forward planning rather than simply documenting past outcomes.

Operating Within Real Workforce Limits

Revenue cycle staffing remains tight across the industry. Seasoned revenue professionals are hard to come by. Even when you hire, the ramp-up period slows momentum. For many teams, expanding staff just isn’t practical right now.

Intelligent prioritization helps address this reality. When higher-risk claims surface earlier and larger-dollar exposures are flagged sooner, teams allocate effort more intentionally. The objective is not workforce reduction, but resource precision. Protecting margin increasingly depends on where attention is placed, not simply how many people are assigned.

The Shift Has Been Gradual, Not Dramatic

There was no single moment when artificial intelligence transformed revenue operations. The change has been incremental. Organizations recognized that efficiency alone did not insulate them from variability. Earlier visibility, more focused intervention, and steadier forecasting gradually reshaped how revenue risk is managed.

Healthcare reimbursement will continue to evolve, and complexity will remain part of the system. Artificial intelligence does not remove that complexity. It improves how quickly patterns are recognized and how steadily leadership responds. In that sense, revenue cycle management has moved closer to financial strategy, and predictability has become as valuable as productivity.

Readers Write: Virtual Nursing Thrives When Thoughtful Design Guides Implementation

February 9, 2026 Readers Write Comments Off

Virtual Nursing Thrives When Thoughtful Design Guides Implementation
By Christine Gall, RN, DrPH

Christine Gall, RN, DrPH, MS is chief nursing officer of Collette Health.


Virtual nursing has quickly evolved into a force multiplier capable of addressing top pain points in care delivery, operations, quality, and patient experience. But as more health systems explore this model, outcomes have varied widely. Some organizations report measurable improvements in documentation time, throughput, retention, and workload relief. Others struggle to see benefits or encounter frustration at the bedside.

The difference rarely comes down to technology alone. It comes down to design. Successful virtual nursing programs begin with clear-eyed assessment. What problem are we trying to solve first? Throughput congestion? Night shift support? Documentation burden? The strongest programs anchor the initial design to a significant operational issue that is specific, measurable, achievable, and relevant.

Equally important is identifying a leader and team who are ready for the responsibility of substantial workflow redesign. Virtual nursing models are more likely to succeed and scale when both factors are addressed and when the initial focus is narrow and well defined, setting up an iterative strategy that supports program expansion over time.

Virtual nursing can also deliver powerful, longer-term benefits such as improved staff resilience and nurse retention, but those gains take longer to materialize. Programs that try to solve multiple issues at launch often struggle, while those that sequence thoughtfully and use data-driven, rapid-cycle improvement to continually monitor success are better positioned to scale.

When organizations run into difficulty, it generally involves a failure to define attainable goals, a gap in stakeholder perception that creates barriers to acceptance and adoption of new workflows, and/or a failure of the new work processes to address the areas of concern without creating new burdens. In my experience, three design choices consistently determine whether virtual nursing lightens workload or adds friction:

Task Clarity and Workload Optimization

For bedside nurses, the value of virtual nursing is measured in minutes of administrative burden reduced and the expansion of impactful time spent with their patients. Programs succeed when they clearly define which tasks are moving from bedside to virtual roles. That may include time-intensive admission, discharge, and patient education activities, care coordination, and focused clinical oversight. But decisions regarding role and scope of the virtual nurse must be explicit.

When the virtual nurse’s role is not well defined and understood by the entire team, bedside teams experience little relief and sometimes more duplication. A symptom of poor task clarity is an increase in ad-hoc communication between virtual and bedside staff. Well-run virtual nursing initiatives build automated communication directly into the workflows rather than relying on one-off, manual outreach. Real value comes from task transfer, not task shadowing.

When programs invest in this level of clarity, bedside nurses increasingly recognize the impact, and barriers to adoption are mitigated.

Workflow Integration, Not Overlay

Many early virtual nursing implementations struggled because virtual workflows were created as parallel processes rather than as novel, integrated workflows. If virtual nurses document in separate systems, communicate through separate channels, or escalate through ad-hoc pathways, the bedside becomes the bridge between two worlds, an experience that creates additional burden.

Integration, by contrast, means shared communication pathways, aligned documentation practices, clear escalation rules, and participation in unit workflows rather than operating in parallel but separate processes. When virtual nurses are embedded operationally, lines of workflow delineation are crisp and do not create new burdens for communication, coordination, or clarification.

Shared Governance and Co-Design with the Bedside

Virtual nursing is as much a cultural change as an operational one. How it is introduced matters. When bedside nurses are asked to adopt a model that they did not help shape, skepticism is a rational response. The programs that thrive invest in shared governance, inviting bedside teams into discussions and decisions about workflow redesign, task allocation, communication norms, and measurement. This transparent approach may not only produce more realistic workflows, but can also establish trust between virtual and bedside roles from the start.

Trust and shared responsibility for iteratively creating a robust care delivery model are the foundation for program stability, refinement, and scale. Connecting leaders and teams with the “what” and “why” before defining “how” a virtual care program will evolve is crucial to buy-in, acceptance, adoption, and ultimately ownership of the new processes.

Virtual Nursing as a Near-Term Workforce Solution

Unlike conventional software deployment, the success of virtual nursing cannot be measured by technical readiness alone. Integrations, reliability, and usability matter, but they are only one part of the equation.

Virtual nursing changes how work is distributed, how handoffs occur, and how clinicians collaborate. It is a care model that is built on an agile technology platform, not a rigid technology solution in search of a problem to solve. Successful virtual care models mature through continuous evaluation of outcomes and success metrics, data-driven iteration, and widespread dissemination of shared learnings.

It may be easy to forget that the workflows, staffing models, and best practices we consider routine took years to stabilize. This is an important perspective to keep as virtual nursing practice and integrations evolve. The nursing workforce has carried extraordinary strain for more than a decade, and many traditional solutions focus on long-horizon strategies, such as expanding education pipelines, addressing retention, or modernizing licensure. Those efforts matter, but effectively addressing the looming issues of the day will also require a full redesign of the model of clinical care delivery.

Virtual nursing is one of the most promising and actionable models for reducing burden, increasing capacity, and improving care in the near term, provided the foundational elements are fully embraced and executed. If we allow early friction and avoidable barriers to eclipse that potential, we risk discarding an approach that could meaningfully support nurses when effective solutions are urgently needed.

The opportunity is not merely to deploy technology, but to build a sustainable clinical workforce that is properly resourced and supported to deliver world-class care and elevate the patient experience of care.

Readers Write: Healthcare Needs a Data Liquidity Disruption

February 9, 2026 Readers Write 4 Comments

Healthcare Needs a Data Liquidity Disruption
By Sriram Devarakonda

Sriram Devarakonda, MSEE is co-founder and CTO of Cardamom.


Healthcare has long promised that data would transform research, precision medicine, and patient outcomes. Yet progress remains painfully slow. Data silos and fear-driven restrictions keep critical information trapped in systems that were designed more to contain than to share.

Real transformation in targeted care, population health, and clinical research won’t come from yet another interoperability initiative or API. It requires a more fundamental shift: a data liquidity disruption that treats data as something meant to move, not sit still.

What’s holding healthcare back?

Healthcare’s challenges have evolved dramatically over the past three decades, and they will continue to change just as profoundly in the decade ahead. Thirty years ago, the priority was basic connectivity: enabling continuity of care across disparate systems through point-to-point integrations, with HL7 playing a foundational role.

Ten years ago, the rise of web and mobile technologies demanded a modernized approach to interoperability, giving rise to newer API-based standards, such as FHIR, that enabled digital health innovation.

Today, and looking forward, the focus has shifted yet again. Healthcare’s most pressing challenges, from cancer to diabetes to Alzheimer’s, require the effective use of data and AI at scale, challenges that impact millions of lives and drive national healthcare costs. Solving them demands more than messaging standards alone. Our future cannot depend on HL7 and FHIR by themselves. It requires true data liquidity, real-time intelligence, and platforms that are designed for learning health systems.

Before we delve into how we prepare for the future, we should look at a few reasons that data liquidity is a challenge today.

  • Proprietary mindsets. Healthcare systems and vendors have long viewed data as an asset to guard, not a resource to share. Competitive, contractual, and legal anxieties create barriers that go beyond technology. They are cultural and structural.
  • Fragmented data standards. Despite progress with HL7 and FHIR frameworks, true standardization remains elusive. Data formats, definitions, and governance models still vary widely, making even “standard” exchanges complex and time-consuming to implement.
  • Privacy and compliance fears. With HIPAA, GDPR, and a growing patchwork of state regulations, organizations often err on the side of caution. The result is a compliance-first posture that, while understandable, often stifles innovation and progress.
  • Legacy infrastructure. Many health systems are still operating on decades-old IT foundations that were designed for billing and clinical care, not for modern data exchange. Retrofitting these systems to support real-time data liquidity is costly and complex.
  • Sheer complexity of technologies. A large barrier to progress is the sheer number of different technology systems even within the same ecosystem. EHRs, ERPs, and countless vendor-managed applications add an additional layer of complexity that’s challenging to overcome.

Why a disruption is inevitable and necessary

Healthcare’s approach to data is slowing progress. Patients want connected experiences, researchers need faster access to data, and providers and payers are under pressure to deliver better outcomes.

Other industries already allow data to flow securely in real time, enabling smarter decisions and personalization. Healthcare must make the same shift, from owning data to stewarding it, and from locking it away to sharing it responsibly. Those who adapt will lead; those who don’t will fall behind.

Preparing for the data liquidity era

How can healthcare organizations prepare for the inevitable disruption?

  • Invest in platforms, not point solutions. Health systems must invest in modular, cloud-based platforms that allow data to move freely and securely. That means creating enterprise-shared data access on modern data platforms that can evolve alongside transactional systems rather than remaining frozen in time.
  • Embrace interoperability as a strategy, not a checkbox. Compliance-driven interoperability creates connections, not capability. Treating data sharing as a strategic asset is what turns exchange into impact, fueling innovation, partnerships, and better care coordination.
  • Move from data control to data accountability. As data moves more freely, data maturity becomes even more critical. Clear standards for data quality, consent, and usage help ensure that liquidity doesn’t come at the expense of privacy or ethics. AI has a large role to play here when it comes to interpretation and standardization.
  • Standardize clinical workflows. The more healthcare organizations can standardize their clinical workflows and protocols now, the fewer challenges they will have later. Clear, consistent processes make it easier to adopt new tools, train staff, and share data safely.
  • Align data strategy to business and clinical outcomes. Data liquidity drives real, downstream impact on both business and clinical outcomes. When tied to clear, measurable goals, such as reducing denials, accelerating clinical trial enrollment, or improving patient throughput, it becomes a powerful, provable source of ROI.
  • Reimagine the patient’s role. Patients are no longer passive data points; they are active and willing participants. Giving them control over their health data and the ability to share it across providers, researchers, and care teams will accelerate innovation while fostering transparency, trust, and improved outcomes.

The ripple effects of data liquidity

When healthcare achieves true data liquidity, the impact will be profound. Researchers will be able to identify patterns across populations in days, not years. Providers will make more informed decisions at the point of care. Health systems will predict and prevent crises before they occur. Most importantly, patients will benefit from a system that understands them as whole individuals, not just as episodes of care scattered across disconnected databases.

Healthcare is long overdue for the same data transformation other industries have already embraced, one that allows data to move freely, connect seamlessly, and create value wherever it goes.

The road to disruption won’t be easy, but it is necessary. The barriers to data movement have stood for too long, and the cost of inaction is too high.

Readers Write: Why Patient Wait Times Still Define the Clinic Experience in 2026

February 2, 2026 Readers Write 1 Comment

Why Patient Wait Times Still Define the Clinic Experience in 2026
By Inger Sivanthi

Inger Sivanthi, MBA is CEO at Droidal.


Outpatient clinics in 2026 look different from those of a decade ago. Scheduling is online. Records are electronic. Patient portals are standard. Most organizations have already spent the money that was required to modernize access.

Long patient wait times have not disappeared. Waiting rooms still fill early. Appointment times slip before the morning is half over. Front desk staff often begin the day responding to issues rather than managing a steady flow. This happens even when staffing levels are reasonable and schedules appear balanced.

When delays show up this early, technology is rarely the cause. The problem usually lies in how the day begins.

Discussions about wait times often focus on staffing gaps, provider availability, or late arrivals. Those explanations only go so far. In many clinics, the bigger issue is incomplete preparation that spills into the first hours of the day.

Much of the information required for a visit is not fully settled when patients arrive. Demographic details are outdated. Insurance coverage has changed. Required documentation is often left unresolved. The issues show up at the front desk, not in reports.

The front desk absorbs the impact of this unfinished work. Questions that should have been resolved earlier get handled under time pressure. Small corrections stack up. By mid-morning, the schedule is already off course.

Digital intake has reduced paperwork, but it has not changed the timing of the work. Patients may submit forms ahead of time, yet staff still need to review, verify, and correct information close to arrival. Insurance questions require follow-up. Consents must be confirmed. Records must align before a visit can proceed smoothly.

Attempts to improve wait times often focus on making check-in faster. More kiosks are installed. Workflows are tightened. Tasks are automated where possible. These steps improve efficiency, but the constraint remains. As long as preparation is concentrated at the start of the visit, the front desk stays under pressure.

Some organizations now treat intake as work that should be largely completed before the patient enters the clinic. When information is settled earlier, the start of the day becomes more stable and less reactive.

To help with earlier preparation, some clinics use pre-visit review tools that scan intake information before the appointment. Missing data, coverage discrepancies, and unresolved items are flagged while staff still have time to respond. Problems that would otherwise surface at the front desk are handled earlier, when schedules are not yet under strain.

These systems do not replace staff judgment. They point attention to likely trouble spots so issues can be resolved before patient flow is affected. Moving this work earlier reduces the amount of recovery required once the clinic is busy.
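The logic of pre-visit review can be rule-based before it is ever "AI." A minimal sketch, with hypothetical field names, of flagging unresolved items ahead of arrival:

```python
REQUIRED = ("name", "dob", "insurance_id", "consent_on_file")

def pre_visit_flags(intake):
    """List issues worth resolving before the patient arrives."""
    flags = [f"missing {field}" for field in REQUIRED if not intake.get(field)]
    if intake.get("coverage_status") == "inactive":
        flags.append("coverage inactive: verify insurance")
    return flags

record = {"name": "J. Doe", "dob": "1980-04-02", "insurance_id": "",
          "consent_on_file": True, "coverage_status": "inactive"}
print(pre_visit_flags(record))  # ['missing insurance_id', 'coverage inactive: verify insurance']
```

Run a day or two before the appointment, even checks this simple move problems out of the check-in queue and into a window where staff can respond calmly.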

Check-in becomes steadier. Front desk staff spend less time resolving avoidable issues. Schedules hold closer to plan across the morning. Patients spend less time waiting because fewer problems reach the front of the workflow.

There is concern that completing intake earlier removes personal interaction. Staff often report the opposite. When documentation and coverage issues are addressed ahead of time, conversations at check-in are calmer and less rushed. Visits begin with clearer expectations.

Patient wait times persist in 2026 because too much essential work still occurs at the moment of arrival. Clinics that complete preparation earlier and use pre-visit review selectively tend to operate with greater stability. The difference shows up in a day that runs closer to plan.

Readers Write: Killing the Clipboard: Cloud Fax is the Bridge to Patient-Centric Data Access

January 28, 2026 Readers Write Comments Off

Killing the Clipboard: Cloud Fax is the Bridge to Patient-Centric Data Access
By Bevey Miner

Bevey Miner is a healthcare strategist at eFax, a Consensus Cloud Solutions brand.


The Trump Administration’s renewed focus on interoperability has reignited the long-standing calls for healthcare to “Kill the Clipboard.” This movement aims to eliminate the administrative burden and data silos that are caused by paper-based processes, allowing for near-instant access to searchable, actionable patient information.

The industry broadly supports modernization efforts, with patient access at the forefront. But we need to ensure that this digital transformation doesn’t leave small, rural, and under-resourced communities behind.

The paper problem: why change takes time

We cannot wait for every provider to achieve a perfect, fully digital state before we start delivering on the promise of interoperability. Patients must have access to their data now, even if parts of the industry are still using clipboards and paper fax.

The federal initiative calls for near-instant patient access to health records, along with real-time patient data for providers to dramatically speed care coordination. Paper records transmitted over outdated fax machines do not support, and often impede, that goal. The administration is leaning heavily on data networks and vendors to streamline the transmission of information between healthcare providers while modernizing standards with FHIR APIs.

Conceptually, the future we are all working towards is faster data access, searchable and actionable information to improve care, and seamless communication between care teams. This idealized future state fails to account for the practical limitations that are facing many foundational healthcare organizations. 

Twenty-nine percent of providers report that they lack the financial resources needed to deploy the advanced digital infrastructure that today’s interoperability vision requires.

Many organizations, like rural and smaller post-acute care settings, are still playing catch-up because they were excluded from the incentives that accompanied the HITECH Act of 2009. While some of these organizations may have an EHR, it may be outdated and uncertified. Others work with scrappier, home-grown solutions or even resort to paper-based, manual processes.

But while these smaller organizations might not have million-dollar EHR platforms, they do have paper fax. To participate in the move to “Kill the Clipboard,” healthcare organizations of all sizes are turning to digital cloud fax.

Cloud fax: healthcare’s guilty pleasure

A recent survey found that 46% of healthcare facilities still use paper fax to send and receive patient data. If the healthcare industry is so dedicated to moving past paper, why do these archaic systems persist?

The simple answer is that, while we are attempting to replace the paper fax machine with a structured data format like FHIR, we still need the next level of communication maturity: cloud fax. Once a fax becomes digital, additional data-sharing capabilities become possible. 

Cloud fax offers all the benefits of paper fax and is much more efficient. It is particularly easy to use and can be fully integrated into other applications via APIs. For decades, it has served as the standard method for document and digital data transmission in healthcare because it checks many boxes. It meets HIPAA and HITRUST standards and is universally compatible with other systems that operate in silos.

Simply put, cloud fax is the most common and accessible form of send and receive communication in our industry. Calls to prevent its ubiquitous use demonstrate a fundamental unawareness of current operational realities and the power of digital transformation to modernize and integrate cloud fax, rather than simply eliminate it.

Send, receive, find: AI-powered digital cloud fax goes the extra mile

Digital cloud fax provides robust send and receive capabilities, but to meet the CMS definition of interoperability, “find” is another key component. To find information, the data must be discoverable. New AI capabilities are helping fax go the extra mile, transforming traditionally unstructured, static documents into structured, actionable insights using intelligent data extraction. This is critical to advancing interoperability since as much as 80% of healthcare data remains unstructured.

Innovations in machine learning and LLMs enable unstructured data from digital faxes, scanned images, TIFFs, and other PDFs to be extracted from nearly any type of health document, including intake content, claims, handwritten forms, and more, and place it directly into a structured system like an EHR or a payer workflow. When these AI tools are built on digital cloud fax platforms to start, they are already leveraging a technology that most healthcare organizations have in place. Implementation is significantly easier and less time-consuming than adding an entirely new system to an organization’s already overloaded and fragmented tech stack.
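Production systems rely on trained models and LLMs for this work, but the shape of the task can be shown with simple pattern matching. A toy sketch (the field labels and patterns are invented; real fax content is far messier):

```python
import re

PATTERNS = {
    "patient": r"Patient:\s*(.+)",
    "dob": r"DOB:\s*([\d/]+)",
    "diagnosis": r"Dx:\s*(.+)",
}

def extract_fields(fax_text):
    """Map unstructured fax text into structured fields (toy patterns)."""
    out = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, fax_text)
        if match:
            out[field] = match.group(1).strip()
    return out

page = "Referral\nPatient: Jane Doe\nDOB: 04/02/1980\nDx: Type 2 diabetes"
print(extract_fields(page))
```

The point is the output, not the method: once fields are structured, they can be posted to an EHR or payer workflow instead of sitting in a document queue.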

Delivering superior reliability and security, intelligent digital cloud fax acts as a connector between various types of data files and formats, sharing both structured and unstructured data between healthcare organizations that span various levels of digital sophistication.

Time to face the fax

For many healthcare organizations, digital cloud fax isn’t a roadblock, but an accelerator, enabling them to keep up with more tech-savvy counterparts without the heavy investment in rip and replace technology. It also supports the ongoing FHIR mandates and regulatory changes impacting providers at every level.

By recognizing digital cloud fax as a necessary part of day-to-day operations, as it is at most healthcare organizations, we can better understand how this tool can help us reach interoperability faster, while facilitating the digital transformation of as many organizations as possible.

Healthcare’s reliance on digital cloud fax should not be treated as a guilty secret. Instead, it’s an equalizer and an opportunity. Once we realize its full potential, interoperability initiatives will be more achievable and successful than ever.

Readers Write: Engineering Prior Authorization for WISeR: Six Ways Providers Can Prepare for AI-Assisted Prior Authorization Under the WISeR Model

January 26, 2026 Readers Write Comments Off

Engineering Prior Authorization for WISeR: Six Ways Providers Can Prepare for AI-Assisted Prior Authorization Under the WISeR Model
By Ryan Redman, JD

Ryan Redman, JD is product manager at Onspring.


The Wasteful and Inappropriate Service Reduction (WISeR) model, which introduces AI-assisted reviews into Medicare Fee-for-Service (FFS) prior authorization across six pilot states, went live in January 2026. That may expedite cost control, but it also raises high-stakes governance questions that are already being discussed in public debate.

Some critics have warned of an “AI death panel” dynamic in payer decisions, a fear that is now echoing into Medicare’s orbit as automation expands. For providers participating in original Medicare, the operating problem changes. Decisions must be made quickly, consistently, and defensibly, with evidence trails that withstand audits and appeals.

While the program is framed around reducing waste, it creates immediate governance, risk, and compliance challenges for providers who are deciding whether and how to submit services through the WISeR prior authorization pathway.

What changes most under WISeR is not clinical care, but the expectation that decisions are traceable, reviewable, and defensible as they move through provider ordering, scheduling, and revenue cycle workflows and into AI-assisted review on the payer side.

How should providers respond? The focus should be on preparing ordering, intake, and revenue cycle workflows first, then tuning for throughput.

Where the friction really is for providers

Before designing solutions, providers must understand where WISeR introduces operational and governance risk into existing workflows. Providers will still deliver care and submit claims, but WISeR introduces new intermediaries, AI technology vendors, between the provider and the Medicare Administrative Contractor.

With tech vendors now in the mix, incentives to curb waste cannot influence clinical judgment. Provider documentation and workflow controls must support medical necessity without introducing financial bias into clinical decision-making.

Teams will have to juggle prior authorization and pre-payment reviews. If a provider chooses not to submit a required prior authorization, the claim will be scrutinized pre-payment, delaying reimbursement by 45 days or more and potentially affecting cash flow. Because skipped prior authorizations push review to after the service, stalling cash and increasing appeals, routing, timers, and evidence capture must be precise.

The baseline requirement: transparency is non-negotiable. Prior authorization status, approval and denial patterns, turnaround times, and appeals must be visible across provider clinical, scheduling, and revenue cycle teams, not stitched together in spreadsheets, with human review and audit trails for any AI-assisted step.

Build a WISeR-ready architecture

With the friction points defined, the build becomes clearer. From a provider perspective, a WISeR-capable pipeline consists of six moving parts that function as a single system and support governance, risk monitoring, and compliance reporting.

  1. Data discipline at intake. Ensure that intake teams or software capture the specific clinical evidence that WISeR-targeted codes require at the point of order, before the order is signed. Don’t let the order proceed without the “evidence packet” attached.
  2. Pre-submission logic checks. Configure clearinghouse or revenue cycle management (RCM) practices to check claims before submission. If an issue arises, stop the claim internally before the AI vendor sees it.
  3. Clinical review queue (human in the loop). For providers, this includes ensuring that claims do not drop until a prior authorization number is on file. Use selectable reason codes for consistent reporting and notices. Human oversight remains a documented control, not an informal checkpoint.
  4. Evidence and disclosure bundles. Automatically generate a complete packet for each determination: inputs, rationale, attachments, timestamps, communications, and notices aligned to reason codes.
  5. Appeals and learning loop. Segregate appeals (different reviewers, fresh rationale). Track overturns and feed them into rule refinement, reviewer coaching, and documentation retraining where gaps are identified.
  6. Observability in the system of record. Instrument the same system that makes decisions: latency distributions, approval to denial ratios, appeal rates and outcomes, reviewer variance, and any AI usage or overrides. Providers should monitor denial trends closely to identify whether specific diagnosis codes or documentation patterns are triggering automated review.
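Steps 1 through 3 amount to an internal gate: nothing leaves the organization until the evidence packet and prior authorization are in place. A minimal sketch of that gate, with hypothetical claim fields:

```python
def ready_to_submit(claim):
    """Hold a claim internally if prior auth or required evidence is missing."""
    problems = []
    if claim.get("wiser_targeted"):
        if not claim.get("prior_auth_number"):
            problems.append("no prior authorization number on file")
        missing = [doc for doc in claim.get("required_docs", [])
                   if doc not in claim.get("attached_docs", [])]
        if missing:
            problems.append(f"evidence packet incomplete: {', '.join(missing)}")
    return (len(problems) == 0, problems)

ok, why = ready_to_submit({
    "wiser_targeted": True,
    "prior_auth_number": None,
    "required_docs": ["imaging_report", "clinical_notes"],
    "attached_docs": ["clinical_notes"],
})
print(ok, why)
```

Returning reason codes alongside the pass/fail result supports the consistent reporting and notices described in step 3.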

Controls that make speed defensible

Role-based access should determine who can view PHI, who can finalize a determination, and who can modify provider-controlled workflow rules and documentation requirements. When those rules or configurations change, record who reviewed them and maintain a versioned history of the changes. Logs should be append-only and time-stamped, with retention aligned to records schedules. Controls should also prevent WISeR-targeted claims from being submitted without a prior authorization number on file.

Because AI-supported reviews occur on the WISeR technical vendor side, providers are not tuning models, but monitoring outcomes. Pattern and variance checks should run continuously, monitoring approval and denial rates by category and population slices, tracking overturns on appeal, and flagging outliers for the governance group. Provider compliance, legal, security, and operations teams should review findings together to protect both reimbursement and regulatory posture.
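The variance checks described here reduce to straightforward statistics. A sketch (the category names and rates are invented) that flags a service category whose denial rate drifts far from its peers:

```python
from statistics import mean, stdev

def flag_outliers(denial_rates, z_threshold=1.5):
    """Flag categories whose denial rate deviates sharply from the group."""
    rates = list(denial_rates.values())
    mu, sigma = mean(rates), stdev(rates)
    return [cat for cat, rate in denial_rates.items()
            if sigma > 0 and abs(rate - mu) / sigma > z_threshold]

# With only a handful of categories, a single outlier inflates the spread,
# so the threshold here is deliberately modest.
monthly = {"cardiology": 0.08, "imaging": 0.07, "ortho": 0.09,
           "wound_care": 0.31, "derm": 0.08, "pt": 0.10}
print(flag_outliers(monthly))  # ['wound_care']
```

A governance group would then ask why wound care denials diverge: a documentation gap, a coding pattern, or vendor model behavior worth escalating.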

Proving it with metrics and turning plans into operations

Where providers use AI internally, such as limited adoption of AI-enabled claims review or denial prediction, those tools should be governed as part of existing clinical and revenue cycle controls rather than treated as core to the WISeR model itself.

Treat WISeR as an engineering problem: set up the core path, prove it on one service line, and then extend it with guardrails. Four phases keep providers moving without losing control.

  • Phase 1: foundation. Intake queues, evidence and disclosure bundles, and tamper-evident logs. Run one high-volume service line end to end. Ensure schedulers do not book WISeR-targeted procedures for original Medicare patients without a prior authorization number on file.
  • Phase 2: pilot and prove. Add audited versioning for rules and, where used by a limited set of providers, any AI-enabled claims review configurations. Require documented clinician sign-off for adverse determinations and keep clinical review independent from financial reporting in access controls and logs. Validate that claims for targeted codes cannot drop without prior authorization.
  • Phase 3: find gaps and retrain. Use denial and pre-payment review data to retrain physicians when documentation gaps emerge.
  • Phase 4: institutionalize and monitor. Run a standing governance cadence (compliance, legal, security, operations, clinical). Track a small, trusted set of metrics: time to decision (median and tail), backlog age, first-pass yield, appeal and overturn rates, reviewer variance, and cash flow impact from pre-payment review delays.
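The Phase 4 metric set lends itself to a few lines of code. A minimal sketch of time to decision (median and tail) plus first-pass yield, using a nearest-rank p95 as the "tail" and made-up case data:

```python
import statistics

def decision_metrics(turnaround_days, first_pass_approved):
    """Compute time-to-decision median and tail (p95 via the
    nearest-rank method) plus first-pass yield from parallel lists
    of per-case turnaround times and first-pass approval flags."""
    ordered = sorted(turnaround_days)
    rank = max(1, round(0.95 * len(ordered)))  # nearest-rank p95
    return {
        "median_days": statistics.median(ordered),
        "p95_days": ordered[rank - 1],
        "first_pass_yield": sum(first_pass_approved) / len(first_pass_approved),
    }

# Hypothetical turnaround times (days) and first-pass outcomes.
days = [2, 3, 3, 4, 5, 6, 7, 9, 14, 30]
approved = [True, True, False, True, True, True, False, True, True, True]
print(decision_metrics(days, approved))
```

The gap between the median and the tail is the point: a 5-day median with a 30-day p95 means a small set of cases is absorbing most of the cash flow risk.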

WISeR raises the bar on speed, transparency, and defensibility. For providers, success depends on preparing workflows and documentation before claims are submitted. Done well, this approach protects reimbursement, limits disruption, and may support future eligibility for CMS “Gold Card” exemptions as performance is evaluated during the pilot, allowing provider organizations to participate in WISeR without unnecessary risk. Getting data, documentation, and workflows right now puts providers in a position to earn flexibility later.

Readers Write: Early Warning System: How AI-Driven Near Miss Reporting Can Improve Patient Safety

January 19, 2026 Readers Write

Early Warning System: How AI-Driven Near Miss Reporting Can Improve Patient Safety
By Tim McDonald, MD, JD

Tim McDonald, MD, JD is chief patient safety and risk officer for RLDatix.


A nurse prepares to administer a medication to a patient, notices that it is the wrong medication, and corrects the order. A surgical assistant sees that a patient has been prepped for surgery on the wrong limb and corrects the error. A patient on a liquid diet receives a meal with solid food, but a vigilant nurse notices the mistake and substitutes an appropriate meal.

In hospitals and other healthcare facilities, near miss incidents are commonplace. Robust care protocols and training of clinicians, nurses, and other staff go a long way to reducing incidents and preventing patient harm.

But for a variety of reasons, near misses are underreported across healthcare, representing a multitude of lost opportunities.

The importance of understanding how many near misses occur

The World Health Organization defines a near miss as “an error that has the potential to cause an adverse event (patient harm), but fails to do so because of chance or because it is intercepted.”

Healthcare leaders recognize that a certain number of preventable errors are inevitable. Healthcare delivery is complex, emergency rooms are overcrowded, and staff who are dealing with higher patient volumes are understandably prone to error due to fatigue or burnout.

Hospital leaders want to take measures to reduce the number of preventable harm events and have an opportunity to use near misses as a way to prevent them from escalating into serious incidents. That said, having a large number of near miss reports can be beneficial to a hospital as it indicates that a strong safety culture exists and provides valuable learning opportunities for leadership. Hospitals that effectively encourage robust near miss reporting are better positioned to identify and solve problems before they lead to patient harm.

Heinrich’s safety triangle theory holds that 300 near misses occur for every severe accident that involves a serious injury or fatality. Once hospital leaders have a good idea of how many near misses are occurring, they can use AI tools to analyze their near miss data and predict their risk for more serious adverse events. But the real challenge is getting an accurate near miss number.

Most hospitals have voluntary event reporting systems that include reporting of near miss incidents. But the fact that they are voluntary means they likely underestimate the actual number of near misses occurring. A nurse who notices a patient recovering from surgery walking the hallways without non-slip socks may not report the incident for fear of blame or any consequences of reporting. They also may not report a near miss because they believe the event not to be severe enough to warrant it.

One of the biggest reasons for the underreporting of near misses is that clinical staff lack the time to log an incident report. For many hospitals, event reporting is manual and time-consuming, often taking around 10 minutes per report. Unless healthcare leaders take steps to simplify and streamline incident reporting, including leveraging AI tools to significantly reduce reporting time, they will lack real visibility into how many near misses are occurring and fail to fully understand the threats to patient safety.

Automating event reporting with AI

Advancements in generative AI and large language models (LLMs) offer the opportunity for hospitals to not only improve the accuracy of near miss reporting, but reduce the amount of time needed to log a report. These reporting efficiencies give back valuable time to clinical staff to care for patients. LLMs can process unstructured data, such as text, audio, and video transcripts, and understand the context, which makes it possible to extract and organize insights for a report.

For busy clinical staff, using an AI tool to accurately create an incident report, rather than filling out a report manually, could save considerable time.

As an example, say a nurse realizes that a patient with a penicillin allergy has been prescribed amoxicillin. The nurse prevents the dose from being given to the patient and requests an alternative prescription, preventing harm to the patient. The nurse takes a few minutes to make a verbal report using an AI-based event reporting tool, and moves on to their next patient. From the nurse’s voice notes, the event reporting tool generates a complete incident report, giving hospital leaders valuable insights about what happened.

Leaders can use machine learning tools to analyze near miss reports over time and detect patterns and trends, as well as anticipate risks, in order to be able to prevent harm before it happens.
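That pattern-and-trend analysis could be sketched as a simple month-over-month rate check. The report categories, months, and threshold below are illustrative, not drawn from any particular reporting system:

```python
from collections import Counter

def rising_categories(reports, min_increase=2):
    """Given near-miss reports as (month, category) tuples, flag
    categories whose count rose by at least `min_increase` between
    the two most recent months."""
    months = sorted({m for m, _ in reports})
    if len(months) < 2:
        return []
    prev, last = months[-2], months[-1]
    prev_counts = Counter(c for m, c in reports if m == prev)
    last_counts = Counter(c for m, c in reports if m == last)
    return [c for c in last_counts
            if last_counts[c] - prev_counts.get(c, 0) >= min_increase]

reports = [
    ("2026-01", "medication"), ("2026-01", "falls"),
    ("2026-02", "medication"), ("2026-02", "medication"),
    ("2026-02", "medication"), ("2026-02", "falls"),
]
print(rising_categories(reports))  # ['medication']
```

A production system would use longer windows and statistical tests rather than a raw count delta, but the shape of the analysis is the same: compare recent near miss rates against a baseline and surface what is climbing.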

Automating incident reporting, including near misses, helps reduce barriers to reporting and gives clinical staff a more active role in reducing harm system wide. 

Better tracking of near misses can serve as an early warning system

In a way, near miss incidents can indicate the diligence of clinical staff. An attentive nurse who notices an unsecured electrical cord and prevents a patient from tripping is obviously well trained.

Improved near miss reporting creates opportunities to improve processes and protocols, such as improved medication safety protocols, fall prevention measures, emergency department redesign, or training on safe injection methods.

When they are well understood and documented, near misses can act as an early warning system. When hospital leaders have a complete picture of incidents where a patient could have been harmed but wasn’t, only because of the timely intervention of a staff member or just plain luck, they can predict their risk of serious adverse events. They can understand their vulnerabilities and take corrective actions that prevent future incidents of harm.

Hospital leaders shouldn’t leave the future of patient safety to chance. Generative AI tools offer the opportunity for clinical staff to file incident reports seamlessly within their daily workflow, increasing the number of near miss reports received while decreasing the administrative burden that leads to clinician burnout and fatigue. AI and data analytics solutions give hospital leaders the ability to analyze trends over time and gain insights into how many near misses are actually occurring.

With effective use of AI-based tools, staff collaboration, and data-informed decision making, hospital leaders can raise standards of care and safety, reduce risk, and improve outcomes for all.

Readers Write: The Operational Divide in Healthcare: Epic-First Health Systems Versus Real-Time Health Systems

January 12, 2026 Readers Write

The Operational Divide in Healthcare: Epic-First Health Systems Versus Real-Time Health Systems
By Buzz Stewart, PhD, MPH

Walter “Buzz” Stewart, PhD, MPH is CEO of Medcurio.


A split is forming across US healthcare, a divide that health system leaders are driving, overtly or by default.

On one side are the organizations building real-time reflexes into their operations. On the other are the organizations whose pace is still dictated by vendor-defined data access paths, delayed data, and workflows that are constrained by the vendor architecture.

This divide isn’t philosophical. It is operational. And it is widening fast. This will be the competitive divide for the next decade.

Two Emerging Camps

Markets don’t stall because of a single vendor. They stall when incumbents limit the freedom for customers to move faster, choose better, and innovate on top of their own data. As modernization accelerates, health systems are sorting into two identifiable groups:

Real-Time Health Systems

These organizations are developing the ability to govern their own data access, sense operational signals as they occur, and route actions immediately. They are beginning to build reflex loops, which are lightweight, programmable logic that prevents revenue loss (fewer denials, reduced LOS), mitigates safety drift, reduces manual intervention, and stabilizes workflows before problems compound. They seek destiny control and predictable value creation.

These organizations lean toward independence in how they access and use their own data, and they treat delay as a form of waste rather than an unavoidable byproduct of enterprise IT.

Epic-First Health Systems

These organizations face the same challenges as real-time health systems, but move at the speed of vendor-mediated access. They depend on (costly) sanctioned interfaces, roadmap timelines, batch extracts, and manual processes to identify operational issues. Tooling is limited, to say the least.

These organizations treat delay as an unavoidable byproduct of enterprise IT, and accumulating operational drag is their norm.

Why the Divide Is Forming

Four forces are driving the move to real-time health systems faster than the industry expected:

  • Labor costs in healthcare have risen faster than inflation for five decades, while inflation-adjusted revenue per encounter has steadily declined as commercial mix shrinks. There is no way out from under the current operating model, and no real way to differentiate in most markets if you keep playing the old game.
  • Operational latency is a margin killer. Discharge delays, denials identified too late, referrals never acknowledged, eligibility errors discovered only after work is performed. Growth in small lags produces large financial consequences.
  • Vendor-controlled access is mismatched to modern workflow demands. Today’s problems require continuous monitoring, immediate detection, and on-demand logic. Architecture designed for retrospective insight isn’t built for real-time operations. HL7/X12 alone doesn’t cut it, and FHIR resources and vendor-gated APIs are imprecise and overly narrow.
  • AI and automation cannot run on delayed signals. The industry is extremely optimistic about automation, but models and agents (and the workflows health systems are pointing them toward) are useless without upstream real-time detection. If an organization only learns that a problem occurred after the fact, no amount of workflow redesign can compensate.

These forces have shifted the strategic question from “What technology do we need?” to “How fast can we recognize and act on our own operational signals?” as the foundation for automation and innovation capabilities.

The Hidden Cost of Delay (Waiting is a Cost Center)

  • Throughput slowdowns that no one sees until the backlog materializes.
  • Denials that could have been prevented if noticed earlier.
  • Eligibility mismatches found only in downstream billing.
  • Referral leakage due to missed handoffs.
  • Safety triggers that surface only when reports are pulled.

Every service unit has its list, but they look remarkably similar across health systems.

While these issues rarely appear as technology failures, they often show up as operational realities. Every one of these problems is a real-time problem trapped in a legacy data access model. The cost of delay is not just inefficiency, but also lost margin, avoidable friction, patient harm, and workforce strain.

What Real-Time Reflexes Look Like

Organizations that operate in real time do not wait for dashboards to tell them what happened. They program their systems to notice and act on what matters in real time:

  • Detecting a mismatch the moment it occurs.
  • Automatically triggering a task or action.
  • Routing information directly to the workflow that requires it.
  • Logging the event without human intervention.
  • Measuring impact within hours, not quarters.
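A reflex loop of this kind can be sketched in a few lines. The eligibility-mismatch event shape, queue name, and handler below are all hypothetical, a sketch of the detect-act-route-log pattern rather than any vendor's API:

```python
import time

def reflex_rule(event, task_queue, audit_log):
    """Minimal reflex loop: detect a mismatch the moment the event
    arrives, trigger a task, route it to the owning workflow, and
    log the action without human intervention."""
    if event.get("eligibility_on_file") != event.get("eligibility_at_service"):
        task = {
            "action": "verify_eligibility",
            "patient_id": event["patient_id"],
            "route_to": "registration_worklist",  # hypothetical queue name
        }
        task_queue.append(task)
        audit_log.append({"event": event, "task": task, "ts": time.time()})
        return task
    return None

queue, log = [], []
reflex_rule({"patient_id": "P1", "eligibility_on_file": "PLAN_A",
             "eligibility_at_service": "PLAN_B"}, queue, log)
print(len(queue), len(log))  # 1 1
```

The point of the pattern is that detection, routing, and logging happen in the same moment the signal arrives, rather than in a report pulled days later.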

Acting and adapting fast, which few systems do well today, is a strategic market differentiator and quickly becoming a survival imperative as this divide widens. This is the identity that high-performing systems realize they must rise to.

Claiming Control of Your Own Data

The executive unlock is straightforward.

  • Your vendor has an obligation to allow access to your data however you choose.
  • Your vendor has a legal duty not to interfere with your use of your data.
  • Acting on your rights does not mean being in conflict with your vendor.
  • Sovereignty is not about choosing one technology path over another. It is about ensuring that the parts of the health system that depend on real-time signals (care transitions, revenue cycle, safety, operations) are not forced into delay by design.

Crossing the Divide: A Simple Playbook

Health systems don’t need multi-year digital transformation programs to build real-time reflexes. They need clarity and sequence.

  1. Map your highest-delay workflows. Where do teams wish they had real-time visibility but are stuck with overnight insight?
  2. Evaluate control. What should legitimately be controlled by the vendor, and what should be governed by the health system? This is almost always the inflection point.
  3. Test one workflow in real time. Pick one workflow and simply measure what happens when teams get the signal immediately instead of a day later. No committees or giant work plan, just a clean before and after.
  4. Scale reflex logic across additional domains. Once a health system sees its first real-time win, the pattern becomes contagious.

A Narrow Window

Every health system will be forced to modernize its reflexes. The question is timing.

Organizations that move now will define the performance frontier and expand markets. Those that wait to modernize will fall further behind.

Readers Write: The Healthcare Cybersecurity Landscape For 2026

January 7, 2026 Readers Write

The Healthcare Cybersecurity Landscape For 2026
By Russell Teague

Russell Teague is chief information security officer of Fortified Health Security.


Healthcare is entering the new year facing the same uncomfortable truth it has confronted for more than a decade: no industry faces a higher financial or operational burden from cyber incidents. Even as technology advances and awareness grows, the cost of a healthcare data breach remains the highest of any sector, and the implications are becoming more severe for patient care, financial performance, and organizational resilience.

The latest data confirms what many leaders already feel day-to-day: cybersecurity is no longer just an IT issue or a compliance checkbox. It is a top-line financial risk, a bottom-line operational disruptor, and one of the most material threats to patient safety.

Healthcare Once Again Leads All Industries in Breach Cost

Healthcare continues its longstanding position as the most expensive industry for data breaches. In 2025, the average cost of a healthcare breach reached $7.42 million, marking the 14th consecutive year that healthcare ranked #1 among all industries. While this represents a decrease from $10.1 million in 2024, the reduction does not signify improved risk posture across the sector. Instead, the decline reflects a combination of factors:

  • Evolving incident reporting methodologies.
  • The normalization of ransomware payments.
  • Increased reliance on third-party negotiations.
  • More sophisticated data-exfiltration containment practices.

But the underlying risk drivers (legacy environments, fragmented vendor ecosystems, thinly stretched workforce capacity, and the growing attack surface from digital transformation) remain unchanged.

The $7.42 million average still places healthcare well above all other highly regulated sectors, and it reflects only direct, measurable costs. The true financial impact is often far greater once organizations consider indirect operational and reputational fallout.

Breach Frequency and Threat Pressure Are Accelerating

The cost of individual breaches is only part of the story. Frequency is rising across the sector, expanding total exposure for hospitals, health systems, and clinical organizations. In 2025, healthcare experienced one of the highest incident rates of any industry, driven by persistent ransomware campaigns, increasingly complex third-party and supply chain intrusions, targeted email compromises involving PHI, and exploit attempts against aging clinical systems and medical devices. The growing automation of attacker workflows that are powered by AI has only accelerated this trend.

Attackers view healthcare as a high-pressure, high-reward environment. The combination of operational urgency, patient safety implications, and deeply interconnected technology ecosystems makes the sector uniquely attractive. Historically, healthcare organizations have been among the fastest to pay and the most vulnerable to disruption, further incentivizing attackers.

As breach frequency rises, so does cumulative financial exposure. Even organizations that avoid large-scale incidents still absorb escalating costs tied to smaller breaches, investigative work, vendor assessments, rising insurance premiums, and heightened regulatory scrutiny.

The Operational Fallout: Downtime as a Major Financial Driver

One of the most significant, and often underreported, costs of a cyber incident is operational downtime. In 2025, hospitals experienced an average of 19 to 23 days of disruption following major cyber events, affecting everything from EHR access to imaging, lab systems, surgical schedules, and emergency department operations. These outages frequently force diversion events, delay procedures, and push frontline staff into manual workflows that dramatically slow care delivery.

The financial impact is substantial. Organizations lose millions in net patient revenue as billing cycles stall, coding backlogs grow, and clinical productivity drops. Delayed reimbursement and extended recovery periods often compound these losses. At the same time, hospitals face increased overtime expenses, temporary labor costs, and rising patient dissatisfaction, all of which further erode operating margins. For rural and independent facilities with limited redundancies or tighter financial constraints, the impact can be especially severe.

Operational downtime also creates long-tail effects that extend well beyond the initial incident. Staff burnout rises as clinical teams struggle through prolonged manual processes, turnover risk increases, and organizations become more susceptible to future attacks during recovery periods. In many cases, the cumulative operational and financial damage eclipses the cost of the breach itself.

Why the Breach Lifecycle Matters: 280 Days of Exposure

A defining characteristic of healthcare is how long breaches persist before being identified and contained. Last year, healthcare averaged a 280-day breach lifecycle, exceeding the global average of 241 days. On average, it took 207 days to identify a breach and another 73 days to contain it.

This extended lifecycle dramatically elevates financial exposure. Lengthy dwell time gives attackers ample opportunity to move laterally, access more systems, compromise clinical applications, and exfiltrate sensitive data.

Prolonged exposure usually reflects deeper, systemic challenges across health systems, such as poorly tuned tools, redundant or overlapping technologies, gaps in visibility across environments, inconsistent processes or response playbooks, staffing shortages that drive alert fatigue, and weak segmentation that enables lateral movement. Many organizations also struggle with incomplete logging or monitoring coverage, which further delays containment.

Shortening the lifecycle is one of the most effective ways to reduce breach costs, often by millions. Health systems that detect and contain incidents faster consistently demonstrate stronger program maturity, more rationalized technology stacks, and clearer operational processes aligned to rapid response.

Cyber Insurance Costs Are Rising — for Both Coverage and Claims

In 2025, cyber insurance premiums for healthcare continued to increase, driven by a combination of higher claim severity, rising incident frequency, expanding legal and regulatory exposure, and the growing complexity of medical devices, cloud services, and interconnected vendor environments. Many recent breaches tied to third-party partners have created additional uncertainty for insurers, especially when accountability is difficult to determine.

As a result, carriers are tightening underwriting standards. Organizations now face stricter requirements around MFA enforcement, patching cadence, SOC maturity, third-party oversight, log retention, and evidence of incident response readiness that includes documented plans and playbooks. Those unable to demonstrate adequate maturity are experiencing significantly higher premiums, reduced coverage limits, or, in some cases, losing eligibility for coverage altogether.

The Hidden Costs: Reputation, Trust, and Long-Term Clinical Impact

Beyond direct financial losses, breaches create a secondary wave of disruption that can last months or even years. Organizations often experience a decline in patient trust, heightened scrutiny from regulators and auditors, and increased turnover among clinical, operational, and executive staff. Many also find themselves at a disadvantage when pursuing new strategic partnerships as potential collaborators question their security posture.

These incidents can also drive up vendor-related costs as partners impose stricter security requirements, more frequent assessments, and higher fees tied to their own risk management obligations. Taken together, these indirect, long-tail impacts create significant financial and operational strain, particularly for health systems operating in competitive markets or with already limited resources.

A Clear Path Forward: Maturity as a Financial Strategy

The latest data reinforces a simple truth: the cost of healthcare breaches remains high not just because of attacker sophistication, but because of program immaturity. Organizations that invest in visibility, alignment, rationalization, and early detection reduce breach lifecycle times and significantly limit downstream financial impact.

The most cost-effective cybersecurity strategy is not more tools. It is a mature cyber program, fully rationalized for better alignment with the business goal of protecting patient safety and operational resilience. When people, process, technology, and financial investment work in concert, breach costs drop, operational stability increases, and resilience becomes a competitive advantage.

Healthcare Can No Longer Measure the Cost of Inaction in Dollars Alone

Last year’s data makes it unmistakably clear that healthcare can no longer afford to view cybersecurity as a technical problem sitting on the periphery of operations. The financial impact of breaches is severe, but the deeper cost is the strain they place on clinical delivery, patient trust, workforce capacity, and organizational resilience. Every day a breach goes undetected, every hour systems are offline, and every dollar spent recovering from preventable disruption reflects a direct threat to the mission of safe, reliable care.

The real risk facing healthcare organizations is not the next attacker. It’s the continued reliance on underdeveloped, unaligned, and unprepared cybersecurity programs. More tools will not solve this challenge, and increased spending without strategic maturity will not change outcomes. What will make a measurable difference is a cyber program that is fully rationalized, integrated, and aligned with the fundamental business goals of patient safety and operational stability.

Organizations that invest in visibility, speed, resilience, and coordinated response are already seeing the benefits: shorter breach lifecycles, fewer operational disruptions, reduced financial exposure, and stronger trust from the communities they serve. Those that delay modernization will continue to face rising costs, extended downtime, and a risk profile that becomes increasingly difficult to manage.

2026 must be the year when healthcare stops treating cybersecurity improvements as optional or incremental and starts approaching them as essential to sustaining care. Cybersecurity in healthcare is no longer just a business function or an IT priority. It is a foundational element of patient safety, and the cost of inaction has never been higher.

Readers Write: 2026 Predictions: The Great Data Quality Reckoning in Healthcare IT

January 5, 2026 Readers Write

2026 Predictions: The Great Data Quality Reckoning in Healthcare IT
By Jodi Amendola

Jodi Amendola is executive advisor for the Supreme Group.


The healthcare IT industry has been playing the “Let’s Improve Interoperability!” game for what feels like decades.

Today, it’s CMS Aligned Networks, TEFCA, and information-blocking-rule enforcement. Yesterday, it was “Meaningful Use” and the HITECH Act. Before that, it was Regional Health Information Organizations and HL7.

While these efforts to improve interoperability have certainly been laudable, they’ve obviously been lacking, because we’re still talking about the problem. A recent report from KLAS Research on the state of EHR interoperability today offers some helpful context:

  • While patient records are more available than ever, clinician satisfaction with external integration remains poor.
  • Clinicians continue to grapple with issues like duplicative records, inconsistent formats, and poor data mapping, which limit the clinical value of shared data.
  • Participation in data-sharing networks by EHR vendors has increased, but data usability has not.

The last point is critical, as all the hope about AI in healthcare will go unrealized without a foundation of accurate, comprehensive patient data for AI to base its decisions and recommendations on.

In the coming year, the healthcare industry will continue to grudgingly come to terms with a difficult truth: Interoperability means very little without data quality. Issues highlighted in the KLAS report, like duplicative patient records and fragmented medical histories, undermine cost and quality improvement efforts and lead to suboptimal patient outcomes.

As a result, when it comes to communicating with clients and prospects, health IT vendors will need to emphasize not only their role in delivering better interoperability, but also in improving the accuracy and usability of patient data.

It will also mean preparing for greater scrutiny, harder questions from media and industry analysts, and the need to demonstrate real value rather than aspirational promises.

To get ready, it’s important to ensure that PR and marketing do the following:

  • Elevate proof over promises. With key influencers and decision-makers growing more skeptical about lofty promises, every claim needs to be backed with facts and statistics. Punchy copy is great, but hard data, case studies, and third-party research carry more weight.
  • Highlight how data quality delivers clinical value. It’s not enough to merely talk about how your organization enhances interoperability. Instead, how does it bolster data integrity, eliminate duplicative records, improve outcomes, or build clinician trust? Offer clear, measurable examples of your technology’s clinical impact.
  • Focus messaging on responsible AI enablement. Solid data is the difference between “quality in, quality out” and “garbage in, garbage out” when it comes to AI. Accordingly, health tech marketing should strive to position your organization as an industry champion of the accurate, complete, transparent data that is needed to drive responsible and reliable AI insights.

In 2026, it’s less about expanding the pipes of healthcare data, and more about increasing the quality of the information that flows through them. As expectations and scrutiny around data quality grow, organizations that ground their communications in evidence, clarity, and responsible innovation will stand out.

Readers Write: Application Portfolio Management: The Hidden Key to Healthcare Cybersecurity Resilience

December 22, 2025 Readers Write

Application Portfolio Management: The Hidden Key to Healthcare Cybersecurity Resilience
By Kevin Erdal

Kevin Erdal is president of advisory services at Nordic.


Healthcare leaders are navigating a tough reality: protecting margins while making operations more resilient. Financial pressures, workforce shortages, and regulatory complexity mean every investment must deliver real, measurable impact.

At the same time, cyber threats are amplifying these pressures. A single breach can wipe out hard-won savings, derail transformation projects, and compromise patient safety.

In this environment, application portfolio management (APM) is a strategic necessity.

Think of APM as a smarter way to manage your technology stack. By taking inventory, trimming what you don’t need, and securing what you keep, you can cut waste, reduce risk, and lay the groundwork for streamlined, patient-centered operations without adding complexity.

What are the risks of ignoring application portfolio management?

Healthcare is the most expensive sector for cyberattacks, with the average breach costing $11 million, three times the global average. Ransomware is the most prevalent threat, accounting for approximately 70% of healthcare cyberattacks. In 2024 alone, 118 confirmed ransomware attacks accessed more than 15 million patient records.

The operational impact across our industry is staggering:

  • 17 days of average downtime per ransomware incident, costing $1.9 million per day.
  • 92% of healthcare organizations targeted by cyberattacks in 2024.
  • $21.9 billion in downtime losses over six years.

Most importantly, the risk to patient safety can’t be overstated. When systems fail, care delivery is disrupted, treatments are delayed, and lives are at risk.

Why traditional cybersecurity isn’t enough

Most healthcare organizations rely on perimeter defenses like firewalls, VPNs, and intrusion detection systems, but attackers often exploit internal vulnerabilities, especially through unmonitored legacy applications and shadow IT.

If you don’t know what’s running in your environment, you can’t protect it. And you may be paying for apps you don’t even use.

What is application portfolio management (APM)?

Application portfolio management is the structured process of managing applications based on value, cost, risk, and performance. It includes:

  • Inventory and classification of all your applications.
  • Risk and value assessment to understand security posture and business impact.
  • Lifecycle and rationalization planning to retire redundant or high-risk apps.

Done right, APM is a strategic enabler for efficiency, modernization, and cost control.
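The inventory-and-classification step can be pictured as a simple data model. The sketch below is purely illustrative; the field names (such as vendor_supported and handles_phi) and the classification rule are assumptions for the example, not a standard APM schema:

```python
from dataclasses import dataclass

# Hypothetical inventory record; every field name here is an assumption,
# not a standard APM schema.
@dataclass
class Application:
    name: str
    owner: str                  # business or clinical owner
    vendor_supported: bool      # False suggests legacy / end-of-life exposure
    handles_phi: bool           # touches protected health information
    annual_cost: float          # licensing + maintenance + support
    monthly_active_users: int   # rough proxy for business value

inventory = [
    Application("LegacyLabViewer", "Lab", False, True, 40_000, 3),
    Application("SchedulingHub", "Ops", True, False, 120_000, 900),
]

# One possible classification rule: unsupported apps that touch PHI
# land in the highest-risk tier.
high_risk = [a.name for a in inventory
             if a.handles_phi and not a.vendor_supported]
print(high_risk)  # ['LegacyLabViewer']
```

Even a spreadsheet with these columns delivers the core benefit: you cannot assess risk or value for an application you have not recorded.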

How does APM deliver real ROI?

APM allows you to clean up your tech stack and create significant wins across your organization.

  • Visibility = control. You can’t secure what you don’t know exists.
  • Risk prioritization. Spot high-risk apps before they become breach entry points.
  • Legacy exposure mitigation. Retire unsupported apps before attackers exploit them.
  • Cost savings. Rationalization reduces licensing, maintenance, and support costs.
  • Compliance confidence. Stay ahead of HIPAA and other regulatory requirements.
  • Foundation for innovation. Simplify before you modernize.

APM delivers value across the enterprise by aligning technology decisions with business, financial, and clinical priorities:

  • Chief information officers gain alignment between IT investments and strategic goals, paving the way for digital transformation.
  • Chief information security officers strengthen risk management and improve threat response.
  • Chief financial officers see hard ROI through cost savings and breach avoidance.
  • Chief medical information officers benefit from streamlined clinical workflows and better data integrity.

How to get started with application portfolio management

Here’s a practical roadmap for healthcare leaders:

  1. Start with an inventory. Capture every app across clinical and business functions.
  2. Map applications to workflows. Understand their role in care delivery and operations.
  3. Assess risk and compliance. Evaluate vendor security posture, data sensitivity, and HIPAA alignment.
  4. Rationalize and retire redundant or risky apps. Reduce attack surface and technical debt.
  5. Integrate APM insights into governance programs. Embed findings into cybersecurity strategy and IT planning.
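Steps 3 and 4 of the roadmap amount to scoring each application and flagging rationalization candidates. Here is a minimal sketch of that logic; the field names, weights, and thresholds are all illustrative assumptions, not a prescribed methodology:

```python
# Toy rationalization pass over an app inventory (roadmap steps 3 and 4).
# All field names, weights, and thresholds are illustrative assumptions.
inventory = [
    {"name": "FaxGateway",  "supported": False, "phi": True,  "users": 2,   "cost": 30_000},
    {"name": "EDTracker",   "supported": True,  "phi": True,  "users": 450, "cost": 90_000},
    {"name": "DuplicateBI", "supported": True,  "phi": False, "users": 5,   "cost": 60_000},
]

def risk_score(app):
    # Higher = riskier: unsupported software and PHI exposure dominate.
    return (0 if app["supported"] else 3) + (2 if app["phi"] else 0)

def retire_candidates(apps, min_users=10):
    # Flag apps that are high-risk, or barely used relative to their cost.
    return [a["name"] for a in apps
            if risk_score(a) >= 3
            or (a["users"] < min_users and a["cost"] > 25_000)]

print(retire_candidates(inventory))  # ['FaxGateway', 'DuplicateBI']
```

In practice the scoring would also weigh clinical criticality and integration dependencies, which is exactly why step 2 (mapping applications to workflows) precedes any retirement decision.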

How the right partner accelerates APM success

Finding redundant apps is just the start. The real challenge is managing governance, staying compliant, and retiring systems without disrupting care or losing critical data. That’s where the right partner can help. Experienced healthcare IT advisors bring proven, scalable frameworks and tools to make the application portfolio management process faster and safer.

Partnering gives you the structure and support to reduce risk, achieve measurable ROI, and build a solid foundation for future innovation.

Bottom line: APM is foundational to cybersecurity resilience

Cyber threats and digital complexity aren’t slowing down, and neither can you. Application portfolio management is one of the most practical, high-impact steps you can take to strengthen cybersecurity, protect margins, and build a foundation for future-ready operations.

The cost of doing nothing? Higher risk, wasted resources, and missed opportunities. The upside of acting now? You simplify your environment, reduce vulnerabilities, and free up capacity to deliver patient-centered care that’s safer and more efficient.

APM is a strategic lever for margin resilience, operational efficiency, and innovation. Start today and position your organization to do more with less while safeguarding your mission and the people you serve.

Readers Write: The Missing Clinical Voice in Healthcare IT

December 8, 2025 Readers Write 1 Comment

The Missing Clinical Voice in Healthcare IT
By Susan Grant, DNP, RN

Susan Grant, DNP, RN, is chief clinical officer at Symplr.


For years, the weight of healthcare technology decisions has fallen solely on IT teams, inadvertently leaving clinicians and IT operating in silos. Yet clinicians play a critical role in determining whether technology implementations succeed. Deloitte research shows that clinicians rate technology initiatives far more positively when they are actively involved, from design through implementation.

Despite this, only 38% of frontline clinicians report having been consulted on digital health workflows or new applications. We need to bring the clinical perspective into technology decisions earlier and more consistently. With physician use of AI already up 78% from 2023, clinicians both want and deserve a larger role in shaping these conversations.

The value of clinical input

Health systems must engage across departments, from IT to executives and clinical teams, to deliver successful technology implementations. Nurses alone make up the largest segment of the healthcare workforce. Because clinicians directly experience the problems that many solutions aim to solve, they offer essential insights that should guide decision-making.

Cross-functional communication is equally critical. Open discussions about technology challenges and workflow pain points help to align around the shared goal of streamlining work so that providers can focus on patient care. These conversations also allow IT professionals to demonstrate the benefits of new tools early, reducing resistance and building confidence that the technology reflects clinicians’ needs.

Historically, clinicians have too often been excluded from these conversations, leading to painful rollouts, misaligned expectations, and limited influence over tools designed for them. Bringing the clinical voice to the table can change all of that.

Clinicians want to be more involved

Clinicians want to play a bigger role in healthcare technology decisions. Our 2025 Compass Survey shows that 85% of clinicians want more influence over software purchasing decisions, up from 72% last year and 51% in 2022. This trend shows that care teams no longer view technology and innovation as strictly an IT responsibility. They recognize the value technology brings to their daily work and to delivering optimal care.

IT and operations professionals also acknowledge the advantage that clinicians bring to these decisions. Both groups show increased interest in clinician involvement. This year’s survey found that 77% of operations leaders and 76% of IT teams actively seek clinician participation.

What’s next?

Organizations are seeking to implement technology that improves care delivery, including AI and scheduling tools. Ensuring that clinicians participate throughout the full implementation process prevents problematic deployments and increases ROI. As a former nursing leader at large health systems, I’ve seen the direct positive impact digital tools can have on clinicians, saving time, reducing stress, and ultimately improving the healthcare experience for patients.

We are in the midst of a clinician shortage, with the National Council of State Boards of Nursing reporting that 40% of RNs intend to leave the field in the next five years. Ensuring that clinical voices guide technology decisions can improve daily life for this workforce.

Strengthen alignment and communication

Healthcare leaders can take several approaches to address this issue. Teams should begin by aligning on central priorities across clinical and IT groups to foster communication and gain a better understanding of each other’s goals. While they may have different priorities, both sides share the guiding objective of improving patient care.

Leadership should demonstrate the value of technology upfront to strengthen clinicians’ trust. After facing so many initiatives that have not helped, clinicians need concrete examples of how new tools can make their jobs easier.

To increase clarity and confidence in new tools, leadership should also provide comprehensive training and education for the healthcare workers who will use them. This approach offers transparency and addresses change fatigue, helping differentiate new technology rollouts from earlier efforts that left clinicians burned out.

Opening the lines of communication in a continuous and intentional way can transform how systems operate. When leaders gather clinical input before decisions and continue the conversation post-rollout, they increase collaboration, elevate clinician voices, and improve the success of each initiative.

Learn from past experiences

To share a personal example, in a previous role I saw nurses become frustrated with a new AI tool because incoming messages disrupted their communication with other providers. A simple conversation could have revealed this problem sooner. But because ongoing feedback was not part of the post-implementation plan, no one realized that the tool designed to help them was instead creating more work.

When healthcare organizations use these strategies and place greater value on the clinical experience, they create a culture of innovation and collaboration that increases enthusiasm for change and avoids overpromising and underdelivering.
