
Readers Write: A Global Perspective on Advancing Precision Medicine with Genomic EHR Integration

March 18, 2026

By Jennifer Ford

Jennifer Ford, MBA is manager of clinical product management and genomics at Meditech.


The promise of precision medicine is simple: using genetic data to identify the best treatment for each patient as quickly as possible.

During my travels to South Africa and Namibia, healthcare leaders in both urban and remote areas shared enthusiasm for the role of EHRs in incorporating genomic data to guide treatment decisions. However, it also made me wonder: if the passion for advanced technologies like genomics is so universally embraced, what barriers are holding us back from widespread adoption?

The Challenges of Adopting a Precision Medicine Program

Despite its promise, adoption of genomics and precision medicine has been slow. Several challenges, both real and perceived, are hindering progress:

  • Costly testing. While the costs of personal genetic testing have declined significantly in recent years, many patients still face difficulties accessing genetic testing due to high out-of-pocket costs and limited or no coverage.
  • Limited availability of testing. Not every health system offers this type of testing, either due to a lack of local testing facilities, insufficient funding, or the absence of a service line.
  • Lack of understanding of testing value. Many healthcare providers are unfamiliar with the use of genomics in diagnosis and treatment, particularly those working in environments where genomic data is not a prevalent part of the EHR.
  • Lack of EHR integration. Providers often don’t have access to this data within their EHR workflows, and when they do, it is often a static document attached to the patient’s record that is too cumbersome to sift through.
  • Result data is not actionable. The lack of standardized clinical alerts or decision-support systems that incorporate genomic data means providers may lack the tools or training to make genomically informed decisions.
  • Testing is reserved for academia. Precision medicine remains more prevalent in academic and research centers than in community-based health systems, where most care occurs.

These challenges and misconceptions often stem from experiences that predate the integration of genetic data into the EHR, but the paradigm can change.

Overcoming the Challenges of Adopting a Precision Medicine Program

I’ve worked with healthcare leaders who are integrating genomics into the EHR. When genetic data is ingested discretely into the EHR, clinical alerts become available for each patient based on their genetic information, enabling personalized patient care.
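To make that concrete, here is a minimal sketch of the kind of rule a decision-support engine can evaluate once a result exists as a discrete field rather than a scanned attachment. The gene-drug pairing echoes well-known pharmacogenomic guidance, but the field names, rule table, and message text are illustrative assumptions, not any vendor's actual schema.

```python
# Illustrative sketch: a pharmacogenomic alert fired from discrete genomic
# fields in the patient record (field names are hypothetical, not a vendor's).

# Discrete result: gene -> phenotype, as a structured field rather than a PDF.
patient_genomics = {"CYP2C19": "poor metabolizer"}

# Rule table modeled loosely on published pharmacogenomic guidance
# (illustrative only; not a clinical reference).
PGX_RULES = {
    ("CYP2C19", "poor metabolizer", "clopidogrel"):
        "Reduced activation of clopidogrel expected; "
        "consider alternative antiplatelet therapy.",
}

def check_order(drug: str, genomics: dict) -> list[str]:
    """Return any pharmacogenomic alerts triggered by a new medication order."""
    alerts = []
    for gene, phenotype in genomics.items():
        message = PGX_RULES.get((gene, phenotype, drug.lower()))
        if message:
            alerts.append(f"PGx alert [{gene} {phenotype}]: {message}")
    return alerts

print(check_order("Clopidogrel", patient_genomics))
```

The point is structural: a static document attached to the chart cannot drive this check at order entry; a discrete field can.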

Genetics is not just for academic centers. I’ve seen the value that community hospitals gain when patients receive genetically led services locally rather than traveling to larger academic medical centers. Equipping community clinics with a user-friendly, plug-and-play solution lets them focus on translational research that will lower costs, improve accessibility, and achieve better patient outcomes.

The Benefits of Adopting a Precision Medicine Program

The benefits of genomics in healthcare are becoming increasingly clear. The use of genomic data extends beyond cancer treatment, as health systems are using it to improve behavioral health treatment, newborn and pediatric care, and health and weight management. Having effective technology that can analyze genomic data to provide clinical support empowers clinicians to deliver more targeted patient treatment and support population health objectives. Adopting a genomics program can also support service line growth.

Global Precision Medicine Initiatives

Various initiatives worldwide are bringing genetic testing to the forefront of healthcare. Each area of the world faces distinct challenges related to geography, patient demographics, and scaling testing opportunities.

In South Africa and Namibia, healthcare leaders shared their desire to improve access to genetic testing in African nations. To reduce costs and maximize the benefits of genomic data, they are experimenting with leveraging social determinants of health to identify and prioritize patient cohorts to whom they will deploy testing. Where technological infrastructure may be limited, national labs are looking for ways to more equitably transport and perform testing from remote villages using drones, satellite internet services, and other technologies.

In England, the National Health Service (NHS) announced a £650m investment to provide every baby in England with DNA screening to identify potentially fatal diseases and to offer personalized healthcare as part of the government’s 10-year plan. The NHS recognizes that when patients receive personalized healthcare to prevent ill health before symptoms begin, it will reduce the pressure on NHS services and help people live longer, healthier lives. In the US, a similar approach has been announced in Florida’s Sunshine Genetics Act, which funds newborn genome sequencing pilots. These efforts are helping shift the paradigm toward proactive, personalized healthcare.

In Maryland, Frederick Health operates a dedicated precision medicine and genetics clinic that uses genomic data for precision medicine in behavioral health and beyond. In a Scottsdale Institute presentation, they shared how they addressed cost concerns by negotiating testing costs with laboratories and started a rapidly growing clinical trials program. They use genomic data to identify patients for clinical trials, increasing enrollment and improving care. They have found that moving clinical trials into the community hospital space increased revenue.

Ontario Shores Centre for Mental Health Sciences in Canada announced that it would offer free pharmacogenetic testing to eligible patients to improve outcomes. The testing is initially focused on improving the treatment of patients who are admitted with schizophrenia or schizoaffective disorder, with plans to expand the use of pharmacogenomics in behavioral health management.

Final Thoughts: Adopting Precision Medicine in Clinical Care is Essential

The more that genetic data is integrated into the EHR, the faster widespread deployment will occur. As clinicians find meaningful utility in genetic data, the importance of a strong precision medicine program shifts from a nice-to-have to a must-have. The key factor is how the EHR can leverage genetic data to improve patient outcomes.

As applications for genetic data evolve, an established genetic program becomes essential to improving physician satisfaction by empowering them with the advanced tools that they need to provide the best possible patient care.

Readers Write: When the Cloud Becomes the Attack Surface

March 18, 2026

By Brian McManamon

Brian McManamon, MBA is general manager of managed security and managed cloud services at Clearwater.


Healthcare organizations often talk about cloud as though it is a destination. In reality, for most hospitals, it has become an operating layer that keeps expanding.

That expansion did not usually happen through one formal strategy. It happened incrementally through SaaS adoption, remote access, vendor integrations, analytics tools, backup environments, and acquisitions. What many organizations now manage is not a clean cloud migration, but a hybrid environment made up of on-premises systems, cloud platforms, and third-party services that are tied together through identity and connectivity.

That matters because the cloud is no longer just part of the technology stack. In many environments, it has become part of the attack surface.

For many hospitals, “moving to the cloud” does not mean shutting down the data center and rebuilding everything as cloud-native. It usually means adding cloud services around existing operations. Clinical and business systems may still sit on-premises while identity, disaster recovery, remote access, analytics, and collaboration tools increasingly depend on cloud services. SaaS expands the footprint even further, often without being treated internally as part of the organization’s cloud environment.

That is where risk begins to grow quietly.

One of the most common misconceptions is that cloud is secure by default because the provider is secure. Major providers such as AWS, Azure, and Google Cloud invest heavily in securing their platforms. What they do not secure is each customer’s implementation.

Hospitals still own the responsibility for identity, configuration, access controls, logging, monitoring, and governance. If those areas are weak, cloud adoption can expand exposure faster than teams realize.

The opposite misconception is also common. Some organizations assume that keeping critical systems on-premises limits cloud risk. In practice, many of those same organizations have already adopted cloud identity, SaaS, remote vendor access, and external integrations. They have become hybrid whether they planned to or not. The difference is that they may not be managing that reality with a clear operating model.

Hybrid itself is not the failure. It is normal. In many cases, it is the natural result of smart teams making practical decisions over time.

A department adopts a new SaaS platform. IT centralizes identity. A cloud backup initiative begins. A new analytics platform is introduced. An acquisition brings another tenant, another domain, or another set of inherited tools. None of those decisions is inherently problematic. The problem is that governance and visibility often do not scale at the same pace.

That is when the cloud starts to become the attack surface.

The risk shows up first in identity. In hybrid healthcare environments, identities increasingly function as the control plane. Privileged roles accumulate. Service accounts remain active without clear ownership. Exceptions to MFA or conditional access persist longer than intended. Shared administrative access and standing privileges expand the potential blast radius of a single compromise.

An attacker no longer needs to move through the environment in the old ways if they can come through a valid account, exploit a policy exception, or take advantage of weakly governed permissions in a cloud-connected system.

The problem is compounded by visibility gaps. Many healthcare organizations do a strong job monitoring endpoints and network activity, yet cloud signals often remain fragmented. Logs may live across multiple consoles, subscriptions, tenants, and SaaS environments. Security teams may be watching the perimeter closely while missing critical changes in role assignments, application permissions, data shares, or service account behavior.

When those signals are not centralized and correlated, detection slows down. In some cases, it never happens at all.

Data sprawl adds another layer of risk. Healthcare environments generate copies of sensitive data for backups, archives, exports, analytics, and testing. Over time, protected health information can end up in more places than intended, sometimes with broader access and weaker protections than production systems. The issue is not only where the data started, but where it moved, who can reach it, and whether that movement is being governed consistently.

This is why cloud security in healthcare cannot be treated as a narrow infrastructure question. It is a governance question, an identity question, and ultimately a resilience question.

Cloud can improve resilience, but only when it is designed deliberately. Redundancy, scale, and operational flexibility can be real advantages. But those advantages weaken quickly if identity becomes a single point of failure, if disaster recovery exists only on paper, or if dependencies across cloud, SaaS, and legacy systems are not fully understood. In a hospital, resilience is not just uptime. It is the ability to support patient care when systems are under stress.

Good governance in that environment does not mean a large policy binder sitting on a shelf. It means a small number of clear, enforceable standards.

Hospitals need defined ownership for subscriptions, accounts, and services. They need baseline guardrails that prevent unsafe defaults. They need identity governance that prioritizes least privilege, manages non-human identities, and reviews exceptions regularly. They need enough centralized logging and alerting to see meaningful changes in the environment and act on them.

Most importantly, governance has to work in a 24/7 clinical setting. That means building models that support urgent care delivery without abandoning accountability. Exceptions may be necessary, but they should be time-bound, documented, owned, and reviewed.

The cloud is not the problem by itself. Unmanaged cloud is.

For healthcare leaders, one of the most useful next steps is a practical reality check. Inventory the tenants, subscriptions, service accounts, and privileged identities that are already in use. Confirm ownership. Review standing administrative access. Identify where visibility into cloud activity is missing. In most organizations, the attack surface has expanded gradually enough that no single decision created the problem. That is exactly why it deserves attention now.
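That inventory lends itself to simple automation. The sketch below assumes a hypothetical identity export format, not any specific cloud provider's API, and flags the two findings called out above: service accounts with no documented owner, and standing privileged access that has gone unused.

```python
# Sketch of the "practical reality check": flag stale service accounts and
# standing privileged access from an identity export. The record layout is a
# hypothetical CSV-style export, not any cloud provider's actual API.
from datetime import date, timedelta

accounts = [
    {"name": "svc-backup",   "type": "service", "owner": None,
     "privileged": True,  "last_used": date(2025, 6, 1)},
    {"name": "jsmith-admin", "type": "human",   "owner": "jsmith",
     "privileged": True,  "last_used": date(2026, 3, 1)},
]

def audit(accounts, today=date(2026, 3, 18), stale_after=timedelta(days=90)):
    """Return findings for a security team to review and assign owners to."""
    findings = []
    for a in accounts:
        if a["owner"] is None:
            findings.append(f"{a['name']}: no documented owner")
        if a["privileged"] and today - a["last_used"] > stale_after:
            findings.append(f"{a['name']}: standing privilege unused for >90 days")
    return findings

for finding in audit(accounts):
    print(finding)
```

Even a crude pass like this surfaces the accumulation the article describes; the harder work is keeping the export current across tenants and SaaS environments.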

In healthcare, the fundamentals still apply. Know your environment. Govern identity and access. Maintain visibility into critical systems and data flows.

The cloud becomes dangerous when organizations stop treating it as infrastructure and start assuming it will govern itself.

Readers Write: Healthcare’s Quietest Financial Problem

March 9, 2026

By Reed Liggin

Reed Liggin, RPh, MBA is co-founder and CEO of SlicedHealth.


Payer contracts are negotiated with extraordinary care. They often involve detailed financial modeling, language review, and extended debate among finance and managed care leaders who understand that the margin implications extend beyond the current fiscal year.

By the time an agreement is signed, the organization usually understands clearly what it expects to be paid and how those numbers fit into broader margin targets.

What is less clear, in many cases, is whether those expectations hold once the contract is operational. Negotiation is a focused event. Execution is an ongoing process that depends on claims configuration, payer adjudication logic, and a long list of small decisions that rarely receive executive attention. The contract may be sound, yet its performance in practice can drift in ways that are difficult to detect without deliberate review.

That is where healthcare’s quietest financial problem tends to live.

Most reimbursement misalignment is not dramatic. It does not look like a denial spike or a system outage. It looks like an almost-right payment, then another that is almost right, and then a few thousand more that are almost right. Those are the hardest errors to spot because they do not feel like errors.

The reason is simple: the contract is rarely a single rate. The contract is a set of conditions.

A surgical case might be paid under a base methodology, while the implant follows a different rule. A drug might be carved out if the NDC is present and paid under a different schedule if it is not. An outlier threshold might apply only after a cost calculation that depends on how charges were mapped and which revenue codes were recognized. A quality adjustment might be effective on paper in January, effective in the payer’s configuration in March, and not visible on the remittance in a way that makes it easy to reconcile.
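A rough sketch shows how quickly those conditions stack up. The rates, carve-out percentages, and field names below are invented for illustration, not drawn from any actual payer contract.

```python
# Sketch: "the contract is a set of conditions." The expected payment for one
# claim depends on which provisions fire. All rates here are invented.

def expected_payment(claim):
    total = claim["base_rate"]
    # Implant is paid under a separate rule, not the base methodology.
    if claim.get("implant_cost", 0) > 0:
        total += claim["implant_cost"] * 0.65              # cost-plus carve-out
    # Drug carve-out applies only when the NDC is present on the claim.
    if claim.get("drug_charge", 0) > 0:
        rate = 0.85 if claim.get("ndc_present") else 0.40  # schedule vs default
        total += claim["drug_charge"] * rate
    return round(total, 2)

claim = {"base_rate": 12000.00, "implant_cost": 4000.00,
         "drug_charge": 1000.00, "ndc_present": False}
print(expected_payment(claim))  # 12000 + 2600 + 400 = 15000.0
```

Notice that a single missing NDC flag changes the payment without producing anything that looks like an error on the remittance.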

Because the discrepancies are usually small, they tend to be absorbed into ordinary variance explanations. Margins fluctuate for many reasons. When reimbursement is directionally correct, it is easy to conclude that the contract is performing as intended. The temptation is to move on, because there is always something louder competing for attention. Over time, however, small differences across high-volume services can accumulate in ways that are more consequential than any single remittance would suggest.

The structure of most organizations reinforces this pattern. Contract negotiation is concentrated within managed care and finance leadership. Payment posting and denial management sit within revenue cycle. Financial performance is reviewed at an aggregate level.

Each function operates responsibly within its own scope. The precise alignment between negotiated language and adjudicated payment exists somewhere between those scopes, which makes it harder to see and harder to measure consistently.

Sampling can confirm that the world is not on fire, but does not reliably detect systematic drift across a high-volume population, especially when the drift is small and distributed across service lines, modifiers, and carve-outs. Those issues do not typically surface through a single appeal or an isolated audit finding. They reveal themselves gradually, if at all.
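The arithmetic behind that claim is simple. In the hypothetical population below, each claim is underpaid by a seemingly negligible 1.5 percent, an amount a sampled review would likely wave through, yet the population-level gap is substantial.

```python
# Sketch: why sampling misses small systematic drift. Each claim is paid
# "almost right"; individually the variance looks like noise, but summing
# expected-vs-paid across the full population surfaces the pattern.
# Claim volumes and rates are invented for illustration.

claims = [{"expected": 1000.00, "paid": 985.00} for _ in range(5000)]

def total_variance(claims):
    """Sum of (expected - paid) across the whole claim population."""
    return sum(c["expected"] - c["paid"] for c in claims)

per_claim = total_variance(claims) / len(claims)
print(f"Per-claim gap: ${per_claim:.2f}")                 # $15.00: easy to absorb
print(f"Population gap: ${total_variance(claims):,.2f}")  # $75,000.00
```

This is the case for comparing expected and actual payment on every claim rather than auditing a sample: the signal lives in the aggregate, not in any individual remittance.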

A practical constraint is that appeal timelines move quickly. Reconstructing the intent of a negotiated provision after months of operational activity is not simple. By the time a pattern becomes visible, the administrative effort required to pursue it may outweigh the perceived benefit, especially when the variance per claim appears modest. The economics of attention often favor larger, louder problems.

Healthcare finance is disciplined in many ways. Budgets are scrutinized. Forecasts are refined. Variance explanations are debated. But reimbursement accuracy, when it is mostly right, rarely commands the same intensity.

The difficulty is that reimbursement is foundational. When performance is directionally aligned but not exact, the difference can remain invisible inside aggregate results.

None of this implies widespread failure. It reflects the increasing complexity of reimbursement and the reality that operational systems interpret legal language through their own logic. Small gaps are easier to tolerate than large ones, and quiet gaps are easier to overlook than noisy ones.

In an environment where margins are narrow and expectations are high, quiet misalignment has a way of persisting longer than anyone would prefer. It does not demand attention. It rarely introduces itself. It simply continues in the background, one remittance at a time.

The contract ends with signatures. Its performance unfolds slowly, in details that are easy to assume and harder to verify. That space between intention and execution is where healthcare’s quietest financial problem tends to reside.

Readers Write: How AI is Helping Providers Navigate Regulatory Uncertainty

March 4, 2026

By Mindy Fortson

Mindy Fortson is COO of Experian Health.


Healthcare organizations have always had to navigate change. But lately, it may feel like the ground is constantly shifting, with the One Big Beautiful Bill Act (OBBBA) being the latest example.

While its full impact won’t be felt until next year, many surveyed providers say that they are not prepared and expect major challenges with eligibility and billing.

For many revenue cycle teams, this uncertainty may be creating real anxiety in day-to-day operations for staff who are already overextended. They must prepare for stricter eligibility checks, increased reporting mandates, and the likelihood of more patients cycling in and out of coverage. Providers are turning to AI not as a futuristic concept, but as a stabilizing force that brings consistency, clarity, and efficiency to increasingly complex operational demands.

Providers are asking these questions about OBBBA, and responsible AI-driven support may help answer them.

How Will Providers Know Which Patients Are About to Lose Coverage?

One of the most immediate concerns surrounding OBBBA is coverage volatility. The Congressional Budget Office anticipates that 11.8 million individuals could lose health insurance over the next decade due to policy changes, including new community engagement requirements and more frequent eligibility reviews. For providers, the challenge is not only that coverage may change, but that it may change quickly and unpredictably, creating instability at the front end of the revenue cycle.

That raises an urgent operational question: How do we identify coverage risk early, before it turns into denied claims or uncompensated care?

Many eligibility workflows are manual and fragmented and were not built for this level of change. Revenue cycle teams are already balancing staffing shortages, rising claim denial rates, and growing payer complexity.

AI solutions can help providers build more consistent operational foundations. They can identify accurate patient information, automate eligibility and insurance discovery checks, flag incomplete documentation, and lessen the burden of manual tasks.

Denials are likely to become a bigger pressure point under OBBBA, especially when coverage status changes between scheduling, registration, and billing. When teams are already stretched thin, even small documentation gaps can quickly turn into delayed reimbursement and rework. Operational consistency will be one of the most important safeguards providers can build in the years ahead.

Are Providers Ready for a Surge in Self-Pay and Uncompensated Care?

OBBBA is expected to increase the number of patients moving into self-pay categories, a group that already represents the highest share of bad debt write-offs. Loss of coverage doesn’t mean that patients stop needing care. But it does mean that providers face greater financial unpredictability.

Providers are asking: How do we maintain financial stability while patient responsibility grows?

Many providers need more reliable ways to understand patient populations, anticipate payment challenges, and engage patients with clearer payment options earlier in the process. AI-driven solutions can bring structure to this complexity by analyzing large amounts of patient data, demographic indicators, and billing patterns to support segmentation and reduce guesswork in collection strategies.

These tools can also identify potential charity eligibility and help providers better anticipate which patients may struggle to pay. Many providers need more predictable workflows for both staff and patients in an increasingly uncertain coverage environment.

How Will Providers Navigate Additional Operational Complexities?

OBBBA introduces a new layer of operational complexity. Each state will implement the provisions of this law differently. Providers will need to understand both state and federal rules to ensure compliance. Eligibility may hinge on employment hours, participation in training programs, or exemption status that can change month to month. Documentation may be incomplete, delayed, or interpreted differently across states.

For providers, the question becomes: How do we confirm coverage status with confidence when eligibility itself is more dynamic?

The risk isn’t only that patients lose coverage. It’s that coverage appears active at one point in the process and changes before a claim is adjudicated. That creates exposure to retroactive terminations, denials, and rework that strain staff.

Managing this kind of volatility requires more than manual verification. AI can help monitor eligibility timelines, flag missing or inconsistent documentation, and prompt earlier intervention when redetermination windows approach.
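As a sketch of what earlier intervention can mean in practice, the snippet below flags patients whose redetermination window opens before their next scheduled visit. The field names and the 30-day lead time are illustrative assumptions, not any product's actual logic.

```python
# Sketch of earlier intervention on eligibility timelines: flag patients whose
# redetermination is due soon, or due before their next scheduled visit.
# Field names and the 30-day lead time are illustrative assumptions.
from datetime import date, timedelta

patients = [
    {"id": "P1", "redetermination_due": date(2026, 4, 1),
     "next_visit": date(2026, 4, 15)},
    {"id": "P2", "redetermination_due": date(2026, 9, 1),
     "next_visit": date(2026, 4, 2)},
]

def flag_for_outreach(patients, today=date(2026, 3, 18),
                      lead=timedelta(days=30)):
    """Flag patients to contact before coverage can lapse mid-cycle."""
    return [p["id"] for p in patients
            if p["redetermination_due"] <= today + lead
            or p["redetermination_due"] <= p["next_visit"]]

print(flag_for_outreach(patients))
```

The value is in the timing: the same check run at billing, after the claim is adjudicated, arrives too late to prevent the rework.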

In addition, providers need access to broader, more complete data than a single insurance record. They will need to know the correct order of benefits if a patient has more than one insurance, and whether a patient who appears uninsured is likely to qualify for Medicaid. Eligibility may increasingly depend on data elements that providers have not traditionally needed to consider, such as employment status, volunteer activities, and income verification.

AI can help pull together these disparate data points and support more consistent front-end decision-making, especially when eligibility is dynamic and documentation requirements are evolving.

As implementation unfolds, operational consistency will depend on building workflows that can adapt to these requirements without adding unnecessary friction for staff or patients.

Preparing Now Means Building Stability into Core Workflows

Providers don’t need every answer today, but they do need to be asking the right questions:

  • Which patients may fall through coverage gaps?
  • How will self-pay growth change financial exposure?
  • Where are administrative processes most vulnerable?

In a time of constant change, providers are searching for stability and workflows that are clearer, more consistent, and less reactive. AI, applied thoughtfully and responsibly, can help bring that stability into the revenue cycle. This technology is one of the best ways to ease administrative strain and help staff focus on what matters most.

Readers Write: All I Needed to Know to Disrupt Healthcare I Learned from “Seinfeld,” the Epilogue: The Summer of George

March 4, 2026

By Bruce Brandes

Bruce Brandes, MBA is co-founder and board chair of WhaleHawk and CEO of Mindyra Health.


In 2014-15, I authored a seven-part blog series at the encouragement of Mr. HIStalk to reflect on my years of lessons learned in this industry through the satirical but surprisingly parallel lens of the greatest sitcom of all time.

Posts like “Do The Opposite,” “And You Want to Be My Latex Salesman,” and “Yada Yada Yada” were intended to reorient the mindset of how healthcare solution companies approach their go-to-market activities.

Over a decade later, like my TV friend Larry David wrapping “Curb Your Enthusiasm,” I felt compelled to pen this as a bit of an epilogue to my old HIStalk series, while also illuminating a next-generation path forward as we rethink commercial relationships in healthcare.


Unlike George Costanza, I was not special enough to get hired at PlayNow, nor was I Penske material. Instead, over the past 10 years, I’ve been fortunate to have had firsthand experience in growing transformational healthcare companies, including Livongo, Teladoc, Care.ai, and Stryker.  

Operating a healthcare organization has never been more difficult. Financial pressures, dizzying technological advances, workforce challenges, and daily policy uncertainty are among a litany of existential issues. Consequently, every solution company needs to completely reimagine how it discovers, approaches, engages, and closes new business. More importantly, the focus on value, outcomes, and building enduring relationships is paramount. Who knows more about enduring relationships than Jerry, Elaine, George, and Kramer?

Through “Seinfeld” wisdom, combined with my career journey, I’ve developed an understanding of how healthcare executives prioritize investments, navigate buying decisions, and set partnership expectations. Moreover, I’ve discovered the secrets, strategies, and tactics of successful solution companies and their most effective sales leaders and account reps.  

Go-to-market in healthcare takes too long and costs too much. Reps commonly prioritize the wrong accounts, engage at the wrong time, and make a pitch that sounds like everyone else’s and is more focused on what they want to sell than the problems their prospect seeks to solve.

Like George Costanza’s invention of the IToilet (only “Curb” loyalists may get that reference), the power of agentic AI and a treasure trove of digitized industry data are creating a better way to make your life easier.

I see many healthcare sales reps using ChatGPT, Claude, Gemini, etc. to help them conduct market and account research. My question is, does this actually help or hinder a solution company’s go-to-market success?

Are we simply accelerating unwanted outreach? Does rogue, individualized use of generic LLMs exacerbate the inconsistency of a company’s approach and messaging? Is decision-making in healthcare different enough to warrant a more customized approach?

I contend that using generic LLMs for some research is OK, but the findings are superficial and insufficient if you aspire to improve the overall ROI you are getting on your sales and marketing investment.

We must train LLMs to more deeply understand how selling and decision-making in healthcare is different from any other industry. Sales cycles are long because, more often than not, the optimism of a sales rep does not reflect the realism facing buyers. LLMs must be customized to create sales acceleration agents that are deeply trained in our industry dynamics, on each specific account, and on each individual decision maker, contextualized to the unique solution and best practices.

Three key agentic deliverables will ensure the focused, efficient path to growth every company seeks while enabling a more collaborative relationship with clients.

Know WHERE to Go

Is your go-to-market plan rooted in legacy marketing investments, dated market data subscriptions, and antiquated sales enablement tools? Smarter market segmentation must refine your ideal customer profile at a much deeper level than “academic medical centers” or “community hospitals with 250+ beds.” Real data intelligence is informed by patterns across an array of less obvious variables, such as operating metrics, financial trends, workforce dynamics, governance, leadership histories, and community influences, so you don’t waste time chasing accounts that will likely never make a buying decision.

Know WHEN to Engage

How well do you understand the priorities of your prospects and honestly assess your solution’s relevancy, respectfully not persisting when your offering is not a fit or the timing isn’t right? A custom healthcare LLM can continuously monitor tens of thousands of digital healthcare-specific data sources — across government reporting, podcasts, industry news, policy trends, videos, clinical journals, financial filings, and social media — and correlate those insights with the context of your value proposition. That allows you to be the first to make timely connections when a potential buyer would be most receptive to your outreach.
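Stripped to its essentials, that correlation step might look like the toy scoring pass below. The sources, keywords, and weights are invented for illustration; a production agent would use a trained model over live feeds rather than keyword matching.

```python
# Toy sketch of "know when to engage": score recent signals for an account
# against the themes in your value proposition. Everything here is invented
# for illustration (not a real account, source list, or scoring model).

VALUE_PROP_THEMES = {"nurse staffing": 3, "virtual care": 2, "margin pressure": 1}

signals = [
    {"account": "Example Health", "source": "earnings call",
     "text": "We continue to face nurse staffing shortages and margin pressure."},
    {"account": "Example Health", "source": "press release",
     "text": "New parking garage opens downtown."},
]

def timing_score(signals):
    """Sum theme weights across an account's recent signals."""
    score = 0
    for s in signals:
        for theme, weight in VALUE_PROP_THEMES.items():
            if theme in s["text"].lower():
                score += weight
    return score

print(timing_score(signals))  # higher score suggests a more receptive moment
```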

Know HOW to Win

Are all of your reps consistently engaging in a way that is hyper-personalized, but rooted in your proven best practices? Too often, companies lead with spam emails, unwanted LinkedIn messages, trade show chocolates, texts, and unsolicited calls that waste time and money while littering our industry and damaging your brand. Proper use of modern agents will create customized playbooks that guide informed, personalized conversations, along with organizational insights that demonstrate your diligence and expertise. That frees your best reps to manage more accounts and helps every rep perform more like your best ones.


George Costanza once warned that “A George divided against itself cannot stand.” Take heed, and rethink how engaging healthcare-specific, custom LLM-trained agents can reduce ineffective sales and marketing efforts to catalyze a new approach for growth, leading to a less-cluttered industry and better outcomes for all.

Readers Write: AI Can’t Feel Emotions, But It Can Be Designed to Care

March 4, 2026

AI Can’t Feel Emotions, But It Can Be Designed to Care
By Richard Mackey

Richard Mackey, MBA is CTO at CCS.


AI-assisted chronic disease management is becoming a reality. Some of the biggest AI companies have set their sights on healthcare with the launch of solutions like ChatGPT for Health and new personal health data management tools like those offered by Claude from Anthropic.

Chronic diseases like diabetes, heart disease, and depression require not just medical oversight, but consistent engagement, trust, and behavioral support. AI tools are starting to offer just that, both inside and outside of the traditional care environment.

Still, if those AI interactions feel cold, impersonal, or judgmental, they can drive disengagement, the opposite of what’s needed to improve long-term patient outcomes.

Done poorly, AI can amplify the very problems that it is supposed to solve. Done well, empathetic AI becomes a force multiplier, extending the reach of human care, building trust at scale, and helping people feel supported, even when interacting with a “machine.”

When Empathy Is a Design Challenge

Empathy in AI isn’t about simulating emotions or pretending to be human when it’s not. AI shouldn’t try to be human, but it does need a native understanding of the emotional context of the interaction and an ability to respond in a way that feels respectful, supportive, and authentic. In other words, empathy in AI is a design problem, one that spans data, UX, language, and intended purpose.

Consider the example of a patient managing type 2 diabetes. If a patient stops using their continuous glucose monitor, a typical automated system might flag it as noncompliance. But an empathetic AI agent that is trained not just to process the data but also to understand human behavior might recognize subtle signals in the data that indicate emotional burnout or socioeconomic barriers, and adjust the tone of outreach accordingly. That could mean offering reassurance instead of reminders, or escalating the case to a human clinician or social worker for follow-up.

Striking the right tone and balance in how the agent communicates, for example seeking to understand or offering encouragement, will make a meaningful difference in whether a patient reengages or shuts down.

The ROI of Empathy

In value-based healthcare, where providers and health plans are financially accountable for outcomes, empathetic AI that is embedded in chronic disease management workflows can have measurable impact. AI can use sentiment analysis or behavioral cues to help identify patients who are at risk of disengagement or decline, triggering proactive interventions from human outreach staff.
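The kind of behavioral-cue triggering described above can be sketched as a toy rule-based scorer. Everything below is an illustrative assumption, not clinical logic: the cue list, the weights, and the escalation threshold are invented for the example, and a production system would use a trained sentiment model rather than keyword matching.

```python
# Toy disengagement-risk scorer: flags a patient for human outreach when
# negative language and missed check-ins accumulate. Cues, weights, and
# thresholds are illustrative only, not clinically validated.
NEGATIVE_CUES = {"tired", "overwhelmed", "skip", "quit", "hopeless"}

def disengagement_risk(messages, missed_checkins):
    # Count negative-sentiment keywords across the patient's recent messages.
    cue_hits = sum(
        1
        for msg in messages
        for word in msg.lower().split()
        if word in NEGATIVE_CUES
    )
    # Language cues are weighted more heavily than missed check-ins here.
    score = cue_hits * 2 + missed_checkins
    return "escalate_to_human" if score >= 4 else "automated_followup"

action = disengagement_risk(
    ["I'm so tired of checking my glucose", "might just quit logging"],
    missed_checkins=3,
)
# "tired" and "quit" both match, so this patient is routed to a human.
```

The design choice worth noting is the output: the agent does not try to counsel the patient itself; it routes to a person, which mirrors the escalation pattern the article describes.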

AI can also handle routine administrative tasks with appropriate tone and timing and without clocking out at the end of an eight-hour shift, freeing up human clinicians to focus on complex, relationship-based care that fosters engagement and sustains motivation.

The result is fewer hospitalizations, higher therapy adherence, improved satisfaction scores, and ultimately, better chronic care experiences and better health outcomes at lower cost.

Designing for Trust in the Age of Automation

As AI becomes more embedded in the healthcare ecosystem, its ability to convey empathy in a transparent way must be a priority. Research has already shown that it’s possible, with human respondents identifying AI responses as more empathetic and engaging across scenarios ranging from crisis situations and cancer care to everyday communications from healthcare providers.

The consumer world is quickly operationalizing this approach, with companies like beauty brand Sephora and airline Qatar Airways scoring accolades for their AI assistants’ optimal blend of digital efficiency, personalization tools, and engagingly empathetic personality. As companies like OpenAI and Anthropic turn their attention to healthcare, they are likely to lean into a similar empathy-first approach to assist individuals with healthcare-specific tasks.

The key to success will be maintaining transparency and trust in the AI-powered healthcare ecosystem as we leverage the technology’s seemingly near-limitless potential. The bottom line is that we don’t need AI to have feelings, but we do need it to understand ours, especially when and where patients need support and care the most.

Readers Write: Healthcare’s Seasonal Surge is Upon Us. Is Your Health System Ready?

March 2, 2026

Healthcare’s Seasonal Surge is Upon Us. Is Your Health System Ready?
By Dusti Browning, RN

Dusti Browning, RN, MSN is VP of growth and client solutions for Conduit Health Partners.


Seasonal surges happen every year, and 2026 is proving particularly brutal. By the end of 2025, the flu had already been associated with 120,000 hospitalizations and 5,000 deaths.

The winter months often bring with them a tidal wave of respiratory viruses: influenza, RSV, and COVID. Clinicians expect them. But while these spikes in patient volume are predictable, too many health systems find themselves in a challenging supply-and-demand environment that can negatively impact patient care and the bottom line.

A recent report found that 60% of nurses are experiencing a significant uptick in patient volume and case complexity amid the current flu season. As seasonal surges collide with ongoing emergency department (ED) overcrowding and staffing shortages, health systems face mounting pressure to find scalable, practical solutions.

The national report surveyed 64 nurses, half working in triage and half in transfer centers, and found that 70% of nurses believe that offering 24/7 virtual nurse triage prevents unnecessary ED visits. In fact, additional industry data points to an ED avoidance rate of 72 to 76% over the past two years, meaning nearly three out of four triage encounters are resolved without an ED visit.

While hospitals and health systems can’t eliminate seasonal surges, they can anticipate them and implement systems that reduce strain.

Protecting System Capacity Remotely

The report found that the most frequent patient concerns during the seasonal surge include minor respiratory symptoms, medication management, chronic disease follow-up, and low-acuity infections. Around 75% of nurses report that remote solutions help manage these issues effectively. This is significant given the challenges facing health systems during seasonal surges. A separate study found that 35% of patients who present to an ED during the winter months wait four or more hours for a bed.

Safeguarding capacity in today’s EDs is an imperative, with stats from the Centers for Disease Control and Prevention (CDC) showing 42.7 ED visits per 100 people. As those numbers continue to increase, virtual nurse triage provides an alternative access point that is proven to reduce strain on health system EDs during seasonal surges.

Notably, the recent patient access and throughput report found that nearly one in three avoided emergency visits were associated with nurse triage after regular clinic hours. This demonstrates that real-time clinical access can help patients reach the right level of care at times when they are more likely to turn to the ED. The end result is improved overall access to care, better outcomes, and lower costs. A measurable decrease in staff burden and burnout further strengthens the impact.

Enhancing Patient Experience

When seasonal outbreaks occur, capacity is at a premium, but so is staffing. Burnout continues to be rampant in healthcare. A recent survey conducted by The Harris Poll of 1,504 frontline health care employees revealed that 55% are looking for job openings, interviewing, or planning to switch to a new role in the next year.

While AI and automation are primed to ease administrative burdens in the coming years, the reality is that patients and families in distress often need to speak with a human being. When staff are lacking and already under immense strain, patient experiences are negatively impacted. Lengthy wait times to get to a professional or a frustrating technology-first approach can cause patients to turn to the ED out of desperation. Virtual nurse triage offers a more accessible, clinically appropriate alternative.

The patient access and throughput report found that roughly one in four nurses witness or suspect worsened outcomes due to delays in access or coordination. The findings reinforce the efficacy of virtual nurse triage to address operational challenges of seasonal surges and improve patient outcomes and experiences.

Readiness When Demand Peaks

The CDC predicts that flu activity could continue to rise in the coming weeks. Seasonal surges don’t have to mean bottlenecks and burnout. The data show what works: nurse-first, telephone triage reduces visits to the ED, eases the operational burden of overcrowded waiting rooms, and reduces the risk of worsened outcomes.

As health systems prepare for the next seasonal wave, integrating nurse triage into access pathways isn’t just operational. It is essential for protecting capacity, easing staff strain, and improving patient care.

Readers Write: Lessons from the ChatGPT Health Debate

February 23, 2026

Lessons from the ChatGPT Health Debate
By Robert Stewart

Robert Stewart is CTO of Arbital Health.


A recent column by Geoffrey Fowler in The Washington Post that describes his disappointing experience with ChatGPT Health sparked discussion in the health IT community. While many remain optimistic about the long-term potential of platforms such as ChatGPT Health and Claude for Healthcare, Fowler’s piece highlights issues that healthcare leaders, clinicians, and technologists should examine carefully.

Variability and inaccuracy are not unique to large language model (LLM)-based systems. Many clinical diagnostics have known false-positive rates, and repeat testing is routine when results are unexpected. Clinicians themselves may reach different conclusions when presented with the same clinical information months later. Medicine has always operated within a probabilistic framework.

What is different with LLM-driven systems is their non-deterministic behavior when given the same input repeatedly. Identical prompts can generate materially different responses. Fowler demonstrated this when ChatGPT assigned his cardiac health scores ranging from a B to an F using the same underlying data. That level of variability can cause confusion or anxiety when applied to personal health interpretation.
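The grade-to-grade variability Fowler observed is a natural consequence of temperature-based sampling: at nonzero temperature, the same prompt yields a draw from a probability distribution, not a fixed answer. The following toy sketch illustrates this; the letter-grade logits and temperature values are invented for illustration and stand in for a real model's output distribution over a fixed health-score prompt.

```python
import math
import random

def softmax(logits, temperature):
    # Temperature rescales logits before normalizing; as T -> 0 this
    # collapses to argmax (fully deterministic output).
    if temperature == 0:
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

GRADES = ["A", "B", "C", "D", "F"]
LOGITS = [1.2, 1.0, 0.9, 0.8, 0.5]  # hypothetical preferences for ONE fixed prompt

def grade(temperature, rng):
    probs = softmax(LOGITS, temperature)
    return rng.choices(GRADES, weights=probs, k=1)[0]

rng = random.Random(42)
deterministic = {grade(0, rng) for _ in range(10)}   # identical every time
sampled = {grade(1.5, rng) for _ in range(30)}       # several distinct grades
```

With temperature at zero the same input produces the same grade ten times out of ten; at a higher temperature the identical "prompt" produces a spread of grades, which is exactly the B-to-F variability the column describes.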

Many consumer health AI tools are built on retrieval-augmented generation (RAG) architectures, in which the model is grounded using user-specific information such as medical records or wearable device data. Even when anchored to structured inputs, however, the LLM’s narrative interpretation can still vary, reinforcing the need for clinician oversight and appropriate guardrails when deploying these tools in consumer health settings.
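The grounding step in a RAG pipeline can be sketched minimally as: retrieve the user-specific records most relevant to the question, then inject them into the prompt. In this toy version, naive keyword overlap stands in for the embedding-based vector search a real system would use, and the record contents and prompt wording are hypothetical.

```python
def retrieve(query, records, k=2):
    # Naive keyword-overlap scoring; a real RAG system would use
    # embeddings and a vector index instead.
    def score(rec):
        return len(set(query.lower().split()) & set(rec.lower().split()))
    return sorted(records, key=score, reverse=True)[:k]

def build_grounded_prompt(question, records):
    # Ground the model by pasting the retrieved records into the prompt,
    # instructing it to answer only from that context.
    context = retrieve(question, records)
    lines = "\n".join(f"- {r}" for r in context)
    return (
        "Answer using ONLY the patient records below.\n"
        f"Records:\n{lines}\n"
        f"Question: {question}"
    )

records = [
    "2026-01-10 HbA1c 7.9 percent",
    "2026-01-12 blood pressure 142/91",
    "2025-11-02 flu vaccination administered",
]
prompt = build_grounded_prompt("What was the most recent HbA1c result?", records)
```

Note that even with this grounding in place, the generation step that consumes `prompt` remains non-deterministic, which is why the article argues grounding alone does not remove the need for clinician oversight.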

It’s also important to recognize the potential psychological impact of these tools. Researchers such as Eric Topol caution against indiscriminate screening of asymptomatic individuals because it often produces “incidentalomas” (findings that lead to unnecessary follow-up testing or treatment without improving outcomes). Consumer AI health scoring systems risk amplifying this phenomenon by continuously surfacing probabilistic interpretations in the absence of appropriate clinical context.

Wearable Data Challenges

Wearable device data introduces another layer of complexity. Anyone who works with longitudinal wearable datasets understands that the signal-to-noise ratio is inconsistent. Devices are removed for charging, replaced every few years, or switched across vendors that have different calibration baselines. Environmental and behavioral factors such as travel, altitude changes, illness, stress, or sleep disruption can produce statistically significant physiological changes that an AI system may misinterpret without broader context.

Jessilyn Dunn, PhD and her lab at Duke University have conducted extensive research that uses machine learning and statistics to extract valuable insights from consumer wearables, but the work remains challenging. Even highly targeted machine learning applications, such as arrhythmia detection platforms developed by companies like AliveCor, still operate with non-trivial false-positive rates. Wrapping a general-purpose LLM around wearable data without similarly rigorous modeling layers is unlikely to deliver clinically reliable outputs.

Security and Privacy Considerations

As consumer AI health tools evolve, security becomes increasingly important. Anyone who uses ChatGPT, particularly those who are sharing sensitive health information, should enable multi-factor authentication (MFA), which is one of the most effective controls for reducing account compromise risk.

Users should also recognize an important regulatory distinction. Information that is entered into consumer AI services is generally not protected under HIPAA. OpenAI’s enterprise offering, ChatGPT for Healthcare, is designed for HIPAA-covered environments and supports Business Associate Agreements (BAAs), but consumer versions operate under different legal frameworks.

The Takeaway for Health IT Leaders

The lesson from Fowler’s experience is not that consumer health AI lacks value, but that context, governance, and clinical integration matter. Non-deterministic systems that interpret noisy consumer data can easily generate variable outputs that users may misunderstand as clinical conclusions rather than probabilistic insights.

For health systems, payers, and digital health innovators, the near-term opportunity lies in combining LLM interfaces with validated predictive models, strong clinical workflow integration, and transparent communication about uncertainty. Without those guardrails, even well-intentioned consumer health AI tools risk creating confusion rather than clarity.

Readers Write: Doing Everything For the Patient, Not To the Patient

February 23, 2026

Doing Everything For the Patient, Not To the Patient
By Nassib Chamoun

Nassib Chamoun, MS is founder, president, and CEO of Health Data Analytics Institute.


“Do as much as possible for the patient and as little as possible to the patient.”

That single sentence, written by Bernard Lown, MD in “The Lost Art of Healing,” should serve as a universal guide to thinking about medicine, caregiving, and what it truly means to heal. Dr. Lown was my mentor beginning in my early 20s and remained a close friend until his death in 2021 at age 99. He was decades ahead of his time. He believed that medicine should integrate scientific rigor with moral imagination, and that clinical excellence without compassion is incomplete care.

Today, his words feel less like a reflection and more like a challenge. Our population is aging rapidly. Older adults are the fastest-growing consumers of healthcare services.

As more patients approach the later stages of life, the central question facing clinicians, health systems, and policymakers is not whether we can do more, but rather if doing more truly serves the patient. Increasingly, the evidence suggests that quality of life, not simply quantity of life, must be the defining outcome.

This is not a new conversation. In 1974, Balfour Mount, MD, who is widely regarded as the father of palliative care in North America, established the first hospital-based palliative care unit at Montreal’s Royal Victoria Hospital. Since then, the field has grown steadily. Decades of research demonstrate improvements in symptom control, patient and family satisfaction, alignment of care with patient goals, and, in many cases, lower healthcare utilization and costs.

More recently, the World Health Organization issued a call to action urging health systems to expand palliative care access, not only for humanitarian reasons, but also as a sustainable response to the use of our healthcare resources.

Organizations such as the Center to Advance Palliative Care (CAPC) have worked to standardize best practices and train clinicians to deliver high-quality, interdisciplinary palliative care across settings. Leading physician researchers and ethicists have published extensively in peer-reviewed journals, academic texts, and mainstream media.

Despite this robust evidence base, many patients and families still experience end-of-life care as a stark binary: aggressive inpatient interventions on one side, or hospice and “giving up” on the other. Why does this false choice persist?

For me, this question is no longer theoretical. It is deeply personal. As my parents age, I have watched them navigate serious illness, both at home and in the hospital. Again and again, I have seen a system that is reflexively oriented toward intervention — more procedures, more monitoring, and more escalation.

The intent is usually good. But too often the outcome is suffering, including physical discomfort, emotional distress, and a loss of agency at precisely the moment when patients need it most. Where is palliative care in these situations?

End-of-life care should not be an either-or proposition. It should not require patients to choose between life-prolonging treatment that may diminish quality of life or dying at home without support.

Palliative care belongs alongside disease-directed treatment, especially during hospitalizations, where it can provide expert symptom management, clarify goals of care, support families, and guide thoughtful transitions home when appropriate.

I have seen the power of this model first hand. Palliative-focused hospitalizations can be transformative, not only for patients who experience relief from pain and fear, but also for caregivers who gain reassurance, guidance, and partnership. This approach preserves dignity, respects patient values, expands hospital capacity and access, and makes more responsible use of limited healthcare resources. Most importantly, it restores humanity to care.

For me, the conclusion is clear. When possible, our loved ones should not die in hospitals. They also should not have to forgo care, comfort, or hope.

To palliative care clinicians, healthcare leaders, policymakers, advocates, and anyone who has walked this path with someone they love, let us build a healthcare system that truly does everything for the patient, not to the patient. Compassion and evidence are not competing priorities. Together, they form the highest standard of care.

Readers Write: What a Modern Application Managed Services Model Should Deliver

February 23, 2026

What a Modern Application Managed Services Model Should Deliver
By Scott Gildea

Scott Gildea, MBA is EVP of client delivery for Optimum Healthcare IT.


For years, application managed services in healthcare has been treated as a singular staffing solution. When teams were short-handed or roles went unfilled, organizations added overseas resources to keep systems running. That approach worked until the environment changed.

Today’s healthcare landscape is more complex than ever. EHRs, ERPs, and enterprise platforms are deeply connected to patient care, revenue, and operations. Downtime is no longer just an inconvenience, it is a risk. At the same time, IT teams are burned out and being asked to support transformation while maintaining stability.

In this environment, application managed services cannot be about coverage alone. They must deliver accountability, consistency, and operational confidence.

This is the Moment for Application Managed Services

Healthcare organizations are at a dramatic inflection point in IT. Some of the biggest reasons for this include:

  • Mounting pressure surrounding increasing costs, stagnant budgets, and fluctuating reimbursement rates.
  • Socioeconomic pressures, such as increasing prices.
  • Downward pressure from health system executives to be more efficient and forward-thinking.

Application managed services must keep pace with the expedited evolution of technology in healthcare. Change is here for most organizations, whether it takes the shape of AI, mergers and acquisitions, or increasing socioeconomic pressures.

Health systems are no longer asking whether they need managed services. They are asking which models will actually support their organizations over the long term. The answer lies in delivery models that are built specifically for healthcare, designed for accountability, and focused on the people who keep these systems running every day.

What a Modern Application Managed Services Model Should Deliver

Health systems are not looking for another vendor. They are looking for a delivery model that they can rely on every day, not just during go-lives or major initiatives. Traditional approaches often fall short.

What organizations need now is a managed services model that is explicitly built for healthcare enterprise applications, operates as a genuine extension of the internal team, and has clear ownership and shared accountability.

A modern application managed services solution should answer a few basic questions:

  • Who owns the day-to-day operations?
  • How are issues identified before they become incidents?
  • How is performance measured and improved over time?
  • How does the model scale without disrupting internal teams?
  • Will this allow us to keep up with the ever-changing landscape of health IT, including EHR updates, AI advancements, and more?

When managed services are designed well, they reduce operational noise. Leaders spend less time reacting and more time planning. Internal teams stay focused on strategy and improvement instead of constant firefighting. That does not happen by accident. It requires healthcare-specific experience, disciplined delivery, and a model that is built for complex enterprise environments.

Readers Write: Medicare Goes All In on Value-Based Care

February 16, 2026

Medicare Goes All In on Value-Based Care
By Eugene Gonsiorek, PhD

Eugene Gonsiorek, PhD is VP of clinical regulatory standards for PointClickCare.


If there were any doubts about Medicare’s commitment to value-based care, there shouldn’t be any longer.

Abandoning its former model of rolling out value-based care (VBC) programs one at a time, the Centers for Medicare and Medicaid Services (CMS) between March and December 2025 announced nine new or proposed programs and modifications to five existing programs – an unprecedented pace.

The rush of new programs and their concentrated timing signal that CMS is aligning Medicare around VBC to a greater degree than ever before. This is good news for organizations that have been working toward this end and a prompt for those who haven’t made as much progress.

The New Medicare Programs

Let’s take a closer look at the new and proposed programs. 

  • ACCESS (Advancing Chronic Care with Effective, Scalable Solutions). A voluntary, 10-year model testing outcome-aligned payments for measurable clinical improvements using technology-supported care for chronic conditions such as hypertension, diabetes, musculoskeletal pain, and behavioral health.
  • WISeR (Wasteful and Inappropriate Service Reduction). Launched in mid-2025, this model tests ways to reduce unnecessary services and accelerate prior authorization while safeguarding patients and taxpayers against low-value care.
  • GUARD (Global/Universal Accountability in Drug Pricing) and GLOBE (Global Outcomes in Benchmarking and Equity). Proposed mandatory models that aim to test international benchmark-based adjustments to Medicare Part D and Part B drug rebate and pricing systems to help address high drug costs.
  • Ambulatory Specialty Model (ASM). Finalized as a mandatory model beginning in 2027 that holds certain specialists accountable for quality, cost, and care coordination outcomes.
  • LEAD (Long-term Enhanced ACO Design). Announced as the next generation of accountable care organization models, a 10-year design intended to better support small, independent, and rural providers following ACO REACH (Accountable Care Organization Realizing Equity, Access, and Community Health).
  • BALANCE (Better Approaches to Lifestyle & Nutrition). Announced alongside GUARD and GLOBE, this voluntary model is intended to align manufacturers, state Medicaid agencies, and Part D plans to improve metabolic health through GLP-1 access plus lifestyle support, with testing concluded by 2031.

Across these models, several common design features stand out. Time horizons are longer, often extending eight to 10 years. Payment is increasingly tied to measurable outcomes rather than process compliance. Accountability extends beyond primary care into specialty care and pharmaceuticals. In select areas, CMS is requiring mandatory participation to achieve broad system impact.

The ACCESS model illustrates how CMS expectations are evolving. A voluntary 10-year initiative, ACCESS ties payment to demonstrable improvement in chronic conditions such as hypertension, diabetes, musculoskeletal pain, and behavioral health. The focus is no longer service volume or short-term utilization metrics, but sustained clinical outcomes.

Similarly, the WISeR model reframes inappropriate utilization as both a quality failure and a fiscal risk. By targeting low-value services and streamlining prior authorization, WISeR signals CMS’s growing willingness to intervene earlier in care decisions. The goal is not simply to manage spending after it occurs, but to prevent waste before it happens.

Together, these models reflect a clear shift from utilization-based proxies toward explicit accountability for results.

Specialty Care and Pharmaceuticals Move to the Center

Perhaps the clearest departure from earlier value-based care efforts is CMS’s expansion of accountability into specialty care and drug pricing, areas historically insulated from performance-based payment.

The finalized ASM, set to begin in 2027, makes participation mandatory for selected specialists and holds them accountable for quality, total cost of care, and care coordination. This challenges the long-held assumption that VBC is fundamentally a primary care endeavor. It also elevates downstream utilization, including post-acute care, from a secondary concern to a central performance variable.

At the same time, the proposed GUARD and GLOBE models are CMS’s most direct effort to apply value-based principles to pharmaceutical spending. By testing international benchmarking approaches in Medicare Parts B and D, CMS is extending accountability into pricing structures that have traditionally been governed by statute rather than performance expectations.

Long-Term Accountable Care and Prevention as Structural Bets

The LEAD model underscores CMS’s recognition that accountable care requires stability, not churn. By extending participation horizons to 10 years and focusing on small, independent, and rural providers, LEAD acknowledges that organizational transformation and sustained downside risk cannot be achieved on short timelines.

In parallel, the BALANCE model reflects CMS’s growing emphasis on prevention and upstream investment. By aligning manufacturers, state Medicaid agencies, and Part D plans around GLP-1 access combined with lifestyle and nutrition support, BALANCE tests whether earlier intervention in metabolic disease can produce durable improvements in outcomes and spending. By pairing pharmaceutical access with behavioral support, CMS is testing integrated solutions rather than isolated interventions.

The Effects on Patients and Providers

These models collectively raise the bar for providers. Financial accountability is more robust. Timelines are longer. Expectations for care coordination and performance improvement are higher. Independent practices, rural providers, and specialists, groups historically less exposed to mandatory value-based arrangements, are now central to CMS’s policy design.

For patients, CMS’s stated objectives are clear: earlier intervention, fewer unnecessary services, better chronic disease control, and lower drug costs. Whether these outcomes are realized will depend less on policy intent than on execution, particularly provider engagement and the ability to manage care across settings.

From Experimentation to System Design

Taken together, the new model announcements signal that CMS is moving beyond experimentation toward system design. The concentration of releases, the expanded mandatory participation, and the consistent emphasis on outcomes and cost containment all point in the same direction.

CMS is no longer asking whether VBC works. It is redesigning Medicare on the assumption that it must.

As these models move from proposal to implementation, they will shape payment policy, care delivery structures, and provider participation in Medicare well into the next decade. Organizations should prepare themselves for a system in which value-based accountability is no longer optional, but the norm.

Readers Write: Open Access in Healthcare: What TEFCA Got Right, Where It’s Stuck, and What Comes Next

February 16, 2026

Open Access in Healthcare: What TEFCA Got Right, Where It’s Stuck, and What Comes Next
By Robin Monks

Robin Monks is CTO at Praia Health.


If you’ve ever moved to a new city and tried to get your medical records transferred to a new provider, you already understand the problem that the Trusted Exchange Framework and Common Agreement (TEFCA) is trying to solve. In theory, health data should follow you. In practice, it often doesn’t.

TEFCA is the federal government’s most ambitious attempt to date at fixing nationwide health information exchange. Mandated by the 21st Century Cures Act and formally launched in late 2023 when the first Qualified Health Information Networks (QHINs) were designated, TEFCA aims to be the “interstate highway system” for health data, allowing providers, payers, and patients to share information regardless of which network they are on.

After two years of operation, there’s a lot to like about what TEFCA has accomplished. More than 70,000 healthcare locations are now connected through TEFCA, and Epic reported that 1,000 hospital customers have transitioned to TEFCA. Carequality, a framework connecting over 50 networks, 600,000 care providers, and 4,200 hospitals, is actively aligning its policies with TEFCA.

The framework has also expanded beyond its initial treatment-focused exchange. TEFCA now supports data exchange for payment, healthcare operations, government benefits determination, individual access, and public health purposes.

Perhaps most importantly, TEFCA is creating a universal floor for interoperability. Before TEFCA, a health system that wanted to exchange data nationally had to join multiple networks and maintain dozens of point-to-point connections. TEFCA simplifies that into a single participation model. For smaller practices and rural hospitals that couldn’t afford the overhead of managing multiple network memberships, this is a meaningful reduction in cost and complexity.

But TEFCA’s scale means that providers are now responding to queries from organizations they’ve never interacted with before. When a requester says they’re querying for treatment purposes and the responder disagrees that the request qualifies as “treatment” under HIPAA, you get what the ASTP calls an “information exchange impasse.”

This lack of trust makes it easy for providers to justify not automatically responding to TEFCA requests, even individual access requests with a verified identity attached. Information blocking remains a persistent and thorny issue. TEFCA participants who interfere with QHIN choice now risk violating the federal information blocking rule, with potential Medicare payment disincentives, but the cultural shift from “default deny” to “default share” is slow.

Then there’s the FHIR question. TEFCA launched using IHE-based document exchange, a 1990s-era architecture that predates smartphones and modern web standards. This was a pragmatic choice to minimize disruption and build on the existing exchange infrastructure (IHE-based exchange still represents enormous transaction volume annually).

But it means that the initial TEFCA experience is document-centric, returning C-CDA documents rather than discrete, FHIR-based data. The HTI-5 proposed rule from December 2025 signals a strong push toward FHIR-based APIs, but the gap between where TEFCA is today and where modern application developers need it to be is real. Companies that build on FHIR and OIDC are watching this closely.
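To make the contrast concrete, here is a minimal sketch of why discrete FHIR data is easier for application developers to work with than a document blob. The sample resource and values are invented for illustration; this is not TEFCA-specific code, and a real query would hit an authorized FHIR R4 endpoint.

```python
import json

# A discrete FHIR Observation, shaped the way a FHIR R4 REST API returns it.
# (Sample data for illustration; a real query would call an authorized
# endpoint, e.g. GET [base]/Observation?patient=...&code=...)
fhir_response = json.dumps({
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org",
                         "code": "4548-4",
                         "display": "Hemoglobin A1c"}]},
    "valueQuantity": {"value": 6.8, "unit": "%"},
})

obs = json.loads(fhir_response)
# Discrete data: the value is directly addressable, no document parsing needed.
display = obs["code"]["coding"][0]["display"]
value = obs["valueQuantity"]["value"]
unit = obs["valueQuantity"]["unit"]
print(f"{display}: {value}{unit}")

# With document-centric exchange, the same fact arrives buried inside a
# C-CDA XML document, and the application must locate the right section
# and entry before it can use the value at all.
```

The difference matters most at the application layer: a patient-facing app can bind `valueQuantity` straight to a trend chart, while a C-CDA payload first requires section-level XML parsing and reconciliation.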

The regulatory environment is also in flux. That same HTI-5 proposed rule would remove the TEFCA manner exception, a provision that allowed TEFCA participants to limit data exchange to only through TEFCA. The administration is signaling that using information blocking exceptions to incentivize TEFCA participation may be unnecessary, which is an interesting stance that simultaneously shows confidence in TEFCA’s trajectory and a desire to not disadvantage non-TEFCA exchange networks.

TEFCA has achieved enough adoption to be taken seriously, but not enough to be taken for granted. Here’s what needs to happen for it to reach its potential:

  • FHIR needs to be a first-class citizen, not a roadmap item. The healthcare technology ecosystem has moved to FHIR. App developers, patient-facing platforms, and clinical decision support tools all expect FHIR APIs. Until TEFCA’s QHIN-to-QHIN exchange natively supports FHIR alongside IHE, there will be a gap between what TEFCA enables at the network level and what the market needs at the application level.
  • Trust needs to be engineered, not assumed. The interpretive disagreements around treatment definitions and provider qualifications aren’t going to resolve themselves through goodwill alone. TEFCA’s governance needs to produce clear, specific guidance that participating organizations can implement without extensive legal review. The SOP updates from January 2026 are a step in the right direction, but there’s more work to be done.
  • Patient transparency and choice must be central. Individual Access Services (IAS), the mechanism by which patients can access their own data through TEFCA, is likely to be one of the fastest-growing use cases. The patient access market is forecast to reach $4.16 billion by 2032. But IAS also carries the highest risk of information blocking complaints, because patients have the right to choose any IAS provider, regardless of their provider’s QHIN. Making this work requires a level of patient-facing transparency that healthcare hasn’t historically been great at. We also need to expand this beyond reading data to performing actions in target EHRs.
  • Enforcement has to be real. TEFCA operated for its first year as an entirely voluntary framework. The increasing enforcement posture around information blocking and the integration of TEFCA obligations into Medicare compliance programs is changing the calculus. But voluntary frameworks succeed when the incentives to participate outweigh the friction. Right now, the friction is still high for many organizations, particularly smaller ones. Last year we were promised that we would start seeing strict enforcement on information blocking, but so far we’re not seeing examples of enforcement from CMS.

TEFCA is doing something genuinely important. It is establishing the principle that health data should be exchangeable at a national scale, with a common set of rules, as a baseline expectation rather than a special achievement. For health systems that are thinking about their consumer experience strategy, and all should be, the ability to access data from across a patient’s entire care journey is critical.

The dream of open access in healthcare is within reach, but the road from well-intentioned definitions to systems that actually run and work where patients need them is a slow one.

Readers Write: AI in Healthcare Revenue Cycle: Linking Automation to Financial Stability

February 16, 2026 Readers Write

AI in Healthcare Revenue Cycle: Linking Automation to Financial Stability
By Inger Sivanthi

Inger Sivanthi, MBA is founder and CEO of Droidal.


Five or six years ago, revenue cycle performance was discussed mostly in operational terms. Leaders reviewed denial rates, days in accounts receivable, and staffing productivity. If those indicators were steady, the assumption was that the organization was financially sound. The work was seen as administrative execution rather than financial strategy.

That framing feels incomplete now. Reimbursement patterns have become less predictable. Payer interpretations vary, even within the same plan category. Documentation standards evolve quietly, and what cleared last quarter may stall this quarter. Nothing feels catastrophic, yet the margin for error has narrowed.

When timing becomes inconsistent, finance feels it quickly. Forecast models widen. Cash flow conversations become more cautious. Growth initiatives are evaluated with an extra layer of scrutiny. Revenue cycle management is no longer operating in the background. It is influencing financial confidence.

Automation Solved the Obvious Friction

Healthcare organizations did not stand still over the past decade. Eligibility workflows were automated. Coding tools became more sophisticated. Electronic remittance reduced manual posting errors. These investments improved speed and removed visible inefficiencies.

Yet the deeper issue remained. Denials continued for reasons that were not always procedural. Appeals absorbed experienced staff time. Forecasting models leaned on historical trends that assumed payer behavior would remain relatively stable. That assumption is harder to defend today.

Automation follows instructions. It does not interpret shifts. It executes rules consistently, but does not recognize when those rules are interacting differently in a changing environment.

Earlier Pattern Recognition Is Changing the Dynamic

Artificial intelligence brings a different capability. By reviewing documentation details, coding sequences, authorization timing, and payer response history together, it begins to surface combinations that tend to struggle. Those combinations are not always obvious. They emerge through repetition.

When risk is identified before submission, teams can intervene before delay becomes inevitable. Preventing a denial is financially different from correcting one. The time saved compounds quietly. Over several quarters, even modest improvements in first-pass acceptance begin to influence working capital stability.

The benefit is not perfection. Healthcare reimbursement will never be perfectly predictable. The benefit is fewer unexpected swings and tighter confidence intervals around cash timing.
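As a toy illustration of the pre-submission screening described above, here is a minimal sketch. The feature names, weights, and threshold are invented for the example; real systems learn these from historical payer response data rather than hand-coding rules.

```python
# Toy pre-submission denial-risk screen. The flags and weights below are
# invented for illustration; production systems learn them from historical
# documentation, coding, authorization, and payer response data.
RISK_WEIGHTS = {
    "missing_prior_auth": 0.45,      # authorization not on file
    "payer_denial_history": 0.30,    # this payer/code combo denied recently
    "late_documentation": 0.15,      # notes signed after claim prepared
    "unusual_modifier_combo": 0.10,  # modifier pairing rarely seen together
}

def denial_risk(claim: dict) -> float:
    """Score 0..1: sum the weights of whichever risk flags are present."""
    return sum(w for flag, w in RISK_WEIGHTS.items() if claim.get(flag))

def needs_review(claim: dict, threshold: float = 0.4) -> bool:
    """Route high-risk claims to a human before submission."""
    return denial_risk(claim) >= threshold

claims = [
    {"id": "A100", "missing_prior_auth": True, "payer_denial_history": True},
    {"id": "A101", "late_documentation": True},
]
flagged = [c["id"] for c in claims if needs_review(c)]
print(flagged)  # ['A100']
```

The point of the sketch is the workflow, not the model: scoring happens before submission, so the high-risk claim is corrected while correction is still cheap, which is exactly the financial difference between preventing a denial and appealing one.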

The Small Variations That Shape Margin

Revenue loss is rarely dramatic. It builds slowly. A modifier applied differently between departments. A service level coded conservatively out of habit. Contract language interpreted with slight variation across facilities. Individually, these instances appear manageable. In aggregate, they influence performance more than most teams realize.

AI systems reviewing documentation and billing data together can detect these repeated inconsistencies more consistently than manual review alone. This does not remove the need for experienced revenue leaders. It simply directs their attention toward areas where exposure is concentrated.

That shift in focus strengthens margin discipline without creating additional administrative layers.

From Reporting History to Informing Strategy

Traditional dashboards tell organizations what has already happened. They summarize billed charges, denials, and collections. That information is necessary, but it is reactive by design. By the time a pattern appears clearly in retrospective reporting, the financial impact has already occurred.

Predictive modeling changes that posture. When internal performance data is combined with payer response behavior, reimbursement timing becomes easier to estimate within a reasonable range. Forecasts still require judgment, but the range narrows. Leadership discussions feel less defensive and more deliberate.

Revenue cycle management begins influencing forward planning rather than simply documenting past outcomes.

Operating Within Real Workforce Limits

Revenue cycle staffing remains tight across the industry. Seasoned revenue professionals are hard to come by. Even when you hire, the ramp-up period slows momentum. For many teams, expanding staff just isn’t practical right now.

Intelligent prioritization helps address this reality. When higher-risk claims surface earlier and larger-dollar exposures are flagged sooner, teams allocate effort more intentionally. The objective is not workforce reduction, but resource precision. Protecting margin increasingly depends on where attention is placed, not simply how many people are assigned.

The Shift Has Been Gradual, Not Dramatic

There was no single moment when artificial intelligence transformed revenue operations. The change has been incremental. Organizations recognized that efficiency alone did not insulate them from variability. Earlier visibility, more focused intervention, and steadier forecasting gradually reshaped how revenue risk is managed.

Healthcare reimbursement will continue to evolve, and complexity will remain part of the system. Artificial intelligence does not remove that complexity. It improves how quickly patterns are recognized and how steadily leadership responds. In that sense, revenue cycle management has moved closer to financial strategy, and predictability has become as valuable as productivity.

Readers Write: Virtual Nursing Thrives When Thoughtful Design Guides Implementation

February 9, 2026 Readers Write

Virtual Nursing Thrives When Thoughtful Design Guides Implementation
By Christine Gall, RN, DrPH

Christine Gall, RN, DrPH, MS is chief nursing officer of Collette Health.


Virtual nursing has quickly evolved into a force multiplier capable of addressing the top pain points affecting care delivery, operations, quality, and patient experience. But as more health systems explore this model, outcomes have varied widely. Some organizations report measurable improvements in documentation time, throughput, retention, and workload relief. Others struggle to see benefits or encounter frustration at the bedside.

The difference rarely comes down to technology alone. It comes down to design. Successful virtual nursing programs begin with clear-eyed assessment. What problem are we trying to solve first? Throughput congestion? Night shift support? Documentation burden? The strongest programs anchor the initial design to a significant operational issue that is specific, measurable, achievable, and relevant.

Of equal impact is the identification of a leader and team that are ready for the responsibility of substantial workflow redesign. Virtual nursing models are more likely to succeed and scale when both factors are addressed and when the initial focus is narrow and well defined, setting up an iterative strategy that supports program expansion and scale over time.

Virtual nursing is also capable of delivering powerful, longer-term benefits like improved staff resilience and nurse retention, but those gains take longer to materialize. Programs that try to solve multiple issues at launch often struggle, while those that sequence thoughtfully and use data-driven rapid-cycle improvement to continually monitor success are better positioned to scale.

When organizations run into difficulty, it generally involves a failure to define attainable goals, a gap in stakeholder perception that creates barriers to acceptance and adoption of new workflows, and/or a failure of the new work processes to address the areas of concern without creating new burdens. In my experience, three design choices consistently determine whether virtual nursing lightens workload or adds friction:

Task Clarity and Workload Optimization

For bedside nurses, the value of virtual nursing is measured in minutes of administrative burden reduced and the expansion of impactful time spent with their patients. Programs succeed when they clearly define which tasks are moving from bedside to virtual roles. That may include time-intensive admission, discharge, and patient education activities, care coordination, and focused clinical oversight. But decisions regarding role and scope of the virtual nurse must be explicit.

When the virtual nurse’s role is not well defined and understood by the entire team, bedside teams experience little relief, and sometimes more duplication. A symptom of poor task clarity is an increase in the need for communication between the virtual and bedside staff. Well-run virtual nursing initiatives build automated methods of communication directly into their workflows rather than requiring one-off, manual communication activities. Real value comes from task transfer, not task shadowing.

When programs invest in this level of clarity, bedside nurses increasingly recognize the impact, and barriers to adoption are mitigated.

Workflow Integration, Not Overlay

Many early virtual nursing implementations struggled because virtual workflows were created as parallel processes rather than as novel, integrated workflows. If virtual nurses document in separate systems, communicate through separate channels, or escalate through ad-hoc pathways, the bedside becomes the bridge between worlds, an experience that likely creates additional burden.

Integration, by contrast, means shared communication pathways, aligned documentation practices, clear escalation rules, and participation in unit workflows rather than operating in parallel but separate processes. When virtual nurses are embedded operationally, lines of workflow delineation are crisp and do not create new burdens for communication, coordination, or clarification.

Shared Governance and Co-Design with the Bedside

Virtual nursing is as much a cultural change as an operational one. How it is introduced matters. When bedside nurses are asked to adopt a model that they did not help shape, skepticism is a rational response. The programs that thrive invest in shared governance, inviting bedside teams into discussions and decisions about workflow redesign, task allocation, communication norms, and measurement. This transparent approach may not only produce more realistic workflows, but can also establish trust between virtual and bedside roles from the start.

Trust and shared responsibility for iteratively creating a robust care delivery model is the foundation for program stability, refinement, and scale. Connecting leaders and teams with the “what” and “why” before defining “how” a virtual care program will evolve is crucial to buy-in, acceptance, adoption, and ultimately ownership of the new processes.

Virtual Nursing as a Near-Term Workforce Solution

Unlike conventional software deployment, the success of virtual nursing cannot be measured by technical readiness alone. Integrations, reliability, and usability matter, but they are only one part of the equation.

Virtual nursing changes how work is distributed, how handoffs occur, and how clinicians collaborate. It is a care model that is built on an agile technology platform, not a rigid technology solution in search of a problem to solve. Successful virtual care models mature through continuous evaluation of outcomes and success metrics, data-driven iteration, and widespread dissemination of shared learnings.

It may be easy to forget that the workflows, staffing models, and best practices we consider routine took years to stabilize. This is an important perspective to remember as virtual nursing practice and integrations evolve. The nursing workforce has carried extraordinary strain for more than a decade, and many traditional solutions focus on long-horizon strategies, such as expanding education pipelines, addressing retention, or modernizing licensure. Those efforts matter, but effectively addressing the looming issues of the day will also require a full redesign of the clinical care delivery model.

Virtual nursing is one of the most promising and actionable models that can reduce burden, increase capacity, and improve care in the near future, provided the foundational elements are fully embraced and executed. If we allow early friction and avoidable barriers to eclipse that potential, we risk discarding an approach that could meaningfully support nurses when elegant solutions are urgently needed.

The opportunity is not merely to deploy technology, but to build a sustainable clinical workforce that is properly resourced and supported to deliver world-class care and elevate the patient experience of care.

Readers Write: Healthcare Needs a Data Liquidity Disruption

February 9, 2026 Readers Write 4 Comments

Healthcare Needs a Data Liquidity Disruption
By Sriram Devarakonda

Sriram Devarakonda, MSEE is co-founder and CTO of Cardamom.


Healthcare has long promised that data would transform research, precision medicine, and patient outcomes. Yet progress remains painfully slow. Data silos and fear-driven restrictions keep critical information trapped in systems that were designed more to contain than to share.

Real transformation in targeted care, population health, and clinical research won’t come from yet another interoperability initiative or API. It requires a more fundamental shift: a data liquidity disruption that treats data as something meant to move, not sit still.

What’s holding healthcare back?

Healthcare’s challenges have evolved dramatically over the past three decades, and they will continue to change just as profoundly in the decade ahead. Thirty years ago, the priority was basic connectivity: enabling continuity of care across disparate systems through point-to-point integrations, with HL7 playing a foundational role.

Ten years ago, the rise of web and mobile technologies demanded a modernized approach to interoperability, giving rise to newer API-based standards, such as FHIR, that enabled digital health innovation.

Today, and looking forward, the focus has shifted yet again. Healthcare’s most pressing challenges, from cancer to diabetes to Alzheimer’s, require the effective use of data and AI at scale, challenges that impact millions of lives and drive national healthcare costs. Solving them demands more than messaging standards alone. Our future cannot depend on HL7 and FHIR by themselves. It requires true data liquidity, real-time intelligence, and platforms that are designed for learning health systems.

Before we delve into how we prepare for the future, we should look at a few reasons that data liquidity is a challenge today.

  • Proprietary mindsets. Healthcare systems and vendors have long viewed data as an asset to guard, not a resource to share. Competitive, contractual, and legal anxieties create barriers that go beyond technology. They are cultural and structural.
  • Fragmented data standards. Despite progress with HL7 and FHIR frameworks, true standardization remains elusive. Data formats, definitions, and governance models still vary widely, making even “standard” exchanges complex and time-consuming to implement.
  • Privacy and compliance fears. With HIPAA, GDPR, and a growing patchwork of state regulations, organizations often err on the side of caution. The result is a compliance-first posture that, while understandable, often stifles innovation and progress.
  • Legacy infrastructure. Many health systems are still operating on decades-old IT foundations that were designed for billing and clinical care, not for modern data exchange. Retrofitting these systems to support real-time data liquidity is costly and complex.
  • Sheer complexity of technologies. A large barrier to progress is the sheer number of different technology systems even within the same ecosystem. EHRs, ERPs, and countless vendor-managed applications add an additional layer of complexity that’s challenging to overcome.

Why a disruption is inevitable and necessary

Healthcare’s approach to data is slowing progress. Patients want connected experiences, researchers need faster access to data, and providers and payers are under pressure to deliver better outcomes.

Other industries already allow data to flow securely in real time, enabling smarter decisions and personalization. Healthcare must make the same shift, from owning data to stewarding it, and from locking it away to sharing it responsibly. Those who adapt will lead; those who don’t will fall behind.

Preparing for the data liquidity era

How can healthcare organizations prepare for the inevitable disruption?

  • Invest in platforms, not point solutions. Healthcare systems must invest in modular, cloud-based platforms that allow data to move freely and securely. That means creating enterprise-wide shared data access on modern data platforms that can evolve alongside transactional systems, rather than remaining frozen in time.
  • Embrace interoperability as a strategy, not a checkbox. Compliance-driven interoperability creates connections, not capability. Treating data sharing as a strategic asset is what turns exchange into impact, fueling innovation, partnerships, and better care coordination.
  • Move from data control to data accountability. As data moves more freely, data maturity becomes even more critical. Clear standards for data quality, consent, and usage help ensure that liquidity doesn’t come at the expense of privacy or ethics. AI has a large role to play here when it comes to interpretation and standardization.
  • Standardize clinical workflows. The more healthcare organizations can standardize their clinical workflows and protocols now, the fewer challenges they will have later. Clear, consistent processes make it easier to adopt new tools, train staff, and share data safely.
  • Align data strategy to business and clinical outcomes. Data liquidity drives real, downstream impact on both business and clinical outcomes. When tied to clear, measurable goals, such as reducing denials, accelerating clinical trial enrollment, or improving patient throughput, it becomes a powerful, provable source of ROI.
  • Reimagine the patient’s role. Patients are no longer passive data points; they are active and willing participants. Giving them control over their health data and the ability to share it across providers, researchers, and care teams will accelerate innovation while fostering transparency, trust, and improved outcomes.

The ripple effects of data liquidity

When healthcare achieves true data liquidity, the impact will be profound. Researchers will be able to identify patterns across populations in days, not years. Providers will make more informed decisions at the point of care. Health systems will predict and prevent crises before they occur. Most importantly, patients will benefit from a system that understands them as whole individuals, not just episodes of care that are scattered across disconnected databases.

Healthcare is long overdue for the same data transformation other industries have already embraced, one that allows data to move freely, connect seamlessly, and create value wherever it goes.

The road to disruption won’t be easy, but it is necessary. The barriers to data movement have been standing for too long and the cost of inaction is too high.

Readers Write: Why Patient Wait Times Still Define the Clinic Experience in 2026

February 2, 2026 Readers Write 1 Comment

Why Patient Wait Times Still Define the Clinic Experience in 2026
By Inger Sivanthi

Inger Sivanthi, MBA is CEO at Droidal.


Outpatient clinics in 2026 look different from those of a decade ago. Scheduling is online. Records are electronic. Patient portals are standard. Most organizations have already spent the money that was required to modernize access.

Long patient wait times have not disappeared. Waiting rooms still fill early. Appointment times slip before the morning is half over. Front desk staff often begin the day responding to issues rather than managing a steady flow. This happens even when staffing levels are reasonable and schedules appear balanced.

When delays show up this early, technology is rarely the cause. The problem usually lies in how the day begins.

Discussions about wait times often focus on staffing gaps, provider availability, or late arrivals. Those explanations only go so far. In many clinics, the bigger issue is incomplete preparation that spills into the first hours of the day.

Much of the information required for a visit is not fully settled when patients arrive. Demographic details are outdated. Insurance coverage has changed. Required documentation is often left unresolved. The issues show up at the front desk, not in reports.

The front desk absorbs the impact of this unfinished work. Questions that should have been resolved earlier get handled under time pressure. Small corrections stack up. By mid-morning, the schedule is already off course.

Digital intake has reduced paperwork, but it has not changed the timing of the work. Patients may submit forms ahead of time, yet staff still need to review, verify, and correct information close to arrival. Insurance questions require follow-up. Consents must be confirmed. Records must align before a visit can proceed smoothly.

Attempts to improve wait times often focus on making check-in faster. More kiosks are installed. Workflows are tightened. Tasks are automated where possible. These steps improve efficiency, but the constraint remains. As long as preparation is concentrated at the start of the visit, the front desk stays under pressure.

Some organizations now treat intake as work that should be largely completed before the patient enters the clinic. When information is settled earlier, the start of the day becomes more stable and less reactive.

To help with earlier preparation, some clinics use pre-visit review tools that scan intake information before the appointment. Missing data, coverage discrepancies, and unresolved items are flagged while staff still have time to respond. Problems that would otherwise surface at the front desk are handled earlier, when schedules are not yet under strain.

These systems do not replace staff judgment. They point attention to likely trouble spots so issues can be resolved before patient flow is affected. Moving this work earlier reduces the amount of recovery required once the clinic is busy.
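A minimal sketch of the kind of pre-visit review described above might look like the following. The field names and rules are hypothetical; real tools also query payer eligibility feeds and document-management systems rather than relying on intake data alone.

```python
from datetime import date

# Hypothetical pre-visit intake check. Field names and rules are invented
# for illustration; real tools also query payer eligibility feeds.
REQUIRED_FIELDS = ["name", "dob", "phone", "insurance_member_id"]

def previsit_flags(record: dict, visit_date: date) -> list:
    """Return a list of issues to resolve before the patient arrives."""
    flags = [f"missing: {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    coverage_end = record.get("coverage_end")
    if coverage_end and coverage_end < visit_date:
        flags.append("coverage expired before visit")
    if not record.get("consent_on_file"):
        flags.append("consent form not on file")
    return flags

record = {
    "name": "Pat Example",
    "dob": "1980-04-02",
    "phone": "555-0100",
    "insurance_member_id": "XYZ123",
    "coverage_end": date(2026, 1, 31),
    "consent_on_file": True,
}
print(previsit_flags(record, date(2026, 2, 10)))  # ['coverage expired before visit']
```

Run against tomorrow’s schedule rather than at check-in, even a simple screen like this moves the coverage question to a moment when staff have hours, not minutes, to resolve it.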

Check-in becomes steadier. Front desk staff spend less time resolving avoidable issues. Schedules hold closer to plan across the morning. Patients spend less time waiting because fewer problems reach the front of the workflow.

There is concern that completing intake earlier removes personal interaction. Staff often report the opposite. When documentation and coverage issues are addressed ahead of time, conversations at check-in are calmer and less rushed. Visits begin with clearer expectations.

Patient wait times persist in 2026 because too much essential work still occurs at the moment of arrival. Clinics that complete preparation earlier and use pre-visit review selectively tend to operate with greater stability. The difference shows up in a day that runs closer to plan.

Readers Write: Killing the Clipboard: Cloud Fax is the Bridge to Patient-Centric Data Access

January 28, 2026 Readers Write

Killing the Clipboard: Cloud Fax is the Bridge to Patient-Centric Data Access
By Bevey Miner

Bevey Miner is a healthcare strategist at eFax, a Consensus Cloud Solutions brand.


The Trump Administration’s renewed focus on interoperability has reignited the long-standing calls for healthcare to “Kill the Clipboard.” This movement aims to eliminate the administrative burden and data silos that are caused by paper-based processes, allowing for near-instant access to searchable, actionable patient information.

The industry broadly supports modernization efforts, with patient access at the forefront. But we need to ensure that this digital transformation doesn’t leave small, rural, and under-resourced communities behind.

The paper problem: why change takes time

We cannot wait for every provider to achieve a perfect, fully digital state before we start delivering on the promise of interoperability. Patients must have access to their data now, even if parts of the industry are still using clipboards and paper fax.

The federal initiative calls for near-instant patient access to health records, along with real-time patient data that providers can use to dramatically speed care coordination. Paper records transmitted over outdated fax machines don’t support, and often impede, that goal. The administration is leaning heavily on data networks and vendors to streamline the transmission of information between healthcare providers while modernizing standards with FHIR APIs.

Conceptually, the future we are all working towards is faster data access, searchable and actionable information to improve care, and seamless communication between care teams. This idealized future state fails to account for the practical limitations that are facing many foundational healthcare organizations. 

Twenty-nine percent of providers report that they lack the financial resources needed to deploy the advanced digital infrastructure that today’s interoperability vision requires.

Many organizations, like rural and smaller post-acute care settings, are still playing catch-up since they were excluded from incentives that accompanied the HITECH Act of 2009. While some of these organizations may have an EHR, it may be outdated and not certified. Additionally, it’s not uncommon to find others working with scrappier, home-grown solutions, or even resorting to paper-based and manual processes.

But while these smaller organizations might not have million-dollar EHR platforms, they do have paper fax. To participate in the move to “Kill the Clipboard,” healthcare organizations of all sizes are turning to digital cloud fax.

Cloud fax: healthcare’s guilty pleasure

A recent survey found that 46% of healthcare facilities still use paper fax to send and receive patient data. If the healthcare industry is so dedicated to moving past paper, why do these archaic systems persist?

The simple answer is that, while we are attempting to replace the paper fax machine with a structured data format like FHIR, we still need the next level of communication maturity: cloud fax. Once a fax becomes digital, additional data-sharing capabilities become possible. 

Cloud fax offers all the benefits of paper fax and is much more efficient. It is particularly easy to use and can be fully integrated into other applications via APIs. For decades, it has served as the standard method for document and digital data transmission in healthcare because it checks many boxes. It meets HIPAA and HITRUST standards and is universally compatible with other systems that operate in silos.

Simply put, cloud fax is the most common and accessible form of send-and-receive communication in our industry. Calls to eliminate it outright demonstrate a fundamental unawareness of current operational realities, and of the power of digital transformation to modernize and integrate cloud fax rather than simply discard it.

Send, receive, find: AI-powered digital cloud fax goes the extra mile

Digital cloud fax provides robust send and receive capabilities, but to meet the CMS definition of interoperability, "find" is another key component. To find information, the data must be discoverable. New AI capabilities are helping fax go the extra mile, transforming traditionally unstructured, static documents into structured, actionable insights through intelligent data extraction. This is critical to advancing interoperability, since as much as 80% of healthcare data remains unstructured.

Innovations in machine learning and LLMs make it possible to extract unstructured data from digital faxes, scanned images, TIFFs, and PDFs across nearly any type of health document, including intake content, claims, and handwritten forms, and place it directly into a structured system such as an EHR or a payer workflow. When these AI tools are built on top of digital cloud fax platforms, they leverage a technology that most healthcare organizations already have in place. Implementation is significantly easier and less time-consuming than adding an entirely new system to an organization's already overloaded, fragmented tech stack.
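To illustrate the unstructured-to-structured hand-off described above, the sketch below uses simple pattern matching as a stand-in for an OCR-plus-ML/LLM pipeline. The document text and field names are invented for the example; a production system would use trained models rather than regexes, but the shape of the output, labeled fields ready for an EHR or payer workflow, is the same.

```python
import re

# Free text as it might arrive after OCR on an inbound digital fax.
# Document content is invented for illustration.
fax_text = """
PATIENT INTAKE
Name: Jane Doe
DOB: 04/12/1967
Reason for visit: medication refill
"""

def extract_fields(text: str) -> dict:
    """Pull labeled fields out of unstructured fax text.

    A regex stands in here for the ML/LLM extraction step, to show
    how free text becomes a structured record a downstream system
    can ingest.
    """
    patterns = {
        "name": r"Name:\s*(.+)",
        "dob": r"DOB:\s*([\d/]+)",
        "reason": r"Reason for visit:\s*(.+)",
    }
    record = {}
    for field, pattern in patterns.items():
        match = re.search(pattern, text)
        if match:
            record[field] = match.group(1).strip()
    return record

record = extract_fields(fax_text)
print(record)
```

The resulting dictionary is what would be mapped into discrete EHR fields, which is what makes the faxed content discoverable, the "find" capability discussed above.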

Delivering superior reliability and security, intelligent digital cloud fax acts as a connector between various types of data files and formats, sharing both structured and unstructured data between healthcare organizations that span various levels of digital sophistication.

Time to face the fax

For many healthcare organizations, digital cloud fax isn't a roadblock, but an accelerator, enabling them to keep up with more tech-savvy counterparts without heavy investment in rip-and-replace technology. It also supports the ongoing FHIR mandates and regulatory changes impacting providers at every level.

By recognizing digital cloud fax as a necessary part of day-to-day operations, as it is at most healthcare organizations, we can better understand how this tool can help us reach interoperability faster, while facilitating the digital transformation of as many organizations as possible.

Healthcare’s reliance on digital cloud fax should not be treated as a guilty secret. Instead, it’s an equalizer and an opportunity. Once we realize its full potential, interoperability initiatives will be more achievable and successful than ever.
