Artificial Intelligence and Employer Health Plans: Risks, Rules, and Readiness

Artificial intelligence (AI) is no longer a futuristic concept; it is a present-day force reshaping healthcare delivery in the United States. From triaging symptoms and analyzing radiology scans to adjudicating claims and predicting chronic disease progression, AI is being embedded into nearly every aspect of healthcare. Yet while the technology races ahead, the legal and regulatory frameworks that govern AI’s use in clinical settings remain incomplete and ambiguous. Most AI solutions operate under legacy categories such as Software as a Medical Device (SaMD) or, in some cases, fall entirely outside formal oversight (1).

For self-funded employers, who bear the direct financial risk for their employees’ healthcare, this creates a stark paradox: AI holds the promise of efficiency, cost reduction, and better outcomes, but it also introduces uncertainty, liability exposure, and risks to data privacy, clinical quality, and equity. Consider this: if an AI-enabled tool denies a necessary service or makes a clinical error that causes harm, who is accountable? Today, there are no definitive answers.

Legislative Momentum: AI Is Being Pushed to the Clinical Frontlines

Recent legislative developments make clear that AI will not remain confined to administrative back ends. A growing number of bills propose expanding AI’s authority in clinical decision-making, with the potential to significantly alter the healthcare landscape:

H.R.206 and H.R.238 (Healthy Technology Act): These bills propose granting FDA-approved AI systems legal recognition as prescribing practitioners under state law. If enacted, they would mark the first federal endorsement of AI as a direct clinical actor (2, 3).

S.1399 (Health Tech Investment Act): This bill would establish a temporary Medicare payment classification for AI-powered clinical services, providing both a financial incentive and legitimacy for AI use in diagnosis, care management, and more (4).

H.R.7381 (HEALTH AI Act): While it would not grant clinical authority, this bill allocates NIH funding for research into generative AI applications in healthcare, paving the way for future integration (5).

H.R. 1 (One Big Beautiful Bill Act): This bill was initially introduced with a sweeping provision that would have preempted any state or local laws regulating AI systems used in interstate commerce for ten years. That controversial clause, however, was stripped from the bill by a bipartisan 99–1 Senate vote before final passage. As a result, there is no federal ban on state-level AI regulation (6, 7).

This outcome is critically important for employers. It means:

  • States remain free to regulate AI use in healthcare, including how it impacts prior authorizations, coverage decisions, and specialty drug access.

  • Employer plans will now face a patchwork of evolving state rules, some requiring algorithmic transparency, human oversight, or appeal rights for AI-driven denials.

  • Regulatory protections will vary, so employers cannot assume uniform standards across their member populations.

In this regulatory vacuum, contracts—not federal law—remain the primary source of protection for self-funded employers navigating AI-enabled healthcare systems.

Strategic Guidance for Self-Funded Employers

In this fast-evolving environment, employers cannot afford to take a wait-and-see approach. To meet their fiduciary responsibilities and ensure high-quality care, self-funded plans should act now.

Audit and Map Current AI Usage

Conduct a full inventory of AI applications currently in use across your health benefits ecosystem—including care navigation platforms, claims adjudication systems, PBM algorithms, telehealth triage tools, and utilization management vendors. Identify whether these tools are governed by clinical oversight, and assess how decisions are being made.
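
A minimal sketch of what such an inventory record could look like is below, in Python. The field names, vendor categories, and the FormularyBot example are hypothetical illustrations, not an industry-standard schema; the point is to capture, for every tool, who runs it, what it decides, and whether a human clinician stands behind the decision.

# Hypothetical inventory record for AI tools in a health benefits ecosystem.
# Field names and categories are illustrative assumptions, not a standard.
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    vendor: str                 # e.g., a TPA, PBM, telehealth, or UM vendor
    tool_name: str
    function: str               # "claims adjudication", "triage", "prior authorization", ...
    decision_role: str          # "advisory" vs. "determinative"
    clinical_oversight: bool    # does a licensed clinician review outputs?
    human_appeal_path: bool     # can members appeal an AI-driven decision?
    data_shared: list[str] = field(default_factory=list)  # PHI categories sent to the vendor
    contract_has_audit_rights: bool = False

inventory = [
    AIToolRecord(
        vendor="ExamplePBM",            # hypothetical vendor and tool
        tool_name="FormularyBot",
        function="prior authorization",
        decision_role="determinative",
        clinical_oversight=False,
        human_appeal_path=True,
        data_shared=["claims history", "diagnosis codes"],
    ),
]

# Flag the highest-risk combination: determinative decisions with no clinical oversight.
for record in inventory:
    if record.decision_role == "determinative" and not record.clinical_oversight:
        print(f"REVIEW: {record.vendor}/{record.tool_name} makes {record.function} "
              "decisions without clinical oversight")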

Clarify Liability and Risk Ownership

Engage legal counsel to determine where liability sits when AI influences or makes a clinical or coverage decision that results in harm, denial, or inequitable outcomes. Ensure contracts with third-party administrators (TPAs), PBMs, and vendors explicitly define indemnity, audit rights, and AI-related responsibilities. Consider reviewing your organization’s liability insurance and stop-loss protections accordingly.

Monitor Federal and State Policy Developments Proactively

Assign an internal or external expert to track AI-related legislation at both the state and federal levels. While H.R. 1 ultimately did not strip states of their regulatory power, several states are moving independently to introduce healthcare-specific AI regulations. Early awareness will enable contract adjustments before compliance gaps emerge.
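
One lightweight way to support that tracking is the Library of Congress’s public Congress.gov API (api.congress.gov; a free API key is required). The Python sketch below polls the latest recorded action for the bills discussed above. The endpoint shape follows the published v3 documentation, but treat this as a starting point to verify against the current docs; the CONGRESS_API_KEY environment variable is a placeholder.

# Poll the latest recorded action for AI-related bills via the Congress.gov API.
# Endpoint shape per the public v3 docs; verify before relying on it.
import os

import requests

API_KEY = os.environ["CONGRESS_API_KEY"]  # placeholder; request a free key at api.congress.gov

BILLS = [  # (congress, bill type, number) for the bills discussed above
    (119, "hr", 238),   # Healthy Technology Act of 2025
    (119, "s", 1399),   # Health Tech Investment Act
    (119, "hr", 1),     # One Big Beautiful Bill Act
]

for congress, bill_type, number in BILLS:
    url = f"https://api.congress.gov/v3/bill/{congress}/{bill_type}/{number}"
    resp = requests.get(url, params={"api_key": API_KEY, "format": "json"}, timeout=30)
    resp.raise_for_status()
    latest = resp.json()["bill"].get("latestAction", {})
    print(f"{bill_type.upper()}.{number}: {latest.get('actionDate')} - {latest.get('text')}")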

Reassess Governance and Oversight Protocols

Establish or strengthen internal governance frameworks that cover AI ethics, bias mitigation, explainability, and patient protection. Require vendors to disclose how their algorithms function, how they are tested for fairness and accuracy, and what recourse exists when errors occur. Due diligence is no longer optional—it is a fiduciary obligation.
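
Even a simple disparity screen can make “tested for fairness” concrete enough to write into a vendor contract. The Python sketch below compares AI-driven denial rates across member groups using hypothetical numbers; the 1.25x flag threshold is an illustrative choice (loosely the inverse of the four-fifths rule used in adverse-impact analysis), not a regulatory standard for health plans.

# Hypothetical screen for disparate AI-driven denial rates across member groups.
# All numbers and the 1.25x threshold are illustrative assumptions only.
denials = {
    "group_a": (120, 1000),  # (AI-driven denials, total AI-reviewed requests)
    "group_b": (190, 1000),
}

rates = {group: d / n for group, (d, n) in denials.items()}
reference = min(rates.values())  # most favorable (lowest) denial rate as baseline

for group, rate in rates.items():
    ratio = rate / reference if reference else float("inf")
    status = "FLAG for review" if ratio > 1.25 else "ok"
    print(f"{group}: denial rate {rate:.1%} ({ratio:.2f}x reference) [{status}]")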

AI Will Reshape Not Just How Care Is Delivered—but Who Delivers It

Even without federal preemption, AI is rapidly advancing toward frontline roles in healthcare—influencing diagnoses, triaging care, enabling prior authorizations, and even shaping treatment decisions. In this climate, self-funded employers must lead, not follow, in defining how AI is used across their health plans.

By conducting detailed audits, securing legal protections, updating governance protocols, and staying alert to regulatory trends, employers can balance the promise of innovation with the protection of their members—and the long-term sustainability of their benefits programs.

Contributions by: Alex Petrey, PharmD Candidate, 2026

Strategic Consulting by AxumRx.com

AxumRx.com helps employers navigate AI in prescription benefits, ensuring safe, ethical, and effective use. We tackle risks like bias and privacy, provide guidance on implementation, and empower teams with the knowledge to make informed, member-focused decisions.

References

  1. FDA. (2024). Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions.

  2. Congress.gov. (2023). H.R.206 – Healthy Technology Act of 2023. https://www.congress.gov/bill/118th-congress/house-bill/206

  3. Congress.gov. (2025). H.R.238 – Healthy Technology Act of 2025. https://www.congress.gov/bill/119th-congress/house-bill/238

  4. Congress.gov. (2025). S.1399 – Health Tech Investment Act. https://www.congress.gov/bill/119th-congress/senate-bill/1399

  5. Congress.gov. (2024). H.R.7381 – HEALTH AI Act of 2024. https://www.congress.gov/bill/118th-congress/house-bill/7381

  6. Congress.gov. (2025). H.R. 1 – One Big Beautiful Bill Act. https://www.congress.gov/bill/119th-congress/house-bill/1

  7. Business Insider. (2025). Senators strike AI provision from ‘One Big Beautiful Bill’ in near-unanimous vote. https://www.businessinsider.com/senators-strike-ai-provision-from-big-beautiful-bill-2025-7

  8. Kramer, D. B., Xu, S., & Kesselheim, A. S. (2012). Regulation of Medical Devices in the United States and European Union. New England Journal of Medicine, 366(9), 848–855. https://doi.org/10.1056/NEJMhle1113918