Clinical AI Safety Isn’t Magic: Building Frameworks Health Systems Can Trust

Wednesday, March 11, 2026 10:45 AM to 11:15 AM · 30 min. (US/Pacific)
Exhibition Main Stage | Level 2 | Hall A | Booth 270
Artificial Intelligence in Healthcare

Information

As clinical AI rapidly enters healthcare workflows, health systems are increasingly concerned not just with accuracy but also with safety, transparency, and long-term clinical impact. Clinical AI must be governed with the same rigor as patient care itself.

In this session, Jonathan H. Chen, MD, PhD (Stanford University), Joshua Geleris, MD (SmarterDx), and Scott Fleming, PhD (SmarterDx) explore how healthcare organizations can safely evaluate, adopt, and govern clinical AI as it moves from experimental tools to real-world clinical infrastructure. 

Drawing on Stanford research and real-world deployment experience, the speakers examine the rapid progress of large language models in medical reasoning while highlighting why benchmark performance alone is insufficient for safe deployment. The session presents a practical framework for AI safety that treats AI as a dynamic production system, offering health system leaders a clear checklist for evaluating vendors and ensuring clinical AI is transparent, accountable, and safe for patient care.


Target Audience
CMIO/CMO, Physician or Physician Assistant
Format
30-Minute Main Stage Session
Session #
MS11
