Building AI Safety Guardrails Health Systems Can Trust

Wednesday, March 11, 2026 10:45 AM to 11:15 AM · 30 min. (US/Pacific)
Exhibition Main Stage | Level 2 | Hall A | Booth 270
Artificial Intelligence in Healthcare

Information

As clinical AI rapidly enters healthcare workflows, health systems are increasingly concerned not only with accuracy but also with safety, transparency, and long-term clinical impact. Clinical AI must be governed with the same rigor as patient care itself. For any given vendor, what is your risk?

In this session, we’ll provide guidance on how to evaluate AI. Joshua Geleris, MD, head of product and data science at Smarter Technologies, will be joined by a leading advisor in clinical AI safety to introduce the vision behind the Clinical AI Safety Institute, a unique initiative focused on defining measurable, verifiable standards for safe clinical AI deployment. Drawing on real-world experience across dozens of health systems, they will examine how modern AI safety requires more than static frameworks: it combines adaptive testing, human-in-the-loop review, audit metrics, and continuous feedback loops to prevent hallucinations, surface errors early, and protect both patients and clinicians. Attendees will leave with a practical understanding of what “clinical AI safety” looks like in practice, and how health systems can confidently adopt AI while maintaining trust, accountability, and clinical excellence.


Target Audience
CMIO/CMO, Physician or Physician Assistant
Format
30-Minute Main Stage Session
Session #
MS11
