

Building AI Safety Guardrails Health Systems Can Trust
As clinical AI rapidly enters healthcare workflows, health systems are increasingly concerned not only with accuracy but also with safety, transparency, and long-term clinical impact. Clinical AI must be governed with the same rigor as patient care itself. For any given vendor, how do you assess the risk?
In this session, we’ll provide practical guidance on how to evaluate clinical AI. Joshua Geleris, MD, head of product and data science at Smarter Technologies, will be joined by a leading advisor in clinical AI safety to introduce the vision behind the Clinical AI Safety Institute, an initiative focused on defining measurable, verifiable standards for safe clinical AI deployment. Drawing on real-world experience across dozens of health systems, they will examine why modern AI safety requires more than static frameworks: it combines adaptive testing, human-in-the-loop review, audit metrics, and continuous feedback loops to prevent hallucinations, surface errors early, and protect both patients and clinicians. Attendees will leave with a practical understanding of what “clinical AI safety” looks like in practice, and of how health systems can confidently adopt AI while maintaining trust, accountability, and clinical excellence.


