AI We Can Trust: Ensuring Clinical AI Applications Are Fit for Purpose

Tuesday, March 10, 2026 11:30 AM to 12:30 PM · 1 hr. (US/Pacific)
Level 3 | Murano 3301A
Industry Solutions Sessions
Artificial Intelligence in Healthcare

Information

Generative AI is reshaping clinician–computer interaction, but adoption is outpacing policy, governance, and validation—raising safety and privacy concerns. In this lively panel, Vanderbilt’s Dr. Peter Embí will introduce algorithmovigilance; Dr. Rebecca Mishuris (CMIO, MGB) will add real-world context; and Don Woodlock (Head of Healthcare Solutions, InterSystems) will contribute a vendor perspective on ensuring clinical AI is fit for purpose. 

Target Audience
Chief Innovation Officer; Chief Quality Officer and Chief Clinical Transformation Officer; CIO/CTO/CTIO/Senior IT; Clinical Informaticist; Clinical Technologist; CMIO/CMO; Data Scientist
Level
Introductory
Format
Panel Discussion
Learning Objective #1
Explain the concept of algorithmovigilance and why continuous monitoring, validation, and governance are critical to ensuring the safety, effectiveness, and trustworthiness of clinical AI.
Learning Objective #2
Identify real-world risks and challenges associated with deploying generative AI in clinical settings, including privacy, bias, workflow impact, and clinician trust, and describe how health systems are addressing them in practice.
Learning Objective #3
Evaluate practical approaches for determining whether a clinical AI application is “fit for purpose,” including the roles of healthcare organizations, clinicians, and vendors in responsible design, deployment, and lifecycle management.
Session #
25
