(EHL Poster) Evidence-Informed Policy Analysis: Standardizing AI in Shared Decision-Making

Wednesday, March 11, 2026 10:00 AM to 4:00 PM · 6 hr. (US/Pacific)
Level 5 | Palazzo G
Workforce ConneXtions

Information

This policy analysis examines gaps in strategies for integrating ethical, transparent, and patient-centered artificial intelligence–supported shared decision-making (AI-SDM) into healthcare. Three policy alternatives are evaluated against practical and evaluative criteria: mandated AI reasoning with SDM documentation, voluntary AI standards, and third-party auditing. The literature shows that AI-SDM enhances patient trust, engagement, equity, and accountability, underscoring the need for thoughtful implementation. Future directions should prioritize stakeholder engagement, practical implementation strategies, and ongoing evaluation to ensure the integration of ethical, standardized, auditable, and patient-centered AI that supports high-quality decision-making.

Level
Introductory
Format
Case Study
Learning Objective #1
Analyze the impact of policy alternatives on patient trust, engagement, and equity in AI-SDM.
Learning Objective #2
Compare mandated AI reasoning, voluntary AI standards, and third-party auditing in terms of transparency, accountability, and clinical outcomes.
Learning Objective #3
Evaluate the ethical and practical considerations of implementing AI reasoning with documented SDM in electronic health records (EHRs).
Learning Objective #4
Apply policy findings to develop actionable strategies for AI-SDM integration in clinical workflows.
Learning Objective #5
Translate policy insights into measurable improvements in patient-centered AI-supported care.
Session #
WFC-2.3
