Black Boxes in White Coats: Making Artificial Intelligence Devices Secure by Design

Thursday, March 12, 2026 10:00 AM to 11:00 AM · 1 hr. (US/Pacific)
Level 5 | Palazzo L
Education Sessions
Artificial Intelligence in Healthcare

Information

Artificial intelligence (AI) is accelerating into the medical device landscape at an unprecedented pace—propelled by legislative momentum, federal funding incentives and expanding clinical utility. As Congress prepares to increase reimbursement for AI-enabled technologies, the floodgates are about to open. But with exponential innovation comes exponential risk.

This session—featuring members of the Healthcare and Public Health Sector Coordinating Council’s (HSCC) AI in Healthcare Task Group—will present critical findings from the group’s forthcoming 2026 report, AI Secure by Design for Medical Devices. As AI becomes the new battleground for cyberattacks, traditional device security frameworks are no longer sufficient. Compromised algorithms can not only mislead clinicians but also actively weaponize clinical operations, posing a dual threat to patient safety and organizational integrity.

Attendees will explore the real-world consequences of hijacked AI, regulatory gaps, and why securing AI—not just the device—must be a top priority. The session will deliver practical guidance for medical device manufacturers and healthcare systems, including procurement safeguards, governance strategies, and design principles tailored to AI’s unique attack surface.

Take-home message: AI will revolutionize care—but without secure-by-design principles, it may also become one of healthcare’s greatest vulnerabilities. Now is the moment to act.

Topic
Clinical AI Solutions for Care Delivery and Patient Outcomes
Target Audience
CFO/VP Finance/Compliance Officer, CISO/CSO, Clinical Engineering Professional
Level
Introductory
Format
Best Practice
CEU Type
CAHIMS, CPDHTS, CPHIMS, PMI/PDU
Contact Hours
1.00
Learning Objective #1
Explain the emerging risks introduced by AI-enabled medical devices—including model drift, adversarial inputs and data poisoning—and how these risks differ from traditional device cybersecurity concerns
Learning Objective #2
Identify the critical stakeholders across medical device manufacturers and healthcare delivery organizations responsible for securing AI throughout the development, deployment and operational lifecycle
Learning Objective #3
Discuss actionable strategies for implementing “secure by design” principles specific to AI, including procurement language, secure model update pipelines and runtime protections
Session #
189
