

Black Boxes in White Coats: Making Artificial Intelligence Devices Secure by Design
Artificial intelligence (AI) is accelerating into the medical device landscape at an unprecedented pace—propelled by legislative momentum, federal funding incentives, and expanding clinical utility. As Congress prepares to increase reimbursement for AI-enabled technologies, the floodgates are about to open. But with exponential innovation comes exponential risk.

This session—featuring members of the Healthcare and Public Health Sector Coordinating Council's (HSCC) AI in Healthcare Task Group—will present critical findings from the group's forthcoming 2026 report, AI Secure by Design for Medical Devices. As AI becomes the new battleground for cyberattacks, traditional device security frameworks are no longer sufficient. Compromised algorithms can not only mislead clinicians but also actively weaponize clinical operations, posing a dual threat to patient safety and organizational integrity.

Attendees will explore the real-world consequences of hijacked AI, regulatory gaps, and why securing the AI itself—not just the device—must be a top priority. The session will deliver practical guidance for medical device manufacturers and healthcare systems, including procurement safeguards, governance strategies, and design principles tailored to AI's unique attack surface.

Take-home message: AI will revolutionize care—but without secure-by-design principles, it may also become one of healthcare's greatest vulnerabilities. Now is the moment to act.

