

Hidden Dangers of AI: What You Don't See CAN Hurt You
Artificial intelligence is transforming healthcare and security operations, but its risks often remain dangerously overlooked. This session sheds light on the hidden dangers organizations face when implementing AI tools, whether they are leveraging generative AI, agentic AI, or building proprietary large language models (LLMs). Attendees will learn how "Shadow AI" usage, the adoption of AI applications without proper governance or visibility, can lead to unintended vulnerabilities such as data leakage, sensitive prompt exposure, or unauthorized external access.
The discussion will also highlight specific risks, including the growing use of webhooks within AI workflows, improper configuration of local LLMs, and the challenges posed by unsecured API connections. Designed as a thought-provoking exploration of risk, this session will empower attendees to critically evaluate their visibility and security strategy in an environment where AI adoption is accelerating.
Rather than focusing purely on solutions, this session emphasizes uncovering risks and challenging attendees to assess their readiness to secure AI implementations effectively and holistically. They will leave with provocative questions to bring back to their security teams to guide internal discussion.

