Your GenAI Stack is Already an Attack Surface: Here’s what CISOs and Security Leaders must know

Overview:

In 2023, Samsung experienced a confidential data leakage when employees put sensitive source code and internal meeting notes into ChatGPT. This data was exposed to OpenAI and potentially other users.

Your organisation’s data is not secure in GenAI platforms, and traditional penetration testing misses GenAI logic manipulation. Would your AI stack survive a red team exercise?

This live Q&A tackles these concerns head-on. It’s a focused discussion on the real security implications of deploying generative AI in production. From data leakage incidents to emerging risks such as model poisoning and output manipulation, the panel will examine how improper LLM usage and weak controls can expose sensitive data, compromise model integrity, and create entirely new attack paths.

Built for CISOs, Heads of IT, CTOs, and senior security leaders, the session will highlight how red teaming, threat modelling, and security by design must evolve for AI workloads on AWS, particularly in the face of non-deterministic testing blind spots, IAM misconfigurations, and observability gaps.

Key Takeaways:

"A 2024 report found 27% of Australian workers were using generative AI secretly at work - also called ‘Shadow AI."

This Webinar is For:

This Webinar is Not For:

Webinar attendees will receive:

CLOSE MENU