Your organisation’s data is not secure in GenAI platforms, and traditional penetration testing misses GenAI logic manipulation. Would your AI stack survive a red team exercise?
This live Q&A tackles these concerns head-on with a focused discussion of the real security implications of deploying generative AI in production. From data-leakage incidents to emerging risks such as model poisoning and output manipulation, the panel will examine how improper LLM usage and weak controls can expose sensitive data, compromise model integrity, and open entirely new attack paths.
Built for CISOs, Heads of IT, CTOs, and senior security leaders, the session will highlight how red teaming, threat modelling, and security by design must evolve for AI workloads on AWS, particularly in the face of non-deterministic testing blind spots, IAM misconfigurations, and observability gaps.