This session explores offensive AI security, teaching students to attack and defend ML models and AI systems. Students learn adversarial attack techniques, model inversion, data poisoning, and AI red teaming methodologies to understand and mitigate AI-specific threats.
By the end of this session, students will be able to:
Execute adversarial attacks against ML models
Understand how model vulnerabilities arise and how attackers exploit them
Conduct AI red team assessments
Implement defenses against adversarial attacks
Test AI systems for security weaknesses
Topics covered include:
Adversarial machine learning fundamentals
Evasion attacks on ML models (see the first sketch after this list)
Model inversion and extraction attacks (second sketch)
Data poisoning techniques (third sketch)
Prompt injection for LLMs (fourth sketch)
AI red teaming methodologies
Defending against adversarial attacks
ML model security testing frameworks
Bias exploitation in AI systems
Join this session to advance your DevSecOps and AI security skills.