11
4 hours2 hours lecture + 2 hours hands-on labs

Adversarial Machine Learning and AI Red Teaming

This session explores offensive AI security, teaching students to attack and defend ML models and AI systems. Students learn adversarial attack techniques, model inversion, data poisoning, and AI red teaming methodologies to understand and mitigate AI-specific threats.

Learning Objectives

Execute adversarial attacks against ML models

Understand model vulnerabilities and exploitation

Conduct AI red team assessments

Implement defenses against adversarial attacks

Test AI systems for security weaknesses

Topics Covered

1

Adversarial machine learning fundamentals

2

Evasion attacks on ML models

3

Model inversion and extraction attacks

4

Data poisoning techniques

5

Prompt injection for LLMs

6

AI red teaming methodologies

7

Defending against adversarial attacks

8

ML model security testing frameworks

9

Bias exploitation in AI systems

Skills You'll Gain

Adversarial MLModel ExploitationData PoisoningAI Red TeamingLLM Security Testing

Ready to Get Started?

Join this session and advance your DevSecOps and AI security skills