Students pivot to the defender/builder perspective: creating an AI-integrated application and ensuring it is built securely. This module ties together DevSecOps (building and deploying the app) with the offensive mindset (anticipating how an attacker might target the app, especially its AI components). Students will design and implement a full-stack app (front-end, back-end, and AI service) using Python and/or TypeScript, and deploy it through a CI/CD pipeline.
Design secure architecture for AI-integrated applications
Implement full-stack apps with embedded AI services
Mitigate AI-specific vulnerabilities like prompt injection
Deploy applications with secure CI/CD pipelines
Apply threat modeling to AI application security
Secure architecture design for AI applications
Full-stack development with Python/TypeScript
AI service integration (LLM APIs, ML models; see the integration sketch after the topic list)
Input validation and sanitization for AI systems
Prompt injection prevention techniques (both illustrated in the validation sketch after the topic list)
Credential and API key security (see the fail-fast secrets sketch below)
Secure handling of ML training data
CI/CD pipeline with security gates (see the dependency-gate sketch below)
Container security for AI workloads
Monitoring and logging for AI applications (see the audit-logging sketch below)
Threat modeling AI-specific attack vectors
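For the AI service integration topic, a minimal sketch of calling a hosted LLM defensively, assuming the official openai Python client (any provider SDK follows the same pattern); the model name, timeout, and input cap are illustrative choices, not requirements.

```python
import os

from openai import OpenAI  # example SDK; other LLM clients work the same way

# The key comes from the environment, never from source code (see the
# secrets sketch below); the timeout bounds how long a call can hang.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"], timeout=10.0)

def summarize(text: str) -> str:
    """Call the LLM with a fixed system role and a size-bounded user payload."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a summarization service."},
            {"role": "user", "content": text[:4000]},  # cap untrusted input size
        ],
    )
    return response.choices[0].message.content
```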
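For input validation and prompt injection prevention, a minimal sketch of the two baseline defenses usually taught first: keep untrusted text in the user role only, and screen it for known override phrases before it reaches the model. The deny-list patterns and helper names are illustrative; pattern matching alone is bypassable and should be layered with output filtering and least-privilege tool access.

```python
import re

# Illustrative deny-list of common injection phrasings; incomplete by design.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?(system )?prompt", re.IGNORECASE),
]

SYSTEM_PROMPT = "You are a support assistant. Answer only questions about the product."

def screen_input(user_text: str) -> str:
    """Reject obvious injection attempts and bound the input length."""
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(user_text):
            raise ValueError("possible prompt injection detected")
    return user_text[:2000]

def build_messages(user_text: str) -> list[dict]:
    # Untrusted input lives only in the user role; the system prompt stays
    # fixed and is never concatenated with user-supplied text.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": screen_input(user_text)},
    ]
```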
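For credential and API key security, a minimal fail-fast secrets sketch: keys come from the environment (populated by a secrets manager or a CI variable), are validated once at startup, and never appear in source control or in logs. The variable name is illustrative.

```python
import os

def require_secret(name: str) -> str:
    """Fetch a secret from the environment, failing fast if it is missing."""
    value = os.environ.get(name)
    if not value:
        # Name the missing variable, but never echo secret values in errors.
        raise RuntimeError(f"required secret {name} is not set")
    return value

# Loaded once at startup; illustrative variable name.
LLM_API_KEY = require_secret("LLM_API_KEY")
```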
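A security gate in the CI/CD pipeline can be as small as a script the pipeline runs before deploy, with a non-zero exit blocking the release. This dependency-gate sketch assumes pip-audit is installed and treats any known-vulnerable dependency as blocking; the tool choice and the JSON schema handling are assumptions, and SAST or container scanners slot into the same gate.

```python
import json
import subprocess
import sys

def dependency_gate() -> int:
    """Fail the pipeline if pip-audit reports any known-vulnerable dependency."""
    result = subprocess.run(
        ["pip-audit", "--format", "json"],
        capture_output=True,
        text=True,
    )
    data = json.loads(result.stdout or "{}")
    # Recent pip-audit releases wrap results in a "dependencies" key; older
    # ones emit a bare list. Adjust for the version your pipeline pins.
    deps = data.get("dependencies", []) if isinstance(data, dict) else data
    vulnerable = [d for d in deps if d.get("vulns")]
    for dep in vulnerable:
        print(f"BLOCKED: {dep['name']} {dep['version']} has known vulnerabilities")
    return 1 if vulnerable else 0

if __name__ == "__main__":
    sys.exit(dependency_gate())
```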
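For monitoring and logging, a minimal audit-logging sketch that wraps each AI call in a structured record: capture enough to alert on abuse (latency, response size, a hash of the prompt) without writing raw user content or secrets into log storage. Field names are illustrative.

```python
import hashlib
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_ai_call(user_id: str, prompt: str, call):
    """Invoke an AI call and emit a structured audit record around it."""
    start = time.monotonic()
    response = call(prompt)
    record = {
        "event": "llm_call",
        "user_id": user_id,
        # Hash instead of logging the prompt so PII never lands in logs.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "latency_ms": round((time.monotonic() - start) * 1000),
        "response_chars": len(response),
    }
    logger.info(json.dumps(record))
    return response
```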
Join this session to advance your DevSecOps and AI security skills.