Hacking AI systems — prompt injection to adversarial attacks
🤖 AI Pentest · Advanced · 13 modules · 2.2h · 0 enrolled
A practical course on offensive AI security. Covers the OWASP Top 10 for LLM Applications, prompt injection, jailbreaking, model extraction, adversarial examples, data poisoning, plugin attacks, attacks on RAG pipelines and vector databases, and AI-specific tooling.