AI & LLM Penetration Testing

Hacking AI systems — prompt injection to adversarial attacks

🤖 AI Pentest · Both Advanced 13 modules 2.2h 0 enrolled
The first practical course on offensive AI security. Covers OWASP LLM Top 10, prompt injection, jailbreaking, model extraction, adversarial examples, data poisoning, plugin attacks, RAG/vector DB attacks, and AI-specific tooling.
Instructor
lazyhackers
🔒
Premium Course

Login or subscribe to access this course.

Login View Plans

Curriculum — 7 sections · 13 modules · 2.2h

📄
AI Security Mindset and OWASP LLM Top 10
Lesson · 10m ·Free Preview
📄
How LLMs Work - Pentest Perspective
Lesson · 10m · Locked