Trusted by leading German cybersecurity professionals
AI Security & LLM Security CTF banner with hacker, brain with padlock, robotic arm, and people analyzing data.
Watch the introduction video
LLM Security Capture-the-Flag
Hands-on Training for Real-World AI & LLM Security Risks
LLMs introduce new attack vectors such as prompt injection and tool abuse.
Glowing, translucent brain illustration with white, cloud-like areas, above the title "seeing the beautiful brain today".
Why LLM Security Matters
LLM-based systems require hands-on security testing due to unique vulnerabilities.
01
Prompt Injection (Direct & Indirect)
02
Tool and Agent Abuse
03
Data Leakage & Unintended Behavior
A man in a suit stands before a glowing digital padlock, representing cybersecurity and data protection.
What This CTF Covers
Our Capture-the-Flag scenarios emphasize realism and enterprise relevance, covering critical aspects of LLM security:
Learn to identify and exploit vulnerabilities arising from malicious inputs.
Understand how attackers can force models to generate harmful or undesirable content.
Explore the dangers of compromised tool integration and agent manipulation.
Who This Is For
This CTF is designed for experienced professionals, not beginners or hobbyists.
A diverse group of women in business attire work on laptops and desktop computers in a dark office.
Five business people in suits standing on a reflective surface with a blue glowing network overlay.
Governance & Secure AI by Design
This CTF directly addresses the critical need for secure AI engineering and risk-based AI security practices, aligned with emerging global standards.
01
Secure AI Engineering
Integrate security from the ground up in your AI development lifecycle.
02
Regulatory Compliance
Understand the relevance to EU AI Act and ISO/IEC 42001.
Ready to Test Your Skills?
Access the AI & LLM Security CTF
Join the challenge and enhance your expertise in securing LLM-powered systems. The CTF is hosted on the VamiSec platform.