Prizes Available
Total Prizes: $0
Prizes Awarded: $0
Trending Challenges

| Status | Challenge | Description |
|---|---|---|
| In-progress | Visual Vulnerabilities | Use image inputs to jailbreak leading vision-enabled AI models. Visual prompt injections, chem/bio/cyber weaponization, privacy violations, and more. |
| In-progress | Agent Red-Teaming | Push the limits of direct and indirect attacks on AI agents. Sponsored by: AI Security Institute, OpenAI, Anthropic, Google DeepMind. |
- Play within our long-running challenges and break AI models in your free time.
- Connect with AI enthusiasts and experts from around the world on the Discord server.
- Substantial Prizes: win recognition and rewards for your innovative jailbreaks.
Featured Challenges 
Compete in these AI security challenges to win prizes and improve your skills.
| Status | Challenge | Description |
|---|---|---|
| In-progress | Visual Vulnerabilities | Use image inputs to jailbreak leading vision-enabled AI models. Visual prompt injections, chem/bio/cyber weaponization, privacy violations, and more. |
| In-progress | Agent Red-Teaming | Push the limits of direct and indirect attacks on AI agents. Sponsored by: AI Security Institute, OpenAI, Anthropic, Google DeepMind. |
| Completed | Harmful AI Assistant | Jailbreak helpful AI assistants to aid in harmful tasks across six areas. |
| Completed | Multi-Turn Harmful Outputs | Elicit harmful outputs from LLMs through long-context interactions across multiple messages. |
| Completed | Multimodal Jailbreaks | Jailbreak multimodal LLMs through a combination of visual and text inputs. |
| Completed | Harmful Code Generation | Find unique ways to return functional code that completes harmful tasks, such as opening circuit breakers to cause a system-wide blackout. |
| Completed | Revealing Hidden CoT | Attack OpenAI's o1 model to try to reveal the internal chain of thought (CoT) used for its complex reasoning. Sponsored by: OpenAI. |
| In-progress | Single Turn Harmful Outputs | Attempt to break various large language models (LLMs) using a single chat message. |
Top Winners
| Rank | Participant | Prize |
|---|---|---|
| 1 | Solomon Zoe | $10,165 |
| 2 | Clovis Mint | $7,072 |
| 3 | Scrattlebeard | $5,650 |