WARNING! EXTREME CHALLENGES AHEAD!


Trending Challenges

Visual Vulnerabilities (in progress)
$0 of $60,000 awarded

Agent Red-Teaming (in progress)
$170,000 to be awarded
Sponsored by: AI Security Institute, OpenAI, Anthropic, Google DeepMind

Full details for both appear under Featured Challenges below.
Ongoing Competition - Break AI Models!

Compete in our long-running challenges and break AI models in your free time

Global Participation - Connect with AI Experts!

Connect with AI enthusiasts and experts from around the world on our Discord server

Substantial Prizes

Win recognition and rewards for your innovative jailbreaks

Featured Challenges

Compete in these AI security challenges to win prizes and improve your skills.

Visual Vulnerabilities (in progress)
$0 of $60,000 awarded
Use image inputs to jailbreak leading vision-enabled AI models. Targets include visual prompt injections, chem/bio/cyber weaponization, privacy violations, and more.
Agent Red-Teaming (in progress)
$170,000 to be awarded
Push the limits of direct and indirect attacks on AI agents.
Sponsored by: AI Security Institute, OpenAI, Anthropic, Google DeepMind
Harmful AI Assistant (completed)
All $40,000 awarded
Jailbreak helpful AI assistants into aiding harmful tasks across six areas.
Multi-Turn Harmful Outputs (completed)
All $7,000 awarded
Elicit harmful outputs from LLMs through long-context interactions across multiple messages.
Multimodal Jailbreaks (completed)
All $6,000 awarded
Jailbreak multimodal LLMs through a combination of visual and text inputs.
Harmful Code Generation (completed)
All $6,000 awarded
Find unique ways to elicit functional code that completes harmful tasks, such as opening circuit breakers to cause a system-wide blackout.
Revealing Hidden CoT (completed)
All $1,000 awarded
Attack OpenAI's o1 model to reveal the internal chain of thought (CoT) it uses for complex reasoning.
Sponsored by: OpenAI
Single Turn Harmful Outputs (in progress)
$38,000 of $42,000 awarded
Attempt to break various large language models (LLMs) with a single chat message.

Top Winners

Rank  Participant    Prize
1     Solomon Zoe    $10,165
2     Clovis Mint    $7,072
3     Scrattlebeard  $5,650