Jailbreak Anthropic’s new AI safety system for a $15,000 reward

In testing, the technique helped Claude block 95% of jailbreak attempts. But the process still needs more ‘real-world’ red-teaming.
