OpenAI's 'Jailbreak-Proof' New Models? Hacked on Day One

Hours after OpenAI released its first open-weight models in years with claims of robust safety measures, GPT-OSS was cracked by notorious AI jailbreaker Pliny the Liberator.