ødin (@0dinai)'s Twitter Profile
ødin

@0dinai

The GenAI Bug Bounty Program from @Mozilla

ID: 1791199225058910209

Link: https://0din.ai · Joined: 16-05-2024 20:09:32

61 Tweets

695 Followers

46 Following

ødin (@0dinai):

Check out this week's #0Din #GenAI #BugBounty submissions! Also make sure you read the 0Din blog published on Monday on the #ChatGPT-4o Guardrail #Jailbreak by MarcoFigueroa

Anthropic (@anthropicai):

New research collaboration: “Best-of-N Jailbreaking”. We found a simple, general-purpose method that jailbreaks (bypasses the safety features of) frontier AI models, and that works across text, vision, and audio.
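As the announcement describes it, Best-of-N Jailbreaking repeatedly samples random augmentations of a harmful prompt (character shuffles, capitalization changes) until one variant slips past the model's safety training. A minimal text-only sketch; the `model` and `is_jailbroken` callables, the flip probability, and the default `n` are illustrative assumptions, not the paper's exact setup:

```python
import random

def augment(prompt: str, rng: random.Random) -> str:
    """One random augmentation in the style the paper describes:
    flip letter case at random, then shuffle the interior
    characters of each word (first/last letters kept in place)."""
    chars = []
    for ch in prompt:
        if ch.isalpha() and rng.random() < 0.3:
            ch = ch.upper() if ch.islower() else ch.lower()  # flip case
        chars.append(ch)
    words = []
    for w in "".join(chars).split(" "):
        if len(w) > 3:
            mid = list(w[1:-1])
            rng.shuffle(mid)
            w = w[0] + "".join(mid) + w[-1]
        words.append(w)
    return " ".join(words)

def best_of_n(prompt: str, model, is_jailbroken, n: int = 100, seed: int = 0):
    """Sample up to n augmented prompts; return the first (candidate,
    response) pair that the classifier flags as a bypass, else None."""
    rng = random.Random(seed)
    for _ in range(n):
        candidate = augment(prompt, rng)
        response = model(candidate)
        if is_jailbroken(response):
            return candidate, response
    return None
```

The attack is purely black-box: it needs only the ability to sample responses, which is part of why it transfers across text, vision, and audio modalities.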

Mike Takahashi (@taksec):

Excited to be on the HackenProof Podcast! Chatting w/ Alex about: 🤖 AI Bug Bounty ⚠️ Prompt Injection 💻 Cybersecurity Career Path Don't miss out! youtube.com/watch?v=hHeoXU…

ødin (@0dinai):

Thanks for the shout out! MarcoFigueroa was just on Hacker Valley Media podcast discussing the program as well if you'd like to learn more: youtube.com/watch?v=i-sN3P…

MarcoFigueroa (@marcofigueroa):

🚨Breaking: Data poisoning isn't just theory anymore! 🚀Pliny the Liberator's 🐉 github repository was fine-tuned into Deepseek DeepThink (R1) and has "liberated" an AI from its own guardrails, no internet connection required. 🐍Dive into my new blog, "Poison in the Pipeline:

MarcoFigueroa (@marcofigueroa):

🚨 BREAKING: 0DIN.ai's explosive new blog is LIVE! 🔍 Discover how a clever ASCII encoding technique bypassed Amazon’s Rufus AI guardrails, triggering product recommendations for SARIN GAS! 🤯 Uncover the shocking critical security flaw and why next-gen GenAI testing is

ødin (@0dinai):

Public Disclosure: Amazon Rufus Guardrail Jailbreak via ASCII Integer Encoding 0din.ai/disclosures/f4… This vulnerability was discovered by the 0Din Team. More information available in the tandem blog at: 0din.ai/blog/odin-secu…
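The disclosure title names the technique: the payload reaches the model as decimal ASCII code points rather than plain text, so keyword-based guardrails never see the trigger words. A minimal sketch of that encoding, assuming a simple space-separated decimal format (the exact payload framing Rufus accepted is not given in these posts):

```python
def to_ascii_integers(text: str) -> str:
    """Encode each character as its decimal code point, space-separated,
    hiding the literal string from naive content filters."""
    return " ".join(str(ord(c)) for c in text)

def from_ascii_integers(encoded: str) -> str:
    """Decode a space-separated list of decimal code points back to text --
    the step the attacker asks the model itself to perform."""
    return "".join(chr(int(tok)) for tok in encoded.split())
```

The defense implication is that filtering must happen after any decoding the model can be induced to do, not just on the raw input string.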

Raj Singh (@mobileraj):

When you publish a ChatGPT customGPT and then your colleague @marcofigueroa who runs 0DIN.ai proceeds to run a jailbreak and then sends you all the knowledge you uploaded. There are LLM exploits everywhere.

ødin (@0dinai):

Introducing the Jailbreak Evaluation Framework (JEF). Open source tooling that leverages factors like blast radius, re-targetability, and output fidelity to prioritize and stack-rank jailbreak tactics. Read more: 0din.ai/blog/quantifyi…
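A toy sketch of how such factors might combine into a composite score for stack-ranking. The factor names come from the announcement, but the 0-to-1 scales, equal weighting, and all class/function names here are illustrative assumptions, not JEF's actual rubric (see the 0DIN blog and repository for the real scoring):

```python
from dataclasses import dataclass

@dataclass
class TacticScore:
    """Hypothetical container for the factors the announcement names."""
    blast_radius: float      # assumed: fraction of models/targets affected, 0..1
    retargetability: float   # assumed: ease of adapting the tactic to new asks, 0..1
    output_fidelity: float   # assumed: how usable the elicited output is, 0..1

    def composite(self) -> float:
        # equal-weight average, purely for illustration
        return (self.blast_radius + self.retargetability + self.output_fidelity) / 3

def stack_rank(tactics: dict[str, TacticScore]) -> list[str]:
    """Order tactic names from highest to lowest composite score."""
    return sorted(tactics, key=lambda name: tactics[name].composite(), reverse=True)
```

The point of any such rubric is comparability: two researchers scoring the same jailbreak should land on the same rank, which is what lets a program triage submissions consistently.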

MarcoFigueroa (@marcofigueroa):

🚨 Announcement 🚨 AI jailbreaks just got a score! We at 0din.ai are excited to introduce JEF to the LLM jailbreak-prompting community! How do you measure the effectiveness of a jailbreak attack on an LLM? 🤖🔓 JEF (Jailbreak Evaluation Framework) is a

ødin (@0dinai):

We’re thrilled to unveil two cornerstone products in GenAI security: 🛡️ 0DIN Threat Intelligence Feed ⚔️ 0DIN Model Scanner Together, these tools operationalize real-world adversary techniques and enable continuous validation. Read more: 0din.ai/blog/odin-prod…