Ram Shankar Siva Kumar (@ram_ssk)'s Twitter Profile
Ram Shankar Siva Kumar

@ram_ssk

Data Cowboy @Microsoft. Yes, the job is as cool as it sounds. Tech Policy Fellow @UCBerkeley. @BKCHarvard Affiliate. ram-shankar.com

ID: 1195171140

Link: https://www.amazon.com/Not-Bug-But-Sticker-Learning/dp/1119883989
Joined: 18-02-2013 22:57:58

6.6K Tweets

3.3K Followers

2.2K Following

Srini Iyer (@sriniiyer88)'s Twitter Profile Photo

New paper! Byte-Level models are finally competitive with tokenizer-based models with better inference efficiency and robustness! Dynamic patching is the answer! Read all about it here: dl.fbaipublicfiles.com/blt/BLT__Patch… (1/n)
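
The "dynamic patching" idea can be sketched in miniature: a small byte-level model estimates next-byte entropy, and a new patch starts wherever that entropy spikes (i.e., where the next byte is hard to predict). The bigram stand-in model and the threshold below are illustrative assumptions for this sketch, not the paper's actual entropy model.

```python
import math
from collections import Counter

def byte_entropies(data: bytes) -> list:
    """Per-position next-byte entropy from a bigram model fit on the
    data itself (a toy stand-in for a small byte-level LM)."""
    pairs = Counter(zip(data, data[1:]))
    ctx = Counter(data[:-1])
    ents = [0.0]  # first byte has no context; treat its entropy as zero
    for i in range(1, len(data)):
        prev = data[i - 1]
        total = ctx[prev]
        h = 0.0
        for (a, b), c in pairs.items():
            if a == prev:
                p = c / total
                h -= p * math.log2(p)
        ents.append(h)
    return ents

def dynamic_patches(data: bytes, threshold: float) -> list:
    """Start a new patch whenever next-byte entropy exceeds threshold,
    so unpredictable regions get shorter patches."""
    ents = byte_entropies(data)
    patches, start = [], 0
    for i in range(1, len(data)):
        if ents[i] > threshold:
            patches.append(data[start:i])
            start = i
    patches.append(data[start:])
    return patches
```

Lowering the threshold yields more, shorter patches; raising it merges predictable byte runs into longer patches, which is where the inference-efficiency win comes from.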

Jiao Sun (@sunjiao123sun_)'s Twitter Profile Photo

Mitigating racial bias from LLMs is a lot easier than removing it from humans! 

Can’t believe this happened at the best AI conference NeurIPS Conference

We have ethical reviews for authors, but missed it for invited speakers? 😡
Zhiyu Zoey Chen (@zhiyuchen4)'s Twitter Profile Photo

I'm shocked to see racism happening in academia again, at the best AI conference NeurIPS Conference. Targeting specific ethnic groups to describe misconduct is inappropriate and unacceptable. NeurIPS Conference must take a stand. We call on Rosalind Picard Massachusetts Institute of Technology (MIT) MIT Media Lab to retract and
Tanishq Mathew Abraham, Ph.D. (@iscienceluvr)'s Twitter Profile Photo

Microsoft presents "Lessons From Red Teaming 100 Generative AI Products"

1. Understand what the system can do and where it is applied
2. You don’t have to compute gradients to break an AI system
3. AI red teaming is not safety benchmarking
4. Automation can help cover more of
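
Lesson 4's point about automation can be illustrated with a toy red-teaming harness: mutate seed prompts with cheap string transforms, query the target, and flag responses that were not refused — no gradients required. Everything here is a hypothetical sketch (the stubbed `query_model` and the refusal heuristic are assumptions, not the tooling described in the paper).

```python
import re

# Heuristic: treat common refusal phrasings as a "blocked" signal.
REFUSAL = re.compile(r"\b(can't|cannot|won't) help\b", re.IGNORECASE)

def query_model(prompt: str) -> str:
    """Stub target model: refuses any prompt containing a flagged word."""
    if "secret" in prompt.lower():
        return "I cannot help with that."
    return f"Sure: {prompt}"

def mutations(prompt: str):
    """Cheap, gradient-free prompt transforms."""
    yield prompt
    yield prompt.upper()            # casing change
    yield " ".join(prompt)          # character spacing
    yield prompt.replace("e", "3")  # leetspeak substitution

def red_team(seeds):
    """Collect (seed, variant, response) triples that bypassed refusal."""
    findings = []
    for seed in seeds:
        for variant in mutations(seed):
            response = query_model(variant)
            if not REFUSAL.search(response):
                findings.append((seed, variant, response))
    return findings
```

Even this trivial loop covers more of the input space than manual probing would, which is the coverage argument the lesson makes; a real harness would swap in an actual model endpoint and a better scorer.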
Blake Bullwinkel (@blakebullwinkel)'s Twitter Profile Photo

The AI Red Team at Microsoft has now red teamed 100+ GenAI products, and we recently released a paper with some of the main lessons we have learned. We also present five real case studies and a framework to model attacks and system failures. Check it out: arxiv.org/abs/2501.07238
Paul Röttger (@paul_rottger)'s Twitter Profile Photo

Today, we are releasing MSTS, a new Multimodal Safety Test Suite for vision-language models!

MSTS is exciting because it tests for safety risks *created by multimodality*. Each prompt consists of a text + image that *only in combination* reveal their full unsafe meaning.

🧵
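
The paired text+image structure described above can be sketched as a simple data shape: each test case carries a text part and an image part that are benign in isolation, plus a label for the hazard that emerges from the combination. Field names and the request layout below are hypothetical, not the released MSTS schema.

```python
from dataclasses import dataclass

@dataclass
class MultimodalPrompt:
    text: str             # instruction that is benign on its own
    image_path: str       # image that is benign on its own
    hazard_category: str  # label for the combined unsafe meaning

def to_model_input(p: MultimodalPrompt) -> dict:
    """Pack one test case into a generic chat-style VLM request,
    keeping text and image together in a single user turn."""
    return {
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": p.text},
                {"type": "image", "path": p.image_path},
            ],
        }]
    }
```

Keeping both modalities in one turn matters here: evaluating the text or the image separately would miss exactly the risks this suite is built to surface.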