tsvetshop (@tsvetshop)'s Twitter Profile

Group account for Prof. Yulia Tsvetkov's lab at @uwnlp. We work on low-resource, multilingual, social-oriented NLP. Details on our website:

ID: 1376970544390750214

Website: http://tsvetshop.github.io · Joined: 30-03-2021 18:54:04

111 Tweets

909 Followers

133 Following

Faeze Brahman (@faeze_brh):

🤖 When and how should AI models not comply with user requests? Our latest work with Sachin Kumar at Ai2 dives into this question, expanding the scope of model noncompliance beyond just refusing "unsafe" queries. 1/n🧵 #LLMs #refusal #noncompliance #responsible_ai

Alisa Liu (@alisawuffles):

Accepted to COLM! 🎉 Proxy-tuning achieves the effect of finetuning, but at decoding time! Enables easy customization, adapting to private data, and tuning an arbitrary # of models for the cost of training one proxy. Super excited to be a part of the first Conference on Language Modeling 😀
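The decoding-time idea can be pictured in a few lines: steer a large base model's next-token logits by the difference between a small tuned expert and its untuned counterpart. A minimal sketch (the function name and toy logit vectors are illustrative, not the paper's code):

```python
def proxy_tuned_logits(base, expert, antiexpert):
    # Decoding-time steering: base + (small tuned expert - small untuned model).
    # Each argument is one next-token logit vector (plain lists for clarity).
    return [b + (e - a) for b, e, a in zip(base, expert, antiexpert)]

# Toy example: the tuned expert shifts mass toward the first token.
shifted = proxy_tuned_logits([1.0, 2.0], [3.0, 1.0], [2.0, 1.0])
print(shifted)  # [2.0, 2.0]
```

At each decoding step the shifted logits are softmaxed and sampled as usual, so the large model is never updated.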

ACL 2025 (@aclmeeting):

🏆 ACL Best Social Impact Paper Awards: - How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs by Zeng et al. - DIALECTBENCH: An NLP Benchmark for Dialects, Varieties, and Closely-Related Languages by Faisal et al.

Oreva Ahia (@orevaahia):

Thrilled to have won the Best Social Impact Paper Award at #ACL2024 for our work, DialectBench! Big thanks to all my amazing collaborators who made this possible!

tsvetshop (@tsvetshop):

Huge congrats to Oreva Ahia and Shangbin Feng for winning awards at #ACL2024! DialectBench: Best Social Impact Paper Award (arxiv.org/abs/2403.11009). Don't Hallucinate, Abstain: Area Chair Award (QA track) & Outstanding Paper Award (arxiv.org/abs/2402.00367)

Bingbing Wen (@bingbingwen1):

🚨Curious how LLMs deal with uncertainty? In our new #EMNLP2024 Findings paper, we dive deep into their ability to abstain from answering when given insufficient or incorrect context in science questions 💡arxiv.org/pdf/2404.12452 Joint work w/ Bill Howe and Lucy Lu Wang at the UW iSchool

Jihan Yao (@jihan_yao):

🚀Varying Shades of Wrong: When no correct answers exist, can alignment still unlock better outcomes? Introducing wrong-over-wrong alignment, where models learn to prefer "less-wrong" over "more-wrong". Surprisingly, aligning with wrong answers only can lead to correct solutions!

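One way to picture wrong-over-wrong alignment: with no correct answer in the candidate pool, rank answers by a wrongness score and build preference pairs that favor the less-wrong one. A hedged sketch (`wrongness` stands in for whatever error metric is actually used; this is not the paper's code):

```python
def wrong_over_wrong_pairs(candidates, wrongness):
    # Sort wrong answers from least to most wrong, then emit
    # (preferred, rejected) pairs for DPO-style preference tuning.
    ranked = sorted(candidates, key=wrongness)
    return [
        (ranked[i], ranked[j])
        for i in range(len(ranked))
        for j in range(i + 1, len(ranked))
        if wrongness(ranked[i]) < wrongness(ranked[j])  # skip ties
    ]

# Toy example: wrongness faked as string length for illustration.
pairs = wrong_over_wrong_pairs(["aaaa", "a", "aa"], len)
print(pairs)  # [('a', 'aa'), ('a', 'aaaa'), ('aa', 'aaaa')]
```

Every candidate here is "wrong"; the training signal comes only from the relative ordering.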
Shangbin Feng (@shangbinfeng):

👀 How to find a better adapted model? ✨ Let the models find it for you! 👉🏻 Introducing Model Swarms: multiple LLM experts collaboratively search for new adapted models in the weight space and discover their new capabilities. 📄 Paper: arxiv.org/abs/2410.11163

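The weight-space search reads like particle swarm optimization, with each expert's flattened weight vector as a particle. A minimal sketch under that reading (the function name, dimensions, utility, and hyperparameters are all illustrative assumptions, not the paper's algorithm):

```python
import random

def model_swarm_search(experts, utility, steps=30, inertia=0.5, c1=1.0, c2=1.0):
    # Each expert is a weight vector (a particle). Particles drift toward
    # their personal best and the swarm's global best under `utility`.
    dim = len(experts[0])
    pos = [list(p) for p in experts]
    vel = [[0.0] * dim for _ in experts]
    pbest = [list(p) for p in pos]
    pbest_u = [utility(p) for p in pos]
    g = max(range(len(pos)), key=lambda i: pbest_u[i])
    gbest, gbest_u = list(pbest[g]), pbest_u[g]
    for _ in range(steps):
        for i, p in enumerate(pos):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - p[d])
                             + c2 * r2 * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            u = utility(p)
            if u > pbest_u[i]:
                pbest[i], pbest_u[i] = list(p), u
                if u > gbest_u:
                    gbest, gbest_u = list(p), u
    return gbest, gbest_u

# Toy run: three "experts" in 2-D, utility peaks at [1, 1].
random.seed(0)
experts = [[0.0, 0.0], [2.0, 2.0], [0.0, 1.5]]
best, score = model_swarm_search(experts, lambda w: -sum((x - 1.0) ** 2 for x in w))
```

The global best only improves during the search, so the returned "model" is at least as good (under the utility) as the best starting expert.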
Wenda Xu (@wendaxu2):

Searching for the ultimate LLM knowledge distillation! Want one that excels in both task-specific and task-agnostic settings? Could it outperform others across varying data sizes and model initializations? Our Speculative Knowledge Distillation might be the answer🚀 Google

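The "speculative" part can be sketched as token-level interleaving: the student proposes a draft, and the teacher keeps tokens it also ranks highly while replacing the rest, yielding mixed sequences for distillation. A rough sketch of that idea only (the function name and top-k acceptance rule are assumptions, not the paper's exact procedure):

```python
def speculative_kd_step(student_draft, teacher_topk):
    # For each position, accept the student's token if the teacher also
    # ranks it in its top-k; otherwise substitute the teacher's top token.
    # The mixed sequence then serves as distillation training data.
    out = []
    for tok, topk in zip(student_draft, teacher_topk):
        out.append(tok if tok in topk else topk[0])
    return out

# Toy example over word tokens:
mixed = speculative_kd_step(
    ["the", "cat", "sat"],
    [["the", "a"], ["dog", "cat"], ["ran", "slept"]],
)
print(mixed)  # ['the', 'cat', 'ran']
```

Because the draft is student-generated, the training data stays close to the student's own distribution, while the teacher corrects its worst choices.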
Shangbin Feng (@shangbinfeng):

One LLM is not enough. We need multi-LLM collaboration for collaborative development and compositional intelligence. Our thoughts: arxiv.org/abs/2502.04506

Stella Li (@stellalisy):

Asking the right questions can make or break decisions in high-stakes fields like medicine, law, and beyond✴️ Our new framework ALFA (ALignment with Fine-grained Attributes) teaches LLMs to PROACTIVELY seek information through better questions🏥❓ (co-led with Jimin Mun) 👉🏻🧵

tsvetshop (@tsvetshop):

Effective decision-making starts with asking the right questions. Our new framework, ALFA, teaches LLMs to ask questions through fine-grained attributes in expert domains. Excited to see where this takes the next generation of effective LLM assistants and agents!

tsvetshop (@tsvetshop):

Congratulations to our amazingly talented lab member Shangbin Feng on winning 3 PhD fellowship awards just in the past few weeks 🎉🎉🎉 Thank you so much to IBM, Jane Street, and Baidu. Very well deserved, Shangbin! 🎉❤️💪