Leonie Weissweiler (@laweissweiler)'s Twitter Profile
Leonie Weissweiler

@laweissweiler

postdoc @UT_Linguistics with @kmahowald | PhD @cislmu, prev. @Princeton @LTIatCMU @CambridgeLTL | computational linguistics, construction grammar, morphology

ID: 1212484521678929920

Link: http://leonieweissweiler.github.io/ | Joined: 01-01-2020 21:23:39

224 Tweets

1.1K Followers

320 Following

Leonie Weissweiler (@laweissweiler):

LLMs have been tested for their learning of rules, but investigating a phenomenon with natural variation shows that what may look like rule learning is actually an analogical process! Read the paper to see how humans and LLMs decide between "informativeness" and "informativity"! 🤔

Leonie Weissweiler (@laweissweiler):

✨New paper ✨ RoBERTa knows the difference between "so happy that you're here", "so certain that I'm right" and "so happy that I cried"! Exciting result (and more) from Josh Rozner along with Cory Shain, Kyle Mahowald and myself, go check it out!

Leshem Choshen C U @ ICLR 🤖🤗 (@lchoshen):

Models have preferences, like giving inanimate 📦 stuff to animate 👳 recipients. Is it that they just saw a lot of such examples in pretraining, or is it generalization and deeper understanding? alphaxiv.org/pdf/2503.20850 Qing Yao, Kanishka Misra 🌊, Leonie Weissweiler, Kyle Mahowald

Leonie Weissweiler (@laweissweiler):

We can use LMs to test hypotheses about the language network and how everything connects! Here, we find that dative alternation preferences are learned from dative-specific input statistics *and* from more general short-first preferences. Great work by Qing Yao, go follow him!

Leonie Weissweiler (@laweissweiler):

🌍📣🥳 I could not be more excited for this to be out! With a fully automated pipeline based on UD, 43 non-Indo-European languages, and the best LLMs only scoring 90%, I hope this will be a challenging and interesting benchmark for multilingual NLP. Go test your language models!

Catherine Arnett (@linguist_cat):

Our Goldfish models beat out Llama 3 70B, Aya 32B, and Gemma 27B for 14 languages 🤩 Really highlights the blessings of monolinguality! Great paper, Jaap Jumelet and Leonie Weissweiler - this will enable some extremely exciting work!

Stanford NLP Group (@stanfordnlp):

For this week’s NLP Seminar, we are thrilled to host Leonie Weissweiler to talk about "Rethinking Linguistic Generalisation in LLMs". When: Thurs 4/10, 11am PT. Non-Stanford affiliates registration form (closes at 9am PT on the day of the talk): forms.gle/Ecc39jiuVMviby…

Kwanghee Choi (@juice500ml):

Can self-supervised models 🤖 understand allophony 🗣? Excited to share my new #NAACL2025 paper: Leveraging Allophony in Self-Supervised Speech Models for Atypical Pronunciation Assessment arxiv.org/abs/2502.07029 (1/n)

Valentin Hofmann (@vjhofmann):

Thrilled to share that our paper is out in PNAS today! 🎉 We show that linguistic generalization in language models can be due to underlying analogical mechanisms. Huge shoutout to my amazing co-authors Leonie Weissweiler, David Mortensen, Hinrich Schütze, and Janet Pierrehumbert!

Ai2 (@allen_ai):

Do LLMs learn language via rules or analogies? This could be a surprise to many – models rely heavily on stored examples and draw analogies when dealing with unfamiliar words, much as humans do. Check out this new study led by Valentin Hofmann to learn how they made the discovery 💡

Joshua Rozner (@jsrozner):

BabyLM's first constructions: new study on usage-based language acquisition in LMs w/ Leonie Weissweiler and Cory Shain. Simple interventions show that LMs trained on cognitively plausible data acquire diverse constructions (cxns). babyLM 🧵