Kyle Mahowald (@kmahowald)'s Twitter Profile
Kyle Mahowald

@kmahowald

UT Austin computational linguist. cognition, psycholinguistics, data, NLP, crosswords.

ID: 22515678

Link: http://mahowak.github.io | Joined: 02-03-2009 18:28:50

638 Tweets

1.1K Followers

719 Following

Tom McCoy (@rtommccoy)'s Twitter Profile Photo

One of my major reasons for becoming a scientist was to build a larger platform for making jokes. Happy to say it's all coming together!! (The joke in question is on page 8 here: pnas.org/doi/10.1073/pn…)

Greg Durrett (@gregd_nlp)'s Twitter Profile Photo

Two awards for UT Austin papers! Salience prediction of inquisitive questions, by Yating Wu, Ritika Mangla, Alex Dimakis, me, and Jessy Li; learning AANNs and insights about grammatical generalization in pre-training, by Kanishka Misra 🌊 & Kyle Mahowald. Congrats to all the awardees!

Yating Wu (@yatingwu96)'s Twitter Profile Photo

I'm thrilled to announce that our paper "Which questions should I answer? Salience Prediction of Inquisitive Questions" has won an Outstanding Paper Award at EMNLP 2024 🥳🥳. Thank you so much to my amazing co-authors and advisors!!! Ritika Mangla, Alex Dimakis, Greg Durrett, Jessy Li

Jessy Li (@jessyjli)'s Twitter Profile Photo

Thrilled that we won an 🥂Outstanding Paper Award at #EMNLP2024! Super validating for using computational methods to investigate discourse processing via QUDs. Super proud of my students Yating Wu and Ritika Mangla, and of our amazing team, Alex Dimakis and Greg Durrett.

Kyle Mahowald (@kmahowald)'s Twitter Profile Photo

Grad school is the time to find a friend who will not only take the time to ask after a small typo you made but will then reject your explanation for the typo and post the exchange on Twitter.

UT Linguistics Dept (@ut_linguistics)'s Twitter Profile Photo

UT Linguistics is on a roll at #EMNLP2024! Congrats to Prof. Kyle Mahowald and Prof. Jessy Li, and their respective coauthors, for winning Outstanding Paper Awards!

Jackson Petty (@jowenpetty)'s Twitter Profile Photo

Happening in 15 mins at Riverfront 102! - Interested in how training on code impacts downstream performance? - Curious about how we evaluate OOD compositional generalization? - Want to know if LLMs are conscious? At least two of these questions will be answered! Roll up!

Kyle Mahowald (@kmahowald)'s Twitter Profile Photo

Important early results on whether speech-only philosophers can refer. Awaiting Tal’s future work on extending this paradigm to a multimodal @MMandelkern.

Christopher Potts (@chrisgpotts)'s Twitter Profile Photo

I've posted the practice run of my LSA keynote. My core claim is that LLMs can be useful tools for doing close linguistic analysis. I illustrate with a detailed case study, drawing on corpora, targeted evaluations, and causal intervention-based analyses: youtu.be/DBorepHuKDM

Quanta Magazine (@quantamagazine)'s Twitter Profile Photo

Large language models may not be so omnipotent after all. New research shows that LLMs, like humans, prefer to learn some linguistic patterns over others. Ben Brubaker reports: quantamagazine.org/can-ai-models-…

Julie Kallini ✨ @ ICLR 2025 ✈️ (@juliekallini)'s Twitter Profile Photo

"Mission: Impossible" was featured in Quanta Magazine! Big thank you to Ben Brubaker for the wonderful article covering our work on impossible languages. Ben was so thoughtful and thorough in all our conversations, and it really shows in his writing!

Richard Futrell (@rljfutrell)'s Twitter Profile Photo

New paper, with Kyle Mahowald, on LMs and linguistics that conveys our excitement about what the present moment means for linguistics. Some of the questions and our answers are summarized in the slide above. arxiv.org/abs/2501.17047

Kyle Mahowald (@kmahowald)'s Twitter Profile Photo

The most important thing we do here is trace some history, reminding us that a line can be traced to modern neural nets/AI from basic science research, not in engineering but in linguistics and cognitive science. Much of that fundamental early work was funded by the (currently paused) NSF.

Gašper Beguš (@begusgasper)'s Twitter Profile Photo

Kyle Mahowald Even Darwin's evolutionary theory was catalyzed by historical linguistics, which introduced the model of an ancestor language that no longer exists giving rise to several daughter languages, decades before Darwin.

UBC Linguistics (@ubclinguistics)'s Twitter Profile Photo

Can AI Models Show Us How People Learn? Incoming UBC Linguistics Assistant Professor I Papadimitriou has something to say about this question: quantamagazine.org/can-ai-models-…