David Mortensen (@dmort27)'s Twitter Profile
David Mortensen

@dmort27

I make colorless green GPUs sleep brriously @LTIatCMU. phonology • morphology • language change • #NLProc data resources.

ID: 47250870

Link: http://cs.cmu.edu/~dmortens/ · Joined: 15-06-2009 03:39:23

5.5K Tweets

2.2K Followers

1.1K Following

Andreas Opedal (@opedalandreas)'s Twitter Profile Photo

💥 New #EMNLP2024 main paper 💥 It’s pretty established by now that the in-context log probability of a word, called surprisal, is predictive of the time the word takes to read. But how important is contextual information, really? Summary below 🧵
arxiv.org/abs/2409.08160
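The surprisal measure the thread refers to has a simple definition: the negative log probability of a word given its preceding context. A minimal sketch using a toy maximum-likelihood bigram model (the corpus and model here are illustrative assumptions, not from the paper):

```python
import math
from collections import Counter

# Toy corpus and maximum-likelihood bigram model (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def surprisal(word, prev):
    """Surprisal in bits: -log2 P(word | prev) under the bigram model."""
    p = bigrams[(prev, word)] / contexts[prev]
    return -math.log2(p)

# "cat" follows "the" in 2 of the 4 non-final occurrences of "the",
# so P(cat | the) = 0.5 and the surprisal is 1 bit.
print(surprisal("cat", "the"))  # → 1.0
```

Higher surprisal means the word is less predictable from its context; reading-time studies regress observed reading times on this quantity.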
Joel Mire (@joel_mire)'s Twitter Profile Photo

I’m thrilled to be at EMNLP this week presenting our paper, “The Empirical Variability of Narrative Perceptions of Social Media Texts”

I’ll be giving an oral presentation during the CSS + Cultural Analytics Session 2 (Nov 14).

Paper: aclanthology.org/2024.emnlp-mai… 🧵(1/12)
Shinji Watanabe (@shinjiw_at_cmu)'s Twitter Profile Photo

We are thrilled to announce the Interspeech 2025 URGENT Challenge, starting on 11/15! 
Join us in building universal speech enhancement models to tackle in-the-wild speech data using large-scale, multilingual data. Details: urgent-challenge.github.io/urgent2025/
Jason Eisner (@adveisner)'s Twitter Profile Photo

Who wants to come to JHU and do a postdoc with me?? I'm always enthusiastic about new modeling / inference / algorithmic ideas in NLP/ML, and also about selected applications.

EMNLP 2025 (@emnlpmeeting)'s Twitter Profile Photo

#EMNLP2024 Best Paper 1/5: An image speaks a thousand words, but can everyone listen? On image transcreation for cultural relevance

Simran Khanuja (@simi_97k)'s Twitter Profile Photo

Thank you so much EMNLP 2025 for this wonderful recognition! I’m so honored and humbled 💕 Thanks Graham Neubig for your support throughout! We’ve been working on this for 1.5 years, and everyone who has spoken with me in the recent past knows how passionately I feel about this.

David Mortensen (@dmort27)'s Twitter Profile Photo

This is a really great paper, on which we are building a number of other projects, and it was really deserving of this award. Congratulations, William Chen, Shinji Watanabe🗾🚀, and team!

So Yeon (Tiffany) Min on Industry Job Market (@soyeontiffmin)'s Twitter Profile Photo

I am on the industry job market and plan to interview around next March. I am attending NeurIPS Conference, and I hope to meet you there if you are hiring! My website: soyeonm.github.io. Short bio: I am a 5th-year PhD student at CMU MLD, working with Russ Salakhutdinov.

Valentin Hofmann (@vjhofmann)'s Twitter Profile Photo

📢 New paper 📢

What generalization mechanisms shape the language skills of LLMs?

Prior work has claimed that LLMs learn language via rules.

We revisit the question and find that superficially rule-like behavior of LLMs can be traced to underlying analogical processes.

🧵
David Mortensen (@dmort27)'s Twitter Profile Photo

LLMs learn to generalize morphological derivation by analogy from exemplars rather than rules (which is probably also true of humans). However, their behavior is different from human morphological behavior in interesting ways. Read our paper to find out how.
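The rule-vs-analogy contrast can be illustrated with a toy sketch (an assumption for illustration, not the paper's actual model): instead of applying one default rule, a novel adjective receives the nominalizing suffix of its most similar stored exemplar, with similarity measured as the length of the shared word-final substring.

```python
# Stored exemplars of adjective → nominalizing suffix (illustrative).
exemplars = {
    "scarce": "ity",   # scarcity
    "sparse": "ity",   # sparsity
    "happy": "ness",   # happiness
    "kind": "ness",    # kindness
}

def shared_suffix_len(a, b):
    """Length of the longest common word-final substring of a and b."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def derive_by_analogy(adjective):
    """Return the suffix of the most similar stored exemplar."""
    best = max(exemplars, key=lambda e: shared_suffix_len(adjective, e))
    return exemplars[best]

print(derive_by_analogy("terse"))  # → ity  (most similar to "sparse")
print(derive_by_analogy("blind"))  # → ness (most similar to "kind")
```

Under a single rule the two novel words would get the same suffix; under analogy, each patterns with its nearest neighbors in the exemplar store.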

Pranjal Aggarwal (@pranjalaggarw16)'s Twitter Profile Photo

LLMs often generate incorrect code. What if, instead, they could generate provably correct code? Presenting AlphaVerus: a self-reinforcing method that automatically learns to generate mathematically correct code using inference-time search and verifier feedback. 🧵
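The generate-and-verify pattern the tweet describes can be sketched generically (a minimal illustration of the loop, not AlphaVerus itself; a real system would use an LLM as the generator and a formal verifier such as Verus, whereas both are stand-ins here):

```python
def verify(candidate):
    """Stand-in verifier: accepts only a correct my_abs implementation.
    Returns None on success, or an error message as feedback."""
    try:
        ns = {}
        exec(candidate, ns)
        fn = ns["my_abs"]
        for x in (-3, 0, 5):
            if fn(x) != abs(x):
                return f"counterexample: my_abs({x}) != {abs(x)}"
        return None
    except Exception as e:
        return f"error: {e}"

def generate(feedback):
    """Stand-in generator: proposes a buggy first draft, then a repair
    once it receives verifier feedback (an LLM would do this step)."""
    if feedback is None:
        return "def my_abs(x):\n    return x"
    return "def my_abs(x):\n    return -x if x < 0 else x"

def search(max_iters=5):
    """Propose, verify, and feed errors back until a candidate passes."""
    feedback = None
    for _ in range(max_iters):
        candidate = generate(feedback)
        feedback = verify(candidate)
        if feedback is None:
            return candidate  # passes all of the verifier's checks
    return None

print(search() is not None)  # → True
```

The key property is that only verifier-approved candidates are ever returned, so correctness rests on the verifier rather than on the generator's reliability.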