Sathvik (@sathvikn4) 's Twitter Profile
Sathvik

@sathvikn4

computational psycholinguistics @ umd, nsf grfp fellow | he/him | fmr: cognitive & computer science @ uc berkeley | also on bluesky (sathvik@)

ID: 931956273540501504

Link: https://sathvikn.github.io/ · Joined: 18-11-2017 18:44:25

1.1K Tweets

803 Followers

1.1K Following

Dayeon (Zoey) Ki (@zoeykii) 's Twitter Profile Photo

1/ How can a monolingual English speaker 🇺🇸 decide if a French translation 🇫🇷 is good enough to be shared? 

Introducing ❓AskQE❓, an #LLM-based Question Generation + Answering framework that detects critical MT errors and provides actionable feedback 🗣️ 

#ACL2025
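To make the QG+QA idea concrete, here is a minimal sketch of that pipeline shape; all names and the stub logic below are my assumptions for illustration, not AskQE's actual implementation (which uses LLMs for both steps): generate questions from the English source, answer them against the source and against a back-translation of the MT output, and flag questions whose answers disagree.

```python
# Hypothetical sketch of a QG+QA quality-estimation loop (illustrative only).

def generate_questions(source: str) -> list[str]:
    # Stand-in for an LLM question generator: one probe question per word.
    return [f"What does the sentence say about '{w}'?" for w in source.split()]

def answer(question: str, context: str) -> str:
    # Stand-in for an LLM QA model: reports whether the probed word
    # survives in the given context.
    probe = question.split("'")[1]
    return "present" if probe in context.split() else "missing"

def flag_critical_errors(source: str, backtranslation: str) -> list[str]:
    # Questions answered differently on source vs. back-translated MT
    # output signal potentially critical translation errors.
    flagged = []
    for q in generate_questions(source):
        if answer(q, source) != answer(q, backtranslation):
            flagged.append(q)
    return flagged
```

A monolingual user never needs to read the French: mismatched answers alone indicate the translation dropped or changed source content.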
Kayo Yin (@kayo_yin) 's Twitter Profile Photo

Happy to announce the first workshop on Pragmatic Reasoning in Language Models — PragLM @ COLM 2025! 🧠🎉 How do LLMs engage in pragmatic reasoning, and what core pragmatic capacities remain beyond their reach? 🌐 sites.google.com/berkeley.edu/p… 📅 Submit by June 23rd

Byung-Doh Oh (@byungdoh) 's Twitter Profile Photo

Subword token granularity influences 1) word probabilities at initialization and 2) token co-occurrences during training. How does this impact LM surprisal? We tackled this using Mamba-2 models, which can be flexibly trained on sequences of different lengths. #ACL2025 #ACL2025NLP

Byung-Doh Oh (@byungdoh) 's Twitter Profile Photo

Is the slowdown in reading at low-frequency words separable from predictability effects? Studies using LM surprisal like doi.org/10.1162/opmi_a… suggest so. Our new JML paper doi.org/10.1016/j.jml.… instead argues it may depend on the LM, which is itself sensitive to frequency.
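For reference, the surprisal measure these reading-time studies use, as standardly defined (the framing comment below is my gloss of the tweet's question, not a claim from the paper):

```latex
% Surprisal of word w_t in context: the LM-based predictability measure.
s(w_t) = -\log_2 P(w_t \mid w_1, \ldots, w_{t-1})
% A pure frequency effect would instead track the context-free -\log_2 P(w_t);
% the question is whether these two contributions to reading time are separable.
```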

Dr Shiyu He (@shiyuhe1) 's Twitter Profile Photo

Schotter, E.R., Dillon, B. A beginner’s guide to eye tracking for psycholinguistic studies of reading. Behav Res 57, 68 (2025). doi.org/10.3758/s13428…

UMD CLIP Lab (@clipumd) 's Twitter Profile Photo

Researchers at the Univ. of Maryland are examining how and why some misleading narratives proliferate via strategic use of certain mainstream news articles. The paper, coauthored by Pranav Goel, Jon Green, 🇺🇦 [email protected] & Philip Resnik, was published in Nature Human Behaviour. nature.com/articles/s4156…

Isabel Papadimitriou (@isabelpapad) 's Twitter Profile Photo

Check out our ACL paper! We use Shapley interactions to see which words (and phones) interact non-linearly -- what we lose when we assume linear relationships between features. Chat with Diganta in Vienna!

Harvey Yiyun Fu (@harveyiyun) 's Twitter Profile Photo

LLMs excel at finding surprising “needles” in very long documents, but can they detect when information is conspicuously missing?

🫥AbsenceBench🫥 shows that even SoTA LLMs struggle on this task, suggesting that LLMs have trouble perceiving “negative space” in documents.

paper:
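A toy sketch of the task framing (my construction, not the benchmark's code): given an original document and a version with some lines deleted, the model must name exactly the deleted lines. A programmatic diff solves this trivially, which is what makes the LLM failure striking.

```python
# Illustrative baseline for an AbsenceBench-style task: recover which
# lines of `original` are absent from `modified`, counting duplicates.

def removed_lines(original: str, modified: str) -> list[str]:
    remaining = modified.splitlines()
    missing = []
    for line in original.splitlines():
        if line in remaining:
            remaining.remove(line)  # consume one matching occurrence
        else:
            missing.append(line)
    return missing
```

An LLM sees only the two texts in its context window and must report the "negative space" itself; no diff tool is available to it.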
Kanishka Misra 🌊 (@kanishkamisra) 's Twitter Profile Photo

I will unfortunately have to skip SCiL this year, but I am thrilled to share that Jwalanthi will be presenting this work by her, Rohan Jha, me, and Kyle Mahowald on a tool that allows you to project contextualized embeddings from LMs to interpretable semantic spaces!

Philip Resnik (@psresnik) 's Twitter Profile Photo

To be presented at ACL 2025: Large Language Models Are Biased Because They Are Large Language Models. Article: doi.org/10.1162/coli_a… Short (8min) video: youtube.com/watch?v=WLSuhe… #ACL2025NLP #NLProc #LLMs

Rupak (@rupak_53) 's Twitter Profile Photo

Linguistic theory tells us that common ground is essential to conversational success. But to what extent is it essential? Can LLMs detect when humans lose common ground in conversation?

Our ACL 2025 (Oral) paper explores these questions on real-world data.

#ACL2025NLP #ACL2025
Dallas Card (@dallascard) 's Twitter Profile Photo

I am delighted to share our new #PNAS paper, with Gaurav Kamath, Michelle Yang, Siva Reddy, and Morgan Sonderegger, looking at whether age matters for the adoption of new meanings. That is, as words change meaning, does the rate of adoption vary across generations? pnas.org/doi/10.1073/pn…

Neha Srikanth (@nehasrikanth) 's Twitter Profile Photo

When questions are poorly posed, how do humans vs. models handle them? Our #ACL2025 paper explores this + introduces a framework for detecting and analyzing poorly-posed information-seeking questions! 

Joint work with Jordan Boyd-Gräber (in Vienna for ACL 2025) & Rachel Rudinger!

🔗 aclanthology.org/2025.acl-long.…
Sathvik (@sathvikn4) 's Twitter Profile Photo

Friends at CogSci, I'll be in SF tomorrow and would love to catch up if you're there! I should be around from lunch onwards; my DMs/texts/emails are open.

Shota Momma (@shota_momma) 's Twitter Profile Photo

A paper with Vic Ferreira and Norvin Richards is now out in JML! (1) Speakers syntactically encode zero complementizers as cognitively active mental objects. (2) No evidence that LLMs capture cross-constructional generalizations about zero complementizers. nam10.safelinks.protection.outlook.com/?url=https%3A%…