
Phoebe Chen
@phoebehchen
graduate student at NYU psychology with David Poeppel. studying semantic composition.
🌈
ID: 1241102530408599553
20-03-2020 20:41:22
36 Tweets
116 Followers
193 Following

New manuscript by Viktor Kewenig, Prof. Gabriella Vigliocco, and The LAB Lab 🏳️🌈🏳️⚧️ shows that context can change whether a word is processed as abstract or concrete in the brain: doi.org/10.1101/2022.0… Ecological Brain

New blog post: Collective Intelligence for Deep Learning. Recently, Yujin Tang and I published a paper about how ideas like swarm behavior, self-organization, and emergence are gaining traction in deep learning. I wrote a blog post summarizing the key ideas: blog.otoro.net/2022/10/01/col…

Very excited to present this #ECoG work at SNL Annual Meeting today. Swing by if you’re here and interested in sentences/lexical access - poster C47 at 10:15! Some of the cleanest results I’ve ever gotten. Work with Adeen Flinker 🇮🇱🇺🇦🎗️ #snl2022 #psycholinguistics #neuroscience

New paper with Ted Gibson (Language Lab MIT), Roger Levy, and Richard Futrell on the comprehension of recursive syntax in language. pnas.org/doi/10.1073/pn… 🧵1/

New work from me, Tal Linzen, and @linguistbrian to appear at CoNLL: arxiv.org/abs/2210.12187 LM surprisal underestimates garden path effects. Is surprisal a bad theory of processing, or are LM estimates of surprisal just misaligned with human prediction? 🧵below:

Happy to share the preprint of our latest work, “Amplitude modulation perceptually distinguishes music and speech,” with David Poeppel, Xiangbin Teng, and M. Florencia Assaneo: psyarxiv.com/juzrh/

Transformers are limited in their ability to generalize over long temporal dependencies. In our NeurIPS Conference paper (#MemARI), we propose a framework for effectively integrating findings from human memory into LMs / transformers. (1/2) arxiv.org/abs/2210.01869

Another day, another opinion essay about ChatGPT in The New York Times. This time, Noam Chomsky and colleagues weigh in on the shortcomings of language models. Unfortunately, this is not the nuanced discussion one could have hoped for. 🧵 1/ nytimes.com/2023/03/08/opi…

What’s a 𝑓𝑎𝑠ℎ𝑖𝑜𝑛 𝑑𝑜𝑛𝑘𝑒𝑦? In our new study, Phoebe Chen, David Poeppel, and I investigate how humans utilize lexical-semantic features to interpret novel noun-noun compounds and evaluate the behavior of large language models. Check it out 🫏: tinyurl.com/42tbdxcr

Excited to release our large dataset on memory for narratives!🧠💭 The Naturalistic Free Recall Dataset: four stories, hundreds of participants, and high-fidelity transcriptions. Paper: doi.org/10.31234/osf.i… w/ Phoebe Chen (co-first), Todd Gureckis, David Poeppel, and Vy Vo

This dataset is now out in Scientific Data, following genuinely valuable comments from Sam Nastase (shorturl.at/Id6Gb) and one anonymous reviewer. We hope it facilitates new insights into human memory under naturalistic conditions! nature.com/articles/s4159…