Magdalena Kachlicka @mkachlicka.bsky.social

@mkachlicka

Postdoctoral Researcher @bbkpsychology @audioneurolab working on speech+sounds+brains 🧠 cognitive sci, auditory neuroscience, brain imaging, language, methods

ID: 494041705

Link: https://mkachlicka.github.io/ · Joined: 16-02-2012 13:19:22

2.2K Tweets

925 Followers

2.2K Following

Ekdeep Singh Lubana (@ekdeepl)

Paper alert—accepted as a NeurIPS *Spotlight*!🧵👇 We build on our past work relating emergence to task compositionality and analyze the *learning dynamics* of such tasks: we find there exist latent interventions that can elicit them well before input prompting works! 🤯

Edmund Lalor (@edmundlalor)

New lab preprint! This one has been a long time coming. On the generative mechanisms underlying the cortical tracking of natural speech: a position paper. osf.io/preprints/osf/… A brief thread below…

adam morgan (same handle at oo-blay eye-skay) (@adumbmoron)

🧠🗞️🗣️Finally out! Paper with a way-too-long name for social media. How does the brain turn words into sentences? We tracked words in participants' brains while they produced sentences, and found some unexpectedly neat patterns. 🧵1/9 rdcu.be/epA1J in Communications Psychology

Yike Wang (@yikewang_)

LLMs are helpful for scientific research — but will they continuously be helpful? Introducing 🔍ScienceMeter: current knowledge update methods enable 86% preservation of prior scientific knowledge, 72% acquisition of new, and 38%+ projection of future (arxiv.org/abs/2505.24302).

Kento Nishi|AI Researcher, LiveTL+HyperChat Dev🐔 (@kento_nishi)

🚨 ICML 2025 Paper! 🚨 Excited to announce "Representation Shattering in Transformers: A Synthetic Study with Knowledge Editing." 🔗 arxiv.org/abs/2410.17194 We uncover a new phenomenon, Representation Shattering, to explain why KE edits negatively affect LLMs' reasoning. 🧵👇

Joseph Imperial (@josephimperial_)

🚨 New global collaboration & dataset paper! UniversalCEFR: Enabling Open Multilingual Research on Language Proficiency Assessment 🌍 We introduce UniversalCEFR, an initiative to build a growing, open, multilingual, and multidimensional resource for CEFR-based language

Martin Hebart (@martin_hebart)

Great work by Changde Du from Huiguang He's lab at the Chinese Academy of Sciences. How similar are visual and conceptual representations in (multimodal) large language models to those found in humans? It turns out quite similar! nature.com/articles/s4225…

Rosario Tomasello (@tomasel9)

👉New publication on the topic of #experimentalpragmatics! The role of #intonation in recognizing speaker's #communicativefunction! Rising vs falling pitch!🎙️ Work led by Caterina Villani (she/her) with @isabellaboux.bsky.social and #FPulvermüller shorturl.at/NDXFM Brain Language Lab

Omer Moussa (@ohmoussa2)

🚨Excited to share our latest work published at Interspeech 2025: “Brain-tuned Speech Models Better Reflect Speech Processing Stages in the Brain”! 🧠🎧 arxiv.org/abs/2506.03832 W/ Mariya Toneva We fine-tuned speech models directly with brain fMRI data, making them more brain-like.🧵
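
The "fine-tuned speech models directly with brain fMRI data" step above can be pictured with a minimal sketch: a pretrained speech encoder gets a small head that predicts fMRI voxel responses to the same audio, and both are trained with a regression loss. This is an illustrative assumption, not the paper's pipeline; the wav2vec2 checkpoint, mean pooling, voxel count, and MSE loss below are all placeholders.

```python
# Hypothetical sketch of "brain-tuning" a speech model with fMRI targets.
# Checkpoint, shapes, pooling, and loss are placeholder assumptions, not the paper's setup.
import torch
from transformers import Wav2Vec2Model

N_VOXELS = 5000        # assumed number of fMRI voxels per volume
SAMPLE_RATE = 16_000   # wav2vec2 expects 16 kHz audio

encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
voxel_head = torch.nn.Linear(encoder.config.hidden_size, N_VOXELS)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(voxel_head.parameters()), lr=1e-5
)

# Dummy batch: two 2-second clips paired with (random stand-in) voxel responses.
audio = torch.randn(2, 2 * SAMPLE_RATE)
fmri = torch.randn(2, N_VOXELS)

for step in range(3):  # a few illustrative updates
    features = encoder(audio).last_hidden_state  # (batch, frames, hidden)
    pooled = features.mean(dim=1)                # average over time frames
    pred = voxel_head(pooled)                    # predicted voxel responses
    loss = torch.nn.functional.mse_loss(pred, fmri)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: mse={loss.item():.4f}")
```

After tuning of this kind, the encoder's layer-wise representations can be compared against brain responses again, which is the sort of "more brain-like" comparison the tweet refers to.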

Nature Human Behaviour (@naturehumbehav)

In this Article, Masahiro Yamashita et al. explore how conversational content is represented in the brain, revealing brain activity patterns with contrasting timescales for speech production and comprehension. nature.com/articles/s4156…

João Veríssimo (@_joaoverissimo)

New paper with Chao Zhou in BLC: doi.org/10.1017/S13667… What makes lexical tones challenging for L2 learners? Previous studies suggest that phonological universals are at play... In our perceptual study, we found little evidence for these universals.

Lenny van Dyck (@levandyck)

How is high-level visual cortex organized? In a new preprint with Martin Hebart & Katharina Dobs, we show that category-selective areas encode a rich, multidimensional feature space 🌈 biorxiv.org/content/10.110… 🧵 1/n

A. Benítez-Burraco (@abenitezburraco)

At urban sites, animals experience an increasingly noisy background of sound, which poses challenges to efficient communication nature.com/articles/s4155…

Yuki Kamitani (@ykamit)

Can we hear what's inside your head? 🧠→🎶 Our new paper in PLOS Biology, led by Jong-Yun Park, presents an AI-based method to reconstruct arbitrary natural sounds directly from a person's brain activity measured with fMRI. doi.org/10.1371/journa…

ChangLabUCSF (@changlabucsf)

Check out our latest work developing speech decoding systems that maintain their specificity to volitional speech attempts, co-led by Alex Silva and Jessie R. Liu iopscience.iop.org/article/10.108…