
Greta Tuckute
@gretatuckute
Studying language in biological brains and artificial ones at MIT Brain and Cognitive Sciences.
ID: 350481180
http://tuckute.com 07-08-2011 21:06:44
681 Tweets
2.2K Followers
635 Following


#TalkTuesday - Language may be a defining characteristic of our species, but Ev (like in 'evidence', not Eve) Fedorenko 🇺🇦 provides compelling evidence that it's not required for complex thought. MIT Brain and Cognitive Sciences 🗣️ Watch the MIT Science talk here: youtu.be/woF3jjIupDU?si…

Come to our CCN workshop! Blending biophysical constraints and neural networks. Topographical ANNs, and much more. Meenakshi Khosla, Talia Konkle, Pouya Bashivan, Nick Blauch, Tim Kietzmann, Laura Gwilliams, Apurva Ratan Murty, Jakob Macke, Pieter Roelfsema, Martin Schrimpf, Andrew Miri, Nabil Imam

Tomorrow at the Frontiers in NeuroAI symposium at the Kempner Institute at Harvard University, I will be presenting a poster entitled "A Model of Continuous Phoneme Recognition Reveals the Role of Context in Human Speech Perception" (Poster #17). Work done with Josh McDermott. #NeuroAI2025 🧵1/4

Our first NeuroAgent! 🐟🧠 Excited to share new work led by the talented Reece Keller, showing how autonomous behavior and whole-brain dynamics emerge naturally from intrinsic curiosity grounded in world models and memory. Some highlights: - Developed a novel intrinsic drive



🚨Excited to share our latest work published at Interspeech 2025: “Brain-tuned Speech Models Better Reflect Speech Processing Stages in the Brain”! 🧠🎧 arxiv.org/abs/2506.03832 W/ Mariya Toneva We fine-tuned speech models directly with brain fMRI data, making them more brain-like.🧵





The Kempner Institute congratulates its research fellows Isabel Papadimitriou and Jenn Hu on their faculty appointments (UBC Linguistics & JHU Cognitive Science) and celebrates their innovative research. Read more here: bit.ly/448heBy #AI #LLMs



Have you heard about our shared task? 📢 Mechanistic Interpretability (MI) is quickly advancing, but comparing methods remains a challenge. This year, as part of #BlackboxNLP at EMNLP 2025, we're introducing a shared task to rigorously evaluate MI methods in LMs 🧵


Humans and animals can rapidly learn in new environments. What computations support this? We study the mechanisms of in-context reinforcement learning in transformers, and propose how episodic memory can support rapid learning. Work w/ Kanaka Rajan: arxiv.org/abs/2506.19686


