Greta Tuckute (@gretatuckute)'s Twitter Profile
Greta Tuckute

@gretatuckute

Studying language in biological brains and artificial ones at MIT Brain and Cognitive Sciences.

ID: 350481180

Website: http://tuckute.com · Joined: 07-08-2011 21:06:44

681 Tweets

2.2K Followers

635 Following

Aran Nayebi (@aran_nayebi)'s Twitter Profile Photo

1/ 🧵👇 What should count as a good model of intelligence? AI is advancing rapidly, but how do we know if it captures intelligence in a scientifically meaningful way? We propose the *NeuroAI Turing Test*—a benchmark that evaluates models based on both behavior and internal

McGovern Institute (@mcgovernmit)'s Twitter Profile Photo

#TalkTuesday - Language may be a defining characteristic of our species, but Ev (like in 'evidence', not Eve) Fedorenko 🇺🇦 provides compelling evidence that it's not required for complex thought. MIT Brain and Cognitive Sciences 🗣️ Watch the MIT Science talk here: youtu.be/woF3jjIupDU?si…

Gasser Elbanna (@gasser_elbanna)'s Twitter Profile Photo

Tomorrow at the Frontiers in NeuroAI symposium at the Kempner Institute at Harvard University, I will be presenting a poster entitled "A Model of Continuous Phoneme Recognition Reveals the Role of Context in Human Speech Perception" (Poster #17). Work done with Josh McDermott. #NeuroAI2025 🧵1/4

Aran Nayebi (@aran_nayebi)'s Twitter Profile Photo

Our first NeuroAgent! 🐟🧠 Excited to share new work led by the talented Reece Keller, showing how autonomous behavior and whole-brain dynamics emerge naturally from intrinsic curiosity grounded in world models and memory. Some highlights:
- Developed a novel intrinsic drive

John Hewitt (@johnhewtt)'s Twitter Profile Photo

I wrote a note on linear transformations and symbols that traces a common conversation/interview I've had with students. Outer products, matrix rank, eigenvectors, linear RNNs -- the topics are really neat, and lead to great discussions of intuitions. cs.columbia.edu/~johnhew//fun-…
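The first two topics in the note go hand in hand: the outer product of any two nonzero vectors is always a rank-1 matrix, since every column is a scalar multiple of the same vector. A minimal sketch of that fact (my own illustration, not taken from the linked note):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])

# Outer product: a 3x2 matrix whose every column is a multiple of u
M = np.outer(u, v)

# The outer product of nonzero vectors has rank exactly 1
print(np.linalg.matrix_rank(M))  # 1
```

The same observation underlies low-rank decompositions: a rank-r matrix can be written as a sum of r such outer products.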

Guy Gaziv (@ggaziv)'s Twitter Profile Photo

Can we precisely and noninvasively modulate deep brain activity just by riding the natural visual feed? 👁️🧠 In our new preprint, we use brain models to craft subtle image changes that steer deep neural populations in primate IT cortex. Just pixels. 📝arxiv.org/abs/2506.05633

Omer Moussa (@ohmoussa2)'s Twitter Profile Photo

🚨Excited to share our latest work published at Interspeech 2025: “Brain-tuned Speech Models Better Reflect Speech Processing Stages in the Brain”! 🧠🎧 arxiv.org/abs/2506.03832 W/ Mariya Toneva We fine-tuned speech models directly with brain fMRI data, making them more brain-like.🧵

Judy Fan (@judyefan)'s Twitter Profile Photo

Super stoked for our Minds in the Making workshop at @cogscisociety.bsky.social 2025! If you are at all interested in the intersection between cognitive science and design, you won’t want to miss it!! 🧠🛠️

Naomi Saphra hiring a lab 🧈🪰 (@nsaphra)'s Twitter Profile Photo

ACL paper alert! What structure is lost when using linearizing attribution like Shapley? We show the nonlinear interactions between features reflect structures described by the sciences of syntax, semantics, and phonology.
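To make the point concrete, here is a toy sketch (my own illustration, not from the paper): for a pure-interaction function f(a, b) = a·b, exact Shapley values split the credit additively between the two features, so the attribution itself carries no trace of the interaction.

```python
def f(a, b):
    # Pure interaction: output is nonzero only when both features are on
    return a * b

def shapley_two_features(a, b, baseline=0):
    # Exact Shapley values for a two-feature game: average each feature's
    # marginal contribution over both orderings, with absent features
    # replaced by the baseline value.
    v_none = f(baseline, baseline)
    v_a = f(a, baseline)
    v_b = f(baseline, b)
    v_ab = f(a, b)
    phi_a = 0.5 * (v_a - v_none) + 0.5 * (v_ab - v_b)
    phi_b = 0.5 * (v_b - v_none) + 0.5 * (v_ab - v_a)
    return phi_a, phi_b

phi_a, phi_b = shapley_two_features(1, 1)
print(phi_a, phi_b)  # 0.5 0.5
```

The attributions sum to f(1, 1) = 1, as Shapley values must, but each feature gets half the credit even though neither moves the output alone; the nonlinear structure is exactly what the linearization discards.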

Phillip Isola (@phillip_isola)'s Twitter Profile Photo

Our computer vision textbook is now available for free online here: visionbook.mit.edu We are working on adding some interactive components like search and (beta) integration with LLMs. Hope this is useful and feel free to submit GitHub issues to help us improve the text!

Badr AlKhamissi (@bkhmsi)'s Twitter Profile Photo

🚨New Preprint!! Thrilled to share with you our latest work: “Mixture of Cognitive Reasoners”, a modular transformer architecture inspired by the brain’s functional networks: language, logic, social reasoning, and world knowledge. 1/ 🧵👇

Kempner Institute at Harvard University (@kempnerinst)'s Twitter Profile Photo

The Kempner Institute congratulates its research fellows Isabel Papadimitriou and Jennifer Hu on their faculty appointments (UBC Linguistics & JHU Cognitive Science) and celebrates their innovative research. Read more here: bit.ly/448heBy #AI #LLMs

Tal Linzen (@tallinzen)'s Twitter Profile Photo

I'm hiring at least one post-doc! We're interested in creating language models that process language more like humans than mainstream LLMs do, through architectural modifications and interpretability-style steering.

Tatiana Engel (@engeltatiana)'s Twitter Profile Photo

Out today in nature: we show that individual neurons have diverse tuning to a decision variable computed by the entire population, revealing a unifying geometric principle for the encoding of sensory and dynamic cognitive variables. nature.com/articles/s4158…

BlackboxNLP (@blackboxnlp)'s Twitter Profile Photo

Have you heard about our shared task? 📢 Mechanistic Interpretability (MI) is quickly advancing, but comparing methods remains a challenge. This year, as a part of #BlackboxNLP at EMNLP 2025, we're introducing a shared task to rigorously evaluate MI methods in LMs 🧵

Ching Fang (chingfang.bsky.social) (@chingfang17)'s Twitter Profile Photo

Humans and animals can rapidly learn in new environments. What computations support this? We study the mechanisms of in-context reinforcement learning in transformers, and propose how episodic memory can support rapid learning. Work w/ Kanaka Rajan: arxiv.org/abs/2506.19686

Ekdeep Singh Lubana (@ekdeepl)'s Twitter Profile Photo

🚨New paper! We know models learn distinct in-context learning strategies, but *why*? Why generalize instead of memorize to lower loss? And why is generalization transient? Our work explains this & *predicts Transformer behavior throughout training* without its weights! 🧵 1/

Fenil Doshi (@fenildoshi009)'s Twitter Profile Photo

🧵 What if two images have the same local parts but represent different global shapes purely through part arrangement? Humans can spot the difference instantly! The question is: can vision models do the same? 1/15