Silin Gao (@silin_gao)'s Twitter Profile
Silin Gao

@silin_gao

PhD @ICepfl NLP Lab, Advisor @ABosselut | Intern @TsinghuaCoAI @Zhou_Yu_AI | Prev @Tsinghua_Uni | Knowledge Intensive #NLProc | Dialogue Systems | #AI

ID: 1439746077108867078

Link: https://silin159.github.io/SilinGao/ | Joined: 20-09-2021 00:20:59

46 Tweets

273 Followers

83 Following

Mete (@mismayilsoy):

Are LLMs as linguistically productive and systematic as humans in morphologically rich languages? No 🤨 Our new NAACL 2025 paper (arxiv.org/abs/2410.12656) reveals a significant performance gap between LLMs and humans in linguistic creativity and morphological generalization.
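As a rough illustration of how morphological generalization is commonly probed (a wug-test style check, not this paper's actual benchmark; the nonce items and model below are placeholders), one can ask a model to pluralize made-up words and score the continuation against the rule-based form:

```python
# Wug-test style probe: ask a model to pluralize nonce (made-up) words
# and compare the continuation to the rule-based plural. Illustrative
# sketch only; the nonce items and model choice are placeholders, not
# the paper's benchmark.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Nonce nouns paired with the plural a human would produce by rule.
nonce_items = [("wug", "wugs"), ("blick", "blicks"), ("tass", "tasses")]

correct = 0
for singular, expected in nonce_items:
    prompt = f"One dog, two dogs. One glass, two glasses. One {singular}, two"
    out = generator(prompt, max_new_tokens=3, do_sample=False)[0]["generated_text"]
    # First word of the continuation, stripped of trailing punctuation.
    continuation = out[len(prompt):].strip().split()
    if continuation and continuation[0].strip(".,!?").lower() == expected:
        correct += 1

print(f"rule-based plurals produced: {correct}/{len(nonce_items)}")
```

Morphologically rich languages make this test far harder than English pluralization, since the model must compose many inflectional affixes it has never seen on that stem.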

Badr AlKhamissi (@bkhmsi):

🚨 New Preprint!! LLMs trained on next-word prediction (NWP) show high alignment with brain recordings. But what drives this alignment—linguistic structure or world knowledge? And how does this alignment evolve during training? Our new paper explores these questions. 👇🧵
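In this literature, alignment with brain recordings is usually operationalized with an encoding model: fit a ridge regression from the LLM's activations to the recorded responses and report the correlation on held-out data. The sketch below shows that generic recipe on synthetic arrays; it is an assumption about the standard setup, not this paper's exact pipeline.

```python
# Generic encoding-model alignment score: ridge-regress brain responses
# on LLM activations, then correlate predictions with held-out data.
# Synthetic arrays stand in for real features/recordings; this sketches
# the common recipe, not this paper's specific pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 500, 768, 100

X = rng.standard_normal((n_stimuli, n_features))        # LLM layer activations
W = rng.standard_normal((n_features, n_voxels)) * 0.1
Y = X @ W + rng.standard_normal((n_stimuli, n_voxels))  # simulated brain responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

model = Ridge(alpha=1.0).fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)

def pearson_per_column(a, b):
    # Pearson correlation per voxel between predicted and observed responses.
    a = a - a.mean(0)
    b = b - b.mean(0)
    return (a * b).sum(0) / (np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0))

alignment = pearson_per_column(Y_hat, Y_te).mean()
print(f"mean voxelwise alignment: {alignment:.3f}")
```

Running the same scoring across training checkpoints is one natural way to track how alignment evolves during training, which is the question the thread raises.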

Zeming Chen (@eric_zemingchen):

🗒️Can we meta-learn test-time learning to solve long-context reasoning? Our latest work, PERK, learns to encode long contexts through gradient updates to a memory scratchpad at test time, achieving long-context reasoning robust to complexity and length extrapolation while …

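The mechanism described here, gradient updates to a memory scratchpad at test time while the base model stays frozen, can be sketched in toy form. The cross-attention pooling, the reconstruction-style loss, and all shapes below are stand-ins for illustration; PERK's meta-learned objective and actual architecture are in the paper.

```python
# Toy test-time learning with a memory scratchpad: a small trainable
# parameter is updated by gradient descent on a context loss at
# inference time while the base network stays frozen. Illustrative
# only; PERK's meta-learned objective and architecture differ.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, mem_slots = 64, 8

# Frozen "base model": here just a linear readout over pooled memory.
base = nn.Linear(d_model, d_model)
for p in base.parameters():
    p.requires_grad_(False)

# Memory scratchpad: the only parameters updated at test time.
memory = nn.Parameter(torch.zeros(mem_slots, d_model))

def encode_context(chunk, memory):
    # Cross-attention-like pooling of the context chunk into memory slots.
    attn = torch.softmax(memory @ chunk.T / d_model**0.5, dim=-1)
    return base(attn @ chunk)

context = torch.randn(256, d_model)  # stand-in for long-context embeddings
opt = torch.optim.SGD([memory], lr=0.1)

for step in range(20):  # test-time inner loop over the given context
    summary = encode_context(context, memory)
    # Stand-in objective: the memory summary should reconstruct the
    # context's mean representation.
    loss = ((summary.mean(0) - context.mean(0)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final test-time loss: {loss.item():.4f}")
```

Because only the small memory is optimized per input, the inner loop stays cheap regardless of context length, which is what makes the approach a candidate for length extrapolation.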