Stephanie Chan (@scychan_brains)'s Twitter Profile
Stephanie Chan

@scychan_brains

Staff Research Scientist at Google DeepMind. Science of artificial & biological brains 🤖 🧠 And AI to empower humans 🤝 🌟 Views are my own.

ID: 1058203824605716481

Joined: 02-11-2018 03:47:07

640 Tweets

4.4K Followers

2.2K Following

Furong Huang (@furongh)'s Twitter Profile Photo

Everyone’s hyped about test-time scaling—more steps, longer traces, just add “Wait” or “Let me rethink,” and boom: better reasoning? Not quite. We find that performance almost always improves at first—then declines. Classic overthinking. That’s not news. But why does it happen?

Iason Gabriel (@iasongabriel)'s Twitter Profile Photo

This paper is absolutely essential reading for anyone interested in developing a science of AI safety and evaluation. I esp. appreciate the “principle of parsimony”: Behaviours should not be attributed to complex mental processes if simpler explanations are available ✅

Edward Grefenstette (@egrefen)'s Twitter Profile Photo

Do you have a PhD (or equivalent) or will have one in the coming months (i.e. 2-3 months away from graduating)? Do you want to help build open-ended agents that help humans do human things better, rather than replace them? We're hiring 1-2 Research Scientists! Check the 🧵👇

Andrew Lampinen (@andrewlampinen)'s Twitter Profile Photo

Quick thread on the recent IMO results and the relationship between symbol manipulation, reasoning, and intelligence in machines and humans:

Stephanie Chan (@scychan_brains)'s Twitter Profile Photo

Important lessons on rigorous evaluation of AI model behaviors, drawing on the historical example (and fun story) of the hype around "chimps learning language". Given the importance of AI safety research, rigor and credibility are absolutely necessary. A great read from the folks at

Stephanie Chan (@scychan_brains)'s Twitter Profile Photo

While I'm promoting @summerfieldlab.bsky.social's work anyway... I highly recommend this paper on both the risks and opportunities of AI for democracy: arxiv.org/abs/2409.06729 It's exactly the kind of balanced analysis we need more of, but which unfortunately does not gain as much attention

Roberta Raileanu (@robertarail)'s Twitter Profile Photo

I’m building a new team at Google DeepMind to work on Open-Ended Discovery! We’re looking for strong Research Scientists and Research Engineers to help us push the frontier of autonomously discovering novel artifacts such as new knowledge, capabilities, or algorithms, in an

Furong Huang (@furongh)'s Twitter Profile Photo

The NeurIPS rebuttal deadline is around the corner 😬 I'm not an expert, but thought I'd drop my two cents on how to write a good rebuttal, especially for folks writing their first few. Hope this helps someone! 🧵👇 (And please chime in with your own tips; let's crowdsource the