noahdgoodman (@noahdgoodman)'s Twitter Profile
noahdgoodman

@noahdgoodman

Professor of natural and artificial intelligence @Stanford. Alignment at @GoogleDeepMind. (@StanfordNLP @StanfordAILab etc)

ID: 1193894314566307841

Joined: 11-11-2019 14:12:49

252 Tweets

4.4K Followers

156 Following

Gabriel Poesia (@gabrielpoesia)'s Twitter Profile Photo

If you're at NeurIPS, come tomorrow for the Oral+Poster on "Learning Formal Mathematics from Intrinsic Motivation"! Really fun work with David Broman, Nick Haber, and noahdgoodman that puts together much of what I did in the past years, with a new twist: open-ended learning!

Philipp (@jphilipp95)'s Twitter Profile Photo

Presenting this tomorrow at NeurIPS Conference, East Exhibit Hall A-C #2111 (4:30 p.m. to 7:30 p.m. PST). Come along if you want to chat about synthetic preference data with Kanishk Gandhi

noahdgoodman (@noahdgoodman)'s Twitter Profile Photo

Congrats to OAI on producing a reasoning model! Their opaque tweets demonstrate that they’ve (independently) found some of the core ideas that we did on our way to STaR.

Elicit (@elicitorg)'s Twitter Profile Photo

We raised a $22M Series A and are launching Elicit Reports, a better version of Deep Research for actual researchers. Elicit Reports are available for everyone to try right now, for free. 👇

noahdgoodman (@noahdgoodman)'s Twitter Profile Photo

“Four habits of highly effective STaRs” — we show that certain high-level cognitive behaviors are necessary for learning to reason through RL. Exciting!

Michael C. Frank (@mcxfrank)'s Twitter Profile Photo

AI models are fascinating, impressive, and sometimes problematic. But what can they tell us about the human mind? In a new review paper, noahdgoodman and I discuss how modern AI can be used for cognitive modeling: osf.io/preprints/psya…

Taylor Sorensen (@ma_tay_)'s Twitter Profile Photo

🤔🤖Most AI systems assume there’s just one right answer—but many tasks have reasonable disagreement. How can we better model human variation? 🌍✨ We propose modeling at the individual-level using open-ended, textual value profiles! 🗣️📝 arxiv.org/abs/2503.15484 (1/?)

Rohin Shah (@rohinmshah)'s Twitter Profile Photo

Just released GDM’s 100+ page approach to AGI safety & security! (Don’t worry, there’s a 10 page summary.) AGI will be transformative. It enables massive benefits, but could also pose risks. Responsible development means proactively preparing for severe harms before they arise.

LawZero - LoiZéro (@lawzero_)'s Twitter Profile Photo

Every frontier AI system should be grounded in a core commitment: to protect human joy and endeavour. Today, we launch LawZero - LoiZéro, a nonprofit dedicated to advancing safe-by-design AI. lawzero.org

Sydney Levine (@sydneymlevine)'s Twitter Profile Photo

🔥 New position piece! 🔥 In this paper we lay out our vision for AI Alignment as guided by "Resource Rational Contractualism" (RRC). But wait -- what's that? A 🧵.

noahdgoodman (@noahdgoodman)'s Twitter Profile Photo

It turns out that a lot of the most interesting behavior of LLMs can be explained without knowing anything about architecture or learning algorithms. Here we predict the rise (and fall) of in-context learning using hierarchical Bayesian methods.