
Daniela Amodei
@danielaamodei
President @AnthropicAI. Formerly @OpenAI, @Stripe, congressional staffer, global development
ID: 373531234
http://anthropic.com
Joined: 14-09-2011 19:02:13
29 Tweets
8.8K Followers
289 Following

In our second interpretability paper, we revisit “induction heads”. In transformers with 2+ layers, these pattern-completion heads form exactly when in-context learning abruptly improves. Are they responsible for most in-context learning in large transformers? transformer-circuits.pub/2022/in-contex…
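
(The pattern-completion rule these heads implement, as a toy Python sketch rather than anything from the paper itself; the function name is hypothetical: after seeing ...[A][B]...[A] in context, an induction head attends back to the earlier [A] and copies the [B] that followed it.)

    # Toy sketch of induction-head pattern completion; illustrative only.
    def induction_head_prediction(tokens):
        """Return the token that followed the most recent earlier occurrence
        of the final token, or None if the final token has not appeared
        before. This is the completion an idealized induction head proposes."""
        last = tokens[-1]
        # Scan backwards over earlier positions for a previous occurrence.
        for i in range(len(tokens) - 2, -1, -1):
            if tokens[i] == last:
                return tokens[i + 1]  # copy the token that followed it
        return None

    # Example: the head completes a repeated pattern purely from context.
    print(induction_head_prediction(["the", "cat", "sat", "the"]))  # -> "cat"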

Glad to see Quanta Magazine highlight progress on induction heads and rigorous interpretability by Chris Olah, Catherine Olsson, @nelhage, and others at Anthropic. More to come! quantamagazine.org/researchers-gl…
