Louis Kirsch (@louiskirschai) 's Twitter Profile
Louis Kirsch

@louiskirschai

Driving the automation of AI Research. Research Scientist @GoogleDeepMind. PhD @SchmidhuberAI. @UCL, @HPI_DE alumnus. All opinions are my own.

ID: 416132515

Link: http://louiskirsch.com · Joined: 19-11-2011 08:25:30

312 Tweets

1.1K Followers

779 Following

Thomas Miconi (@thomasmiconi) 's Twitter Profile Photo

To start the new year (🥳) I'd like to highlight 2 recent papers that ask essentially the same question, but from very different perspectives: When learning many things at the same time, when and how does rote memorization turn into meta-learning, i.e. "learning-to-learn"?

Jacob Beck (@jakeabeck) 's Twitter Profile Photo

Excited to share our new survey paper on meta-RL! 📊🤖🎊 arxiv.org/abs/2301.08028 Many thanks to my co-authors for the hard work, Risto Vuorio, Evan Liu, Zheng Xiong, Luisa Zintgraf, Chelsea Finn, Shimon Whiteson Highlights in the thread below!

David Finsterwalder (@dfinsterwalder) 's Twitter Profile Photo

But looking deeper into GPT and its capacity for in-context learning (ICL) is fascinating. Recent works on ICL (like this) made me much more curious about language modeling and transformers (+ the success of transformers in computer vision). 5/7 x.com/LouisKirschAI/…

Samuel Schmidgall (@srschmidgall) 's Twitter Profile Photo

There is still a lot artificial intelligence can learn from the brain. In our new review article, we delve into the mechanisms of the brain that inspired artificial intelligence algorithms, as well as brain-inspired learning algorithms in AI🧠 arxiv.org/abs/2305.11252…

hardmaru (@hardmaru) 's Twitter Profile Photo

Amazing that Jürgen Schmidhuber gave this talk back in 2012, months before the AlexNet paper was published. Many of the things he discussed in 2012 were dismissed as a joke at the time, but the same talk today would be at the center of AI debate and controversy. Full talk:

Anand Gopalakrishnan (@agopal42) 's Twitter Profile Photo

Excited to present “Contrastive Training of Complex-valued Autoencoders for Object Discovery“ at #NeurIPS2023. TL;DR -- We introduce architecture changes and a new contrastive training objective that greatly improve the state-of-the-art synchrony-based model. Explainer thread 👇:

Matthew Jackson (@jacksonmattt) 's Twitter Profile Photo

Meta-learning can discover RL algorithms with novel modes of learning, but how can we make them adapt to any training horizon? Introducing our #ICLR2024 work on discovering *temporally-aware* RL algorithms! Work co-led with Chris Lu, in Foerster Lab for AI Research and WhiRL

Laura Ruis (@lauraruis) 's Twitter Profile Photo

Revisiting Louis Kirsch et al.’s general-purpose ICL by meta-learning paper and forgot how great it is. It's rare to be taken along on the authors' journey to understand the phenomenon they document like this. More toy dataset papers should follow this structure.