
Edan Toledo
@edantoledo
PhD Student @AIatMeta & @UCL • Prev RE @InstaDeepAI • MPhil ACS @Cambridge_Uni • Reinforcement Learning • 🇿🇦🇬🇧
ID: 1575449453896646656
29-09-2022 11:36:50
33 Tweets
81 Followers
80 Following


If you haven't yet, please check out the amazing work from the JAX community. E.g. environments: Brax (Olivier Bachem), Gymnax (Robert Lange), JaxMARL (Alex Rutherford), Craftax (Michael Matthews), Pgx (Sotetsu K.(そーてつ)). And algorithms: PureJaxRL (Chris Lu), Stoix/Flashbax (Edan Toledo).

Curious about this diagram? Join us later today as we discuss growing the MARL ecosystem in JAX! 🤖🍿 InstaDeep Ruan de Kock Omayma Mahjoub Sasha @formanek_claude (& for a sneak preview: arxiv.org/abs/2107.01460 😉)



Excited to introduce our latest neural solver, MEMENTO! Enhancing problem-specific adaptation with an explicit memory. Thanks to my InstaDeep collaborators: Refiloe 🇱🇸, Noah🇿🇦, Arnu Pretorius🇷🇼, Tom Barrett 🇬🇧, Nathan Grinsztajn🇬🇧! arxiv.org/abs/2406.16424 🧵[1/9]



It's amazing that two of the 2024 #NobelPrize awards were for AI! But as they say: it took a village. "We didn't win a Nobel", a parody of Billy Joel's "We didn't start the fire", covers a tiny sliver of this historical "village". Hope you enjoy it as much as I did making it!



Efficient LLM reasoning over large data doesn't require massive contexts! 🫡 We show that a simple in-context method, PRISM, allows a 32K-token model to outperform baselines and sometimes rival a 1M-token model while saving up to 54% on token cost. w/ Google DeepMind




Theory of Mind (ToM) is crucial for next-gen LLM agents, yet current benchmarks suffer from multiple shortcomings. Enter 💽 Decrypto, an interactive benchmark for multi-agent reasoning and ToM in LLMs! Work done with Timon Willi & Jakob Foerster at AI at Meta & the Foerster Lab for AI Research 🧵👇

