Ghada Sokar (@g_sokar)'s Twitter Profile

Ghada Sokar
@g_sokar
ID: 1175712956807598080
Joined: 22-09-2019 10:06:36
608 Tweets
416 Followers
363 Following

Ghada Sokar (@g_sokar):

Time for celebrating🎉 One paper accepted for an oral presentation at #ICML2023! I was very lucky to work on this project with amazing collaborators utku, Pablo Samuel Castro, Rishabh Agarwal! Details soon✨ See you in Hawaii🏝️

Ghada Sokar (@g_sokar):

Don't miss Bram Grooten's presentation at #ICLR2023 this Friday, where we will be sharing our work on how to deal with noisy environments in RL.

Bram Grooten (@bramgrooten):

Looking forward to presenting our full paper at @aamas2023 on Friday at 14:00 in the #DeepLearning session. If you're here, come check it out at poster 34 🙂 📜arxiv.org/abs/2302.06548

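For context on the noisy-environment work above: the linked paper (arxiv.org/abs/2302.06548) studies agents whose observations are padded with uninformative noise features that must be filtered out. Below is a minimal sketch of that kind of setup, assuming the Gymnasium API; the wrapper name, noise distribution, and dimension count are illustrative, not the authors' code.

```python
# Hypothetical sketch: a Gymnasium wrapper that pads each observation with
# pure-noise features, so the agent has to learn to ignore them.
import numpy as np
import gymnasium as gym

class NoisyObservationWrapper(gym.ObservationWrapper):
    def __init__(self, env, num_noise_dims=20):
        super().__init__(env)
        self.num_noise_dims = num_noise_dims
        low = np.concatenate([env.observation_space.low,
                              np.full(num_noise_dims, -np.inf)])
        high = np.concatenate([env.observation_space.high,
                               np.full(num_noise_dims, np.inf)])
        self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float64)

    def observation(self, obs):
        # Append Gaussian noise dimensions to the true state.
        noise = np.random.normal(size=self.num_noise_dims)
        return np.concatenate([obs, noise])

env = NoisyObservationWrapper(gym.make("CartPole-v1"), num_noise_dims=20)
obs, info = env.reset()
print(obs.shape)  # 4 true state features + 20 noise features -> (24,)
```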
Rishabh Agarwal (@agarwl_):

2. The Dormant Neuron Phenomenon in Deep RL (led by Ghada Sokar w/ utku and Pablo Samuel Castro). Reviewers loved this work and it was accepted as an oral. x.com/pcastr/status/…
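Since several tweets here point to this paper, a quick gloss: it defines a neuron as τ-dormant when its mean absolute activation over a batch, normalized by the layer-wide average, falls at or below a threshold τ. A minimal sketch of that score, assuming PyTorch; the function name, batch, and τ value are illustrative, not the authors' code.

```python
# Minimal sketch of the dormant-neuron score from "The Dormant Neuron
# Phenomenon in Deep RL" (Sokar et al., ICML 2023). A neuron is tau-dormant
# when its normalized mean absolute activation is at most tau.
import torch

def dormant_fraction(activations: torch.Tensor, tau: float = 0.025) -> float:
    """activations: (batch, num_neurons) post-activation outputs of one layer."""
    score = activations.abs().mean(dim=0)        # per-neuron mean |h_i(x)|
    score = score / (score.mean() + 1e-9)        # normalize by the layer average
    return (score <= tau).float().mean().item()  # fraction of dormant neurons

# Illustrative usage with random activations (numbers are hypothetical):
acts = torch.relu(torch.randn(256, 512))
print(dormant_fraction(acts, tau=0.0))  # tau=0 counts exactly-zero neurons
```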

ContinualAI (@continualai):

📢CLAI Unconf pre-registration paper proposals are due soon (July 21)!📢 Have a scientific hypothesis? Ready to publish in PMLR? Chase SCIENCE instead of SOTA by submitting your idea with us! It is not too late to get started! 👇 unconf.continualai.org/call-for-papers

Pablo Samuel Castro (@pcastr):

with utku and Rishabh Agarwal i'll be presenting work led by Ghada Sokar on the dormant neuron phenomenon in deep RL.
utku & i give oral on tue at 5:30pm in Ballroom C

then with rishabh we'll present the poster in sess 3, wed at 11am, Exhibit Hall 1 #709

x.com/pcastr/status/…
Gintare Karolina Dziugaite (@gkdziugaite):

Deep learning may be hard, but deep un-learning is even harder. 💪 How do we efficiently remove the influence of specific training examples while maintaining good performance on the remainder? Announcing NeurIPS Unlearning Competition 📢 Submit your best ideas!🏆

ContinualAI (@continualai):

The excitement knows no limits🎉 🚨Only 1 hour to go until the start of CLAI Unconf, but you can still join! 👋Come be part of one of the largest continual learning gatherings, with over 400 participants! It's all virtual - it's all free: unconf.continualai.org/register

AK (@_akhaliq):

Google DeepMind presents Mixtures of Experts Unlock Parameter Scaling for Deep RL

paper page: huggingface.co/papers/2402.08…

The recent rapid progress in (self-)supervised learning models is in large part predicted by empirical scaling laws: a model's performance scales proportionally…
Pablo Samuel Castro (@pcastr):

📢Mixtures of Experts unlock parameter scaling for deep RL!

Adding MoEs, and in particular Soft MoEs, to value-based deep RL agents results in more parameter-scalable models.

Performance keeps increasing as we increase the number of experts (green line below)!
1/9
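For readers unfamiliar with the layer the thread builds on: Soft MoE (Puigcerver et al., 2023) dispatches convex combinations of tokens into per-expert "slots" and recombines the expert outputs with a second softmax. Below is a minimal NumPy sketch under that description; the shapes, slot counts, and linear "experts" are illustrative, not the paper's implementation.

```python
# Minimal Soft MoE sketch: softmax over tokens builds slot inputs,
# softmax over slots rebuilds output tokens. No discrete routing.
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_moe(X, phi, experts, slots_per_expert):
    """X: (n_tokens, d); phi: (d, n_experts * slots_per_expert)."""
    logits = X @ phi                        # (n, e*s) token-slot affinities
    dispatch = softmax(logits, axis=0)      # each slot = convex mix of tokens
    slots = dispatch.T @ X                  # (e*s, d) slot inputs
    outs = np.concatenate([
        experts[i](slots[i * slots_per_expert:(i + 1) * slots_per_expert])
        for i in range(len(experts))        # each expert processes its own slots
    ])
    combine = softmax(logits, axis=1)       # each token = convex mix of slot outputs
    return combine @ outs                   # (n, d)

# Toy usage: 16 tokens of dim 8, 4 linear "experts" with 2 slots each.
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 8))
phi = rng.normal(size=(8, 4 * 2))
Ws = [rng.normal(size=(8, 8)) for _ in range(4)]
experts = [lambda s, W=W: s @ W for W in Ws]
print(soft_moe(X, phi, experts, 2).shape)  # (16, 8)
```

Because both mixing steps are softmaxes, the layer is fully differentiable, which is one plausible reason it drops into value-based RL training more easily than hard-routed MoEs.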
Pablo Samuel Castro (@pcastr):

📢Don't flatten, tokenize!📢

tl;dr: the key reason for softmoe's efficacy in deep RL turns out to be tokenization!

i.e. the common practice of flattening the output of conv encoder layers is quite suboptimal!

👇🏾more details in thread below👇🏾
1/11
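To make the flatten-vs-tokenize contrast concrete: a short PyTorch sketch (shapes illustrative) of the two ways to reshape a conv encoder's output before the rest of the network.

```python
# "Flatten" collapses the conv output into one long vector per state;
# "tokenize" keeps each spatial position's feature vector as a token,
# e.g. as input to a Soft MoE layer. Shapes are illustrative.
import torch

feats = torch.randn(32, 64, 7, 7)  # conv encoder output (B, C, H, W)

flat = feats.flatten(start_dim=1)                    # (32, 3136): one vector
tokens = feats.flatten(start_dim=2).transpose(1, 2)  # (32, 49, 64): H*W tokens of dim C

print(flat.shape, tokens.shape)
```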