
Gabriele Cesa
@_gabrielecesa_
PhD student @ Qualcomm AI Research, University of Amsterdam.
Developing github.com/QUVA-Lab/escnn
ID: 826810516320432128
https://github.com/Gabri95 01-02-2017 15:12:43
100 Tweets
922 Followers
643 Following

📣 This year at #ICML2024 we are hosting the ✨ GRaM Workshop ✨ on Geometry-grounded representation learning and generative modeling. We welcome submissions in multiple tracks: 📄 Proceedings, 🆕 extended abstract, 📝 Blogpost/tutorial, as well as the 🏆 TDA challenge.

Hello #GRaM enthusiasts! Sadly, our previous Twitter account has been compromised, and we won't be able to regain access to it soon. Hence, we have moved to our new handle, GRaM Workshop at ICML 2024. Kindly unfollow Gram Workshop and follow GRaM Workshop at ICML 2024 instead for more information. ICML Conference


It's time for some GRaM Workshop updates, happening at #ICML2024 on the 27th of July. We have a great list of invited speakers and panelists: Rose Yu, Phillip Isola, Nina Miolane, Joey Bose, Zahra Kadkhodaie

We have an amazing list of contributed talks (orals) at GRaM Workshop at ICML 2024: 1. Berfin will give a talk on 'Adaptive Sampling for Continuous Group Equivariant Neural Networks' 2. Yoav will talk about 'Variational Inference Failures Under Model Symmetries: Permutation Invariant

Come check out my student Berfin İnal's presentation tomorrow morning at the GRaM Workshop at ICML 2024 :)


I am happy to introduce our work "Adaptive Sampling for Continuous Group Equivariant Neural Networks" with Gabriele Cesa: openreview.net/forum?id=fZNgs… Excited to have received the Best Paper Runner-up award at the GRaM Workshop at ICML 2024! #ICML2024 Here is a brief summary (1/4)

Very proud of my student Berfin İnal. Her thesis received the Best Paper Runner-up award at the GRaM Workshop at #ICML2024. Check out the paper if you are interested in continuous equivariance and steerable networks.


Does equivariance matter when you have lots of data and compute? In a new paper with Sönke Behrends, Pim de Haan, and Taco Cohen, we collect some evidence. arxiv.org/abs/2410.23179 1/7

I will be in Amsterdam doing an internship at Qualcomm AI Research with Gabriele Cesa until the end of February. Hit me up if you want to link! Also, 🦋 Julian Suk 🦋

Our paper got a prize :) Cheers to lead author Johann Brehmer and fellow co-authors Sönke Behrends and Taco Cohen. Our results hint that yes, even at large scales of data and compute, if your data has symmetries, you might be better off building them into your network.

ICLR 2025 MLMP best poster award goes to "ViNE-GATr: scaling geometric algebra transformers with virtual node embeddings"! Congratulations Julian Suk, Gabriele Cesa, Thomas Hehn, Arash Behboodi!


Excited to be giving a talk at the Cambridge Wednesday Seminar today at 3pm. Looking forward to sharing ideas and great discussions about equivariance and beyond. Thanks Pietro Lio' and Riccardo Ali for inviting me! cst.cam.ac.uk/seminars/list/…

Great discussion, Chaitanya K. Joshi! We also explored this with extensive experiments in our recent paper: arxiv.org/abs/2501.01999. We find, among other things, that equivariant models in a sense scale even better than non-equivariant ones. Going more or less completely against the vibes from your post 😅 1/5

Join us for the 2nd workshop on Equivariant Vision: From Theory to Practice at #CVPR2025 on June 11, in Room 101C! Enjoy the exciting talks by Robin Walters, Maani Ghaffari, Taco Cohen, Gabriele Cesa, Prof. Tess Smidt, Vincent Sitzmann, Tommy Mitchel. More details at: equivision.github.io

