Romain Tavenard (@rtavenar) 's Twitter Profile
Romain Tavenard

@rtavenar

Prof. @UnivRennes_2.
ML for time series.
Maintainer of #tslearn package

ID: 761528741168312320

Link: http://rtavenar.github.io · Joined: 05-08-2016 11:46:15

530 Tweets

369 Followers

156 Following

Gael Varoquaux 🦋 (@gaelvaroquaux) 's Twitter Profile Photo

An AI event I was taking part in was disrupted by Extinction Rebellion (wrong target: it was about the limits of AI). This is symptomatic of a climate of controlled communication that leaves dissenting voices no outlet other than a violent one.

Nicolas Keriven (@n_keriven) 's Twitter Profile Photo

I'm thrilled to announce that my #ERCStG project has been accepted 🤓 **MALAGA: Reinventing the Theory of Machine Learning on Large Graphs** Many job openings coming up, see nkeriven.github.io/malaga for updates! Thank you European Research Council (ERC) and all my collaborators past and future

Ambroise Odonnat (@ambroiseodonnat) 's Twitter Profile Photo

🤗Officially started my Ph.D. with Ievgen Redko, Romain Tavenard and Laetitia Chapel at Inria / IRISA, on Transformers & Distribution Shifts 🥳🇨🇦 Also, 2 papers accepted at #NeurIPS2024 📈 *Spotlight* arxiv.org/pdf/2406.10327 ✋🏾 MaNO arxiv.org/pdf/2405.18979 More details soon!

Samuel Vaiter (@vaiter) 's Twitter Profile Photo

Stein's Lemma states that for a standard normal variable X, the expected value E[Xg(X)] = E[g'(X)] for any absolutely continuous g (derivative a.e.) such that E[|g'(X)|] < ∞. It is a central result for characterizing Gaussian data arxiv.org/abs/2104.08302

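A quick way to see the identity in action (a minimal numerical sketch, not taken from the linked paper): draw a standard normal sample and compare Monte Carlo estimates of both sides with g = tanh, which is absolutely continuous with a bounded derivative.

```python
# Minimal sketch (assumed example, not from the paper): Monte Carlo check of
# Stein's lemma E[X g(X)] = E[g'(X)] for X ~ N(0, 1), with g(x) = tanh(x).
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)

g = np.tanh(x)                    # g(X)
g_prime = 1.0 - np.tanh(x) ** 2   # g'(X) = 1 - tanh(x)^2

lhs = np.mean(x * g)              # estimate of E[X g(X)]
rhs = np.mean(g_prime)            # estimate of E[g'(X)]
print(f"E[X g(X)] ~ {lhs:.4f}, E[g'(X)] ~ {rhs:.4f}")  # should agree up to MC error
```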
Ievgen Redko (@ievgenredko) 's Twitter Profile Photo

A simple yet overlooked idea: LLMs with a finite vocabulary and context window are (finite) Markov chains :) An 🤩 internship of @oussamazekri_ in collaboration with now officially our 1st-year Ph.D. student Ambroise Odonnat, Abdelhakim Benechehab, @bleistein_linus & N. Boullé. A 🧵⬇️
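Stated a bit more formally (generic notation of my own, not necessarily the paper's): with vocabulary V, context window K and next-token distribution p_θ, the generation process is a Markov chain on the finite state space V^K.

```latex
% Sketch with placeholder notation (V, K, p_theta are my own symbols):
% a state is the current window of K tokens; emitting one token shifts the window.
\[
  P\big((x_2,\dots,x_K,y) \,\big|\, (x_1,\dots,x_K)\big)
  \;=\; p_\theta\big(y \mid x_1,\dots,x_K\big), \qquad y \in V,
\]
% every transition that is not a one-token shift has probability zero,
% so the chain lives on the finite state space $V^K$.
```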

Björn Michele (@bjoern_michele) 's Twitter Profile Photo

🚨 Interested in domain adaptation and generalization for 3D data? 🚨 It’s tough to keep up with all the new amazing work. That’s why I’ve curated a comprehensive, easy-to-navigate list of publications. 📚 👉 Repo: bit.ly/DA_DG_3D

Sibylle Marcotte (@sibyllemarcotte) 's Twitter Profile Photo

🏆Didn't get the Physics Nobel Prize this year, but really excited to share that I've been named one of the #FWIS2024 Fondation L'Oréal-UNESCO French Young Talents alongside 34 amazing young researchers! This award recognizes my research on deep learning theory #WomenInScience 👩‍💻

Samuel Vaiter (@vaiter) 's Twitter Profile Photo

Bilevel optimization problems with multiple inner solutions typically come in two flavors: optimistic and pessimistic. The optimistic formulation assumes the inner problem selects the best solution for the outer objective, while the pessimistic one assumes the worst-case solution is chosen.

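For reference, the two formulations can be written side by side (generic notation, F for the outer objective and G for the inner one; a standard textbook sketch, not a statement from a specific paper):

```latex
% Optimistic vs. pessimistic bilevel problems, with inner solution set
% S(x) = argmin_y G(x, y) (generic notation).
\[
  \text{(optimistic)}\ \min_{x}\ \min_{y \in S(x)} F(x,y)
  \qquad\text{vs.}\qquad
  \text{(pessimistic)}\ \min_{x}\ \max_{y \in S(x)} F(x,y),
  \qquad
  S(x) = \operatorname*{arg\,min}_{y} G(x,y).
\]
```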
Samuel Vaiter (@vaiter) 's Twitter Profile Photo

Brenier's theorem states that the optimal transport map between two probability measures for quadratic cost is the gradient of a convex function. Moreover, it is uniquely defined up to a Lebesgue negligible set. ceremade.dauphine.fr/~carlier/Breni…

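In symbols (standard notation, assuming the source measure is absolutely continuous so the theorem applies):

```latex
% Brenier's theorem, sketched in standard notation: for probability measures
% mu, nu on R^d with mu absolutely continuous and cost c(x,y) = |x - y|^2,
% the optimal transport map is the gradient of a convex potential,
\[
  T = \nabla \varphi, \qquad \varphi \ \text{convex}, \qquad T_{\#}\mu = \nu,
\]
% and T is unique up to a Lebesgue negligible set.
```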
Samuel Vaiter (@vaiter) 's Twitter Profile Photo

A Monge map, i.e., a solution to the Monge problem of optimal transport, may not always exist, be unique, or be symmetric with respect to the source and target distributions. This was one of the motivations for introducing the Kantorovich relaxation. math.univ-lyon1.fr/~santambrogio/…

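The two problems written out in standard notation (a reference sketch, not taken from the linked notes):

```latex
% Monge: optimize over maps T pushing mu forward to nu (may admit no solution,
% e.g. when mu is a Dirac and nu is not).
\[
  \inf_{T:\ T_{\#}\mu = \nu} \int c\big(x, T(x)\big)\, \mathrm{d}\mu(x)
\]
% Kantorovich: relax the map to a coupling pi with marginals mu and nu;
% a minimizer exists under mild assumptions on the cost c.
\[
  \inf_{\pi \in \Pi(\mu,\nu)} \int c(x,y)\, \mathrm{d}\pi(x,y)
\]
```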
Nicolas Keriven (@n_keriven) 's Twitter Profile Photo

I have several internship/PhD positions on Graph ML available with ERC MALAGA 🤓 See details here: nkeriven.github.io/malaga/ Don't hesitate to contact me! (same if you are interested in a post-doc related to graph ML)

Samuel Vaiter (@vaiter) 's Twitter Profile Photo

I have multiple openings for M2 internship / PhD / postdoc in Nice (France) on topics related to bilevel optimization, automatic differentiation and safe machine learning. More details on my webpage samuelvaiter.com Contact me by email, and feel free to forward/RT :)

Nicolas Keriven (@n_keriven) 's Twitter Profile Photo

Our Transactions on Machine Learning Research paper "Gradient scarcity with Bilevel Optimization for Graph Learning" (w/ H. Ghanem, Samuel Vaiter) was accepted as an oral presentation at the Learning on Graphs Conference 2025 🤓 100% free and online, come check it out! arxiv.org/abs/2303.13964

Rémi Flamary 🦋 (@rflamary) 's Twitter Profile Photo

We have an accepted paper at NeurIPS 2024 about supervised graph prediction with transformers and optimal transport loss with Paul Krzakala, Junjie Yang, Florence d’Alché-Buc, Charlotte Laclau and Matthieu Labeau. Details in 🧵 arxiv.org/pdf/2402.12269

Samuel Vaiter (@vaiter) 's Twitter Profile Photo

📣 New preprint 📣 **Differentiable Generalized Sliced Wasserstein Plans** w/ L. Chapel, Romain Tavenard. We propose a Generalized Sliced Wasserstein method that provides an approximate transport plan and admits a differentiable approximation. arxiv.org/abs/2505.22049 1/5

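For context, here is a minimal sketch of the *plain* sliced Wasserstein distance between two point clouds (random projections plus sorting); this is only the standard baseline construction, not the generalized, differentiable-plan method proposed in the preprint.

```python
# Minimal sketch of the standard sliced 2-Wasserstein distance (baseline only;
# NOT the generalized / differentiable-plan method from the preprint).
import numpy as np

def sliced_wasserstein2(X, Y, n_projections=128, seed=0):
    """Monte Carlo estimate of the squared sliced 2-Wasserstein distance between
    two equal-size point clouds X, Y (uniform weights)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)      # random unit direction
        x_proj = np.sort(X @ theta)         # 1D OT between the projected samples
        y_proj = np.sort(Y @ theta)         # reduces to matching order statistics
        total += np.mean((x_proj - y_proj) ** 2)
    return total / n_projections

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    X = rng.normal(size=(500, 2))
    Y = rng.normal(loc=1.0, size=(500, 2))  # shifted copy of the source cloud
    print(sliced_wasserstein2(X, Y))        # small positive value reflecting the shift
```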