Clément (@clement_wh)'s Twitter Profile
Clément

@clement_wh

PhD student in machine learning for genomics at Institut Pasteur (Cantini Lab) and MGH Harvard (Pinello Lab)

🦋 clement-wh.bsky.social

ID: 1592594012610756608

Link: https://clement-w.github.io/ | Joined: 15-11-2022 19:03:16

12 Tweets

11 Followers

27 Following

elvis (@omarsar0)'s Twitter Profile Photo

2022: A Year in Review (ML Papers Edition)

In this thread, let's take a look at some of the top trending ML papers of 2022 ↓
Prof. Anima Anandkumar (@animaanandkumar)'s Twitter Profile Photo

Fully agree! Neural operators = conditional neural fields. That should make neural operators more accessible to all computer vision researchers and engineers working on NeRFs. However, even though the two are equivalent, the difference arises in terms of intuitions and design

Valentin De Bortoli (@valentindebort1)'s Twitter Profile Photo

If you want to chat about flow matching, stochastic interpolants, diffusion models and their links with OT (Schrödinger Bridge 🌉 hehe) catch me at poster #619 (Thurs 14 10:45-12:45) where I will be presenting "Diffusion Schrödinger Bridge Matching" (arxiv.org/abs/2303.16852).

Prof. Anima Anandkumar (@animaanandkumar)'s Twitter Profile Photo

How do we capture local features across multiple resolutions? While standard convolutional layers work only at a fixed input resolution, we design local neural operators that learn integral and differential kernels, and are principled ways to extend standard convolutions to

Ziming Liu (@zimingliu11)'s Twitter Profile Photo

MLPs are so foundational, but are there alternatives? MLPs place activation functions on neurons, but can we instead place (learnable) activation functions on weights? Yes, we KAN! We propose Kolmogorov-Arnold Networks (KAN), which are more accurate and interpretable than MLPs.🧵
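
The core idea in the tweet — learnable activations on edges rather than fixed activations on neurons — can be sketched minimally. This is an illustrative toy, not the authors' implementation: the class name, shapes, and the Gaussian-RBF parameterization (the paper uses B-splines) are all my assumptions.

```python
import numpy as np

class KANLayer:
    """Minimal KAN-style layer sketch: each edge (i, j) carries its own
    learnable 1-D activation, parameterized here as a weighted sum of
    Gaussian radial basis functions (the paper itself uses B-splines)."""

    def __init__(self, in_dim, out_dim, n_basis=8, rng=None):
        rng = rng or np.random.default_rng(0)
        self.centers = np.linspace(-1.0, 1.0, n_basis)  # shared basis centers
        self.width = 2.0 / n_basis                      # shared basis width
        # one coefficient vector per edge: shape (out_dim, in_dim, n_basis)
        self.coef = rng.normal(scale=0.1, size=(out_dim, in_dim, n_basis))

    def forward(self, x):
        # x: (batch, in_dim); evaluate the basis at every input coordinate
        basis = np.exp(-((x[:, :, None] - self.centers) / self.width) ** 2)
        # each edge applies its own activation, then outputs sum over inputs
        return np.einsum('bik,oik->bo', basis, self.coef)

layer = KANLayer(in_dim=3, out_dim=2)
y = layer.forward(np.zeros((4, 3)))
print(y.shape)  # (4, 2)
```

Training would then optimize `self.coef` per edge, which is what makes each edge's activation both learnable and directly inspectable.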

Gabriel Peyré (@gabrielpeyre)'s Twitter Profile Photo

SNE and t-SNE perform dimensionality reduction by minimizing the relative entropy (KL divergence) between pairwise-similarity distributions in the high- and low-dimensional spaces. t-SNE improves over SNE by replacing Gaussians with Student's t-distributions in the low-dimensional space. en.wikipedia.org/wiki/T-distrib…
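
The mechanism described above — gradient descent on KL between a Gaussian-based P and a Student-t-based Q — fits in a few lines. A toy sketch under simplifying assumptions (fixed Gaussian bandwidth instead of per-point perplexity calibration, no early exaggeration or momentum):

```python
import numpy as np

def kl_projection(X, n_iter=200, lr=10.0, seed=0):
    """Toy SNE/t-SNE sketch: minimize KL(P || Q) by gradient descent,
    with Gaussian affinities P in the original space and a Student-t
    kernel (one degree of freedom) for Q in 2-D, as in t-SNE."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)

    # High-dimensional affinities P (fixed bandwidth; real t-SNE tunes
    # a per-point bandwidth to hit a target perplexity)
    d2 = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
    P = np.exp(-d2)
    np.fill_diagonal(P, 0.0)
    P /= P.sum()

    Y = rng.normal(scale=1e-2, size=(n, 2))  # random low-dim init
    for _ in range(n_iter):
        dy2 = np.sum((Y[:, None] - Y[None]) ** 2, axis=-1)
        num = 1.0 / (1.0 + dy2)              # Student-t kernel
        np.fill_diagonal(num, 0.0)
        Q = num / num.sum()
        # gradient of KL(P || Q) with respect to the embedding Y
        grad = 4.0 * np.sum(((P - Q) * num)[:, :, None]
                            * (Y[:, None] - Y[None]), axis=1)
        Y -= lr * grad
    return Y

Y = kl_projection(np.random.default_rng(1).normal(size=(20, 5)))
print(Y.shape)  # (20, 2)
```

The heavy tails of the t-distribution let moderately distant points sit far apart in 2-D, which is exactly the crowding fix t-SNE adds over SNE.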

Floor Eijkelboom (@feijkelboom)'s Twitter Profile Photo

Flow Matching goes Variational! 🐳 In recent work, we derive a formulation of flow matching as variational inference, obtaining regular FM as a special case. Joint work with dream team Grigory Bartosh, @chris_naesseth, Max Welling, and Jan-Willem van de Meent. 📜arxiv.org/abs/2406.04843 🧵1/11
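
For context on the "regular FM" the paper recovers as a special case, here is a hedged sketch of the standard conditional flow matching objective: regress a velocity model onto the straight-line target along a linear interpolant. The linear model and all names are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fm_loss(theta, x0, x1, t):
    """Standard conditional flow matching objective: along the linear
    interpolant x_t = (1-t) x0 + t x1, regress a velocity model onto
    the conditional target velocity x1 - x0."""
    xt = (1 - t) * x0 + t * x1            # interpolant between noise and data
    target = x1 - x0                      # conditional (straight-line) velocity
    feats = np.concatenate([xt, t], axis=1)
    pred = feats @ theta                  # toy linear velocity model
    return np.mean((pred - target) ** 2)

rng = np.random.default_rng(0)
x0 = rng.normal(size=(64, 2))             # noise samples
x1 = rng.normal(loc=3.0, size=(64, 2))    # stand-in "data" samples
t = rng.uniform(size=(64, 1))             # uniformly sampled times
theta = np.zeros((3, 2))
loss = fm_loss(theta, x0, x1, t)
print(loss > 0)  # True
```

At sampling time one would integrate the learned velocity field from t=0 to t=1; the variational view in the paper reinterprets this regression as inference.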

Geert-Jan Huizing (@gjhuizing)'s Twitter Profile Photo

🎉 New preprint! biorxiv.org/content/10.110… STORIES learns a differentiation potential from spatial transcriptomics profiled at several time points using Fused Gromov-Wasserstein, an extension of Optimal Transport. Gabriel Peyré @LauCan88

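
As background on the Optimal Transport machinery that (Fused) Gromov-Wasserstein builds on, here is a minimal entropic OT sketch via Sinkhorn iterations. This illustrates plain OT only, not STORIES or FGW itself; all names and parameters are my assumptions.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iter=200):
    """Entropic optimal transport between histograms a and b under cost
    matrix C, solved by alternating Sinkhorn scaling updates."""
    K = np.exp(-C / reg)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                 # scale columns to match b
        u = a / (K @ v)                   # scale rows to match a
    return u[:, None] * K * v[None, :]    # transport plan

# couple two small point clouds with a squared-distance cost
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5, 2)), rng.normal(size=(6, 2))
C = np.sum((X[:, None] - Y[None]) ** 2, axis=-1)
P = sinkhorn(np.full(5, 1 / 5), np.full(6, 1 / 6), C)
print(np.allclose(P.sum(axis=1), 1 / 5))  # True: rows match source weights
```

Gromov-Wasserstein replaces the fixed cost C with a comparison of intra-space distance structures, and the fused variant mixes both terms, which is what lets STORIES align cells across time points.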
Jérémie Kalfon (@jkobject)'s Twitter Profile Photo

🚨🚨 AI in Bio release 🧬

Very happy to share my work on a Large Cell Model for Gene Network Inference. For now it is just a preprint, with more to come. We are asking the question: "What can 50M cells tell us about gene networks?"

❓Behind it, other questions arose, like:
Sasha Luccioni, PhD 🦋🌎✨🤗 (@sashamtl)'s Twitter Profile Photo

Is bigger really better in AI, or is it just a fallacy? Check out this banger from Meredith Whittaker, Gael Varoquaux 🦋 and me, in which we flip the script on what works, what's sustainable, and what we should care about as a community: 👇🏼👇🏼👇🏼 arxiv.org/abs/2409.14160

Laura Cantini (@cantinilab)'s Twitter Profile Photo

Welcome to two new lab members, Clément (co-supervision with Luca Pinello) and Anna Audit (co-supervision with Gabriel Peyré)! Both are recruited on the European Research Council (ERC) project MULTI-viewCELL. They will work on GRN inference and cell-trajectory inference from temporal single-cell RNA data, respectively.

Jérémie Kalfon (@jkobject)'s Twitter Profile Photo

I am super happy to announce that, after 9 months under review, the first paper of my PhD, scPRINT, is finally available in Nature Comms! 🎉🧬 nature.com/articles/s4146…