Rob Brekelmans (@brekelmaniac) 's Twitter Profile
Rob Brekelmans

@brekelmaniac

postdoc @vectorinst, on the job market
(phd @usc_isi, intern @googledeepmind)

ID: 1090076247860756480

Link: https://brekelma.github.io/ · Joined: 29-01-2019 02:36:44

144 Tweets

755 Followers

320 Following

Kirill Neklyudov (@k_neklyudov) 's Twitter Profile Photo

Wasserstein Lagrangian Flows explain many different dynamics on the space of distributions from a single perspective. arxiv.org/abs/2310.10649 I made a video explaining our (with Rob Brekelmans) #icml2024 paper about WLF. Like subscribe share, lol. youtu.be/kkddiLegc3s?si…

Rob Brekelmans (@brekelmaniac) 's Twitter Profile Photo

Come check out our presentations today! #ICML2024 Probabilistic Inference via Twisted Sequential Monte Carlo (Oral, 11:15-11:30, Hall A8) (Poster 814 morning) x.com/AliMakhzani/st… Wasserstein Lagrangian Flows (Poster 1408 afternoon) x.com/k_neklyudov/st…

Rob Brekelmans (@brekelmaniac) 's Twitter Profile Photo

Simulation-free training for scientific problems with aligned data (e.g. transition path sampling)! Stripping away the science applications, we learn a distribution path to solve the Schrödinger bridge with a nonlinear reference drift and δ-function endpoints (i.e. no coupling optimization).

Kirill Neklyudov (@k_neklyudov) 's Twitter Profile Photo

🧵(5/5) The main result that unlocks all these possibilities is our new Itô density estimator, an efficient way to estimate the density of the generated samples for an already-trained diffusion model (assuming that we know the score). It does not require any extra computations.

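The thread doesn't spell out the Itô estimator itself, so as context only, here is the standard probability-flow-ODE change-of-variables computation it relates to, in a 1-D Gaussian toy where the score is known in closed form (the VP/OU setup, step size, and horizon are illustrative assumptions, not from the paper):

```python
import numpy as np

# 1-D toy: data ~ N(0, 4); VP (Ornstein-Uhlenbeck) forward SDE
# dx = -x/2 dt + dW gives marginal N(0, sigma_t^2) with
# sigma_t^2 = 1 + 3*exp(-t), so the exact score is -x / sigma_t^2.
def sigma2(t):
    return 1.0 + 3.0 * np.exp(-t)

def score(x, t):
    return -x / sigma2(t)

# Probability-flow ODE: dx/dt = -x/2 - (1/2) * score(x, t).
def ode_drift(x, t):
    return -0.5 * x - 0.5 * score(x, t)

# Integrate x forward from t=0 (data) to t=T (near-standard prior),
# accumulating the instantaneous change-of-variables term div(drift) dt.
T, dt = 8.0, 1e-3
x, logdet = 1.0, 0.0
for t in np.arange(0.0, T, dt):
    div = -0.5 * (1.0 - 1.0 / sigma2(t))   # d/dx of the linear drift
    x += dt * ode_drift(x, t)
    logdet += dt * div

log_prior = -0.5 * x**2 / sigma2(T) - 0.5 * np.log(2 * np.pi * sigma2(T))
log_p0 = log_prior + logdet                 # estimated log-density at x0 = 1
exact = -0.5 * 1.0**2 / 4.0 - 0.5 * np.log(2 * np.pi * 4.0)
```

In this linear toy, Euler integration recovers the exact N(0, 4) log-density to within the discretization error, which is the kind of baseline a simulation-cheaper estimator would be compared against.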
Raghav Singhal (@_rk_singhal) 's Twitter Profile Photo

Got a diffusion model?

What if there were a way to:
- Get SOTA text-to-image prompt fidelity, with no extra training!
- Steer continuous and discrete (e.g. text) diffusions
- Beat larger models using less compute
- Outperform fine-tuning
- And keep your stats friends happy!?
Qiang Liu (@lqiang67) 's Twitter Profile Photo

🚀 New Rectified Flow materials (WIP)! 📖 Tutorials: rectifiedflow.github.io 💻 Code: github.com/lqiang67/recti… 📜 Notes: cs.utexas.edu/~lqiang/PDF/fl… Contributions from @RunlongLiao, Xixi Hu, Bo Liu, and many others! 🔥 Let us know your thoughts! 🚀

Wei Guo (@weiguo01) 's Twitter Profile Photo

How does annealing help overcome multimodality? In our ICLR 2025 paper openreview.net/forum?id=P6IVI… and preprint arxiv.org/abs/2502.04575, we establish the first complexity bounds for annealed sampling and normalizing-constant (⇔ free-energy) estimation under weak assumptions on the target!

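The paper's algorithm and bounds aren't reproduced in the tweet; as a generic illustration of annealed normalizing-constant estimation, here is a minimal annealed importance sampling sketch on a bimodal toy target (the target, schedule, and Metropolis step size are all illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bimodal target, unnormalized; its log-normalizer is what we estimate.
def log_target(x):
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def log_base(x):  # standard normal base distribution (normalized)
    return -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)

betas = np.linspace(0.0, 1.0, 50)   # annealing schedule base -> target
n = 5000
x = rng.standard_normal(n)          # particles start from the base
logw = np.zeros(n)                  # accumulated AIS log-weights

for b0, b1 in zip(betas[:-1], betas[1:]):
    # weight update between successive annealed densities pi_b0 -> pi_b1
    logw += (b1 - b0) * (log_target(x) - log_base(x))

    # one random-walk Metropolis step leaving pi_b1 invariant
    def log_pi(z):
        return (1.0 - b1) * log_base(z) + b1 * log_target(z)
    prop = x + 0.5 * rng.standard_normal(n)
    acc = np.log(rng.random(n)) < log_pi(prop) - log_pi(x)
    x = np.where(acc, prop, x)

# Self-normalized estimate: log Z = logsumexp(logw) - log n
log_Z_hat = np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
```

For this target the true value is log Z = log(2√(2π)) ≈ 1.61; with a modest schedule the AIS estimate lands close to it, and the annealing path is what lets particles reach both modes from the unimodal base.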
Jeremias Knoblauch (@lauchlab) 's Twitter Profile Photo

We are excited to announce that registration for the inaugural post-Bayes workshop on May 15-16 at UCL is now open! Website: postbayes.github.io/workshop2025/ Registration link: tinyurl.com/postBayesWorks…

Ricky T. Q. Chen (@rickytqchen) 's Twitter Profile Photo

We are presenting 3 orals and 1 spotlight at #ICLR2025 on two primary topics: On generalizing the data-driven flow matching algorithm to jump processes, arbitrary discrete corruption processes, and beyond. And on highly scalable algorithms for reward-driven learning settings.

Rob Brekelmans (@brekelmaniac) 's Twitter Profile Photo

Given q_t, r_t as diffusion model(s), an SDE with drift β ∇ log q_t + α ∇ log r_t doesn't sample the sequence of geometric-average/product/tempered marginals! To correct this, we derive an SMC scheme via a PDE perspective. Resampling weights are 'free': they depend only on the (exact) scores!
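The thread's PDE-derived resampling weights aren't given here; as a minimal stand-in, this sketch shows the generic importance-resampling correction toward a geometric-average target q^β r^α in a 1-D Gaussian toy, where the corrected marginal is available in closed form (everything below is an illustrative assumption, not the paper's scheme):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy "expert" densities q, r (1-D Gaussians); the target is the
# geometric average p ∝ q^beta * r^alpha, again Gaussian, so the
# corrected answer can be checked in closed form.
mu_q, mu_r, sig = -1.0, 2.0, 1.0
beta, alpha = 0.5, 0.5

def log_q(x):  # unnormalized log-densities (shared variance)
    return -0.5 * ((x - mu_q) / sig) ** 2

def log_r(x):
    return -0.5 * ((x - mu_r) / sig) ** 2

# Propose from q alone (the "wrong" marginal), then importance-correct
# toward q^beta r^alpha with self-normalized weights and resampling.
n = 50_000
x = mu_q + sig * rng.standard_normal(n)
logw = (beta - 1.0) * log_q(x) + alpha * log_r(x)
w = np.exp(logw - logw.max())
w /= w.sum()
idx = rng.choice(n, size=n, p=w)        # multinomial resampling
resampled = x[idx]

# Closed form: product of N(mu_q, sig^2/beta) and N(mu_r, sig^2/alpha)
mu_star = (beta * mu_q + alpha * mu_r) / (beta + alpha)
print(resampled.mean(), mu_star)        # empirical mean ≈ mu_star = 0.5
```

This illustrates why a correction is needed at all: samples drawn from one model (or from a naively mixed drift) do not have the geometric-average marginal until they are reweighted and resampled toward it.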

James Thornton (@jamestthorn) 's Twitter Profile Photo

Small plug, not really advertised, but we similarly showed how to perform temperature-based control and composition of separately trained diffusion models via SMC and the Feynman-Kac model formalism, with score distillation of the energy, at AISTATS last year. - Diversity control
