Carles Domingo-Enrich (@cdomingoenrich) 's Twitter Profile
Carles Domingo-Enrich

@cdomingoenrich

Senior Researcher @ Microsoft Research New England. Formerly: Visiting Researcher @ Meta FAIR and CS PhD @ NYU.

ID: 1835318940391591938

Link: https://cdenrich.github.io · Joined: 15-09-2024 14:06:42

18 Tweets

501 Followers

120 Following

Statistics Papers (@statspapers) 's Twitter Profile Photo

Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control. arxiv.org/abs/2409.08861

Ricky T. Q. Chen (@rickytqchen) 's Twitter Profile Photo

New paper! We cast reward fine-tuning as stochastic control. 1. We prove that a specific noise schedule *must* be used for fine-tuning. 2. We propose a novel algorithm that is significantly better than the adjoint method*. (*this is an insane claim) arxiv.org/abs/2409.08861

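The stochastic-optimal-control framing in the tweet above — learn a drift correction for a base diffusion that trades a quadratic control cost against a terminal reward — can be sketched in a toy one-dimensional rollout. This is a generic SOC illustration, not the Adjoint Matching algorithm itself; the zero base drift, the cost weighting, and all function names here are illustrative assumptions.

```python
import math
import random

def simulate_controlled_sde(control, x0=0.0, n_steps=50, dt=0.02, sigma=1.0, seed=0):
    """Euler-Maruyama rollout of dX = sigma * u(X, t) dt + sigma dW.

    The base drift is taken to be zero purely for illustration; `control`
    is the learned drift correction u(x, t). Returns the trajectory and
    the accumulated quadratic control cost 0.5 * integral of u^2 dt.
    """
    rng = random.Random(seed)
    x, t, cost = x0, 0.0, 0.0
    traj = [x]
    for _ in range(n_steps):
        u = control(x, t)
        cost += 0.5 * u * u * dt
        x += sigma * u * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        traj.append(x)
    return traj, cost

def soc_objective(control, reward, n_rollouts=8):
    """Monte-Carlo estimate of E[control cost - reward(X_T)], to be minimized."""
    total = 0.0
    for k in range(n_rollouts):
        traj, cost = simulate_controlled_sde(control, seed=k)
        total += cost - reward(traj[-1])
    return total / n_rollouts
```

Minimizing this objective over `control` is the fine-tuning problem the paper studies; the contribution of the paper (the memoryless noise schedule and the Adjoint Matching loss) concerns *how* to optimize it at scale, which this sketch does not attempt.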
Ricky T. Q. Chen (@rickytqchen) 's Twitter Profile Photo

This is a really jam-packed and exciting work. I'm extremely proud of Carles Domingo-Enrich for taking on an ambitious project at FAIR, resulting in this absolutely outstanding paper.

Guan-Horng Liu (@guanhorng_liu) 's Twitter Profile Photo

Congrats to Carles Domingo-Enrich and Ricky T. Q. Chen on such a nice paper using #stochastic #optimal #control for fine-tuning (controlling) pre-trained models (base dynamics) given task-specific criteria (reward / cost)!

Portal (@_portal_) 's Twitter Profile Photo

Next Monday at 12 pm ET, Carles Domingo-Enrich will join LoGG host Hannes Stärk to discuss stochastic optimal control (SOC) and a new algorithm named Adjoint Matching. 📄 Read the paper: arxiv.org/abs/2409.08861 ⏺️ Tune in live: zoom.us/j/5775722530?p…

Hannes Stärk (@hannesstaerk) 's Twitter Profile Photo

On Monday, Carles Domingo-Enrich will present his new paper "Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control" arxiv.org/abs/2409.08861 Quite fresh off the press! Monday zoom at 9am PT / 12pm ET / 6pm CEST: portal.valencelabs.com/logg

Carles Domingo-Enrich (@cdomingoenrich) 's Twitter Profile Photo

New paper! A taxonomy of loss functions for stochastic optimal control: arxiv.org/pdf/2410.00345 If our recent work on Adjoint Matching (arxiv.org/abs/2409.08861) made you want to learn about deep-learning SOC techniques, check out my systematic study of all available losses!

Kevin K. Yang 楊凱筌 (@kevinkaichuang) 's Twitter Profile Photo

Happy to release a concise review of recent progress in sequence-structure co-generation for protein design, with our amazing intern chentongwang + Sarah Alamdari, Carles Domingo-Enrich, and Ava Amini.

Carles Domingo-Enrich (@cdomingoenrich) 's Twitter Profile Photo

Undergrad internship opportunities at MSR! If you are a rising junior or senior undergraduate student interested in working with me, apply here and mention my name: aka.ms/msr-ugrad. Topics: fine-tuning and inference of generative models for continuous and discrete data.

Peter Holderrieth (@peholderrieth) 's Twitter Profile Photo

New paper out! We introduce “Generator Matching” (GM), a method to build GenAI models for any data type (incl. multimodal) with any Markov process. GM unifies a range of state-of-the-art models and enables new designs of generative models. arxiv.org/abs/2410.20587 (1/5)

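Generator Matching builds generative models from the generator of an arbitrary Markov process, not just diffusions. As a toy illustration of one such process class, here is a minimal Euler-style simulator for a continuous-time Markov chain on a finite state space given its rate matrix. This is a hand-rolled sketch to show what "generating with a Markov process" can mean concretely; it is not the paper's method or code, and the function name and discretization are assumptions.

```python
import random

def simulate_ctmc(rates, x0, t_max, dt=0.01, seed=0):
    """Euler simulation of a continuous-time Markov chain on a finite
    state space, driven by its generator (rate matrix).

    `rates[i][j]` is the jump rate from state i to state j (i != j);
    the diagonal is implied by the row sums. Over a small step dt, the
    probability of jumping from i to j is approximately rates[i][j] * dt.
    """
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_max:
        r = rng.random()
        acc = 0.0
        for j, rate in enumerate(rates[x]):
            if j == x:
                continue
            acc += rate * dt
            if r < acc:
                x = j
                break
        t += dt
    return x
```

In a Generator-Matching-style model the rate matrix would itself be the learned, time-dependent object; here it is fixed by hand to keep the sketch self-contained.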
Anuroop Sriram (@anuroopsriram) 's Twitter Profile Photo

I’m excited to share our latest work on generative models for materials called FlowLLM. FlowLLM combines Large Language Models and Riemannian Flow Matching in a simple, yet surprisingly effective way for generating materials. arxiv.org/abs/2410.23405 Benjamin Kurt Miller Ricky T. Q. Chen Brandon Wood

John Nguyen (@__johnnguyen__) 's Twitter Profile Photo

This paper didn’t get the attention it deserves. Have you ever wondered why RLHF/reward fine-tuning is far less common for image generation models compared to large language models (LLMs)? This paper by Ricky T. Q. Chen and team explains it in detail. 🧵

Peter Holderrieth (@peholderrieth) 's Twitter Profile Photo

Our MIT class “6.S184: Introduction to Flow Matching and Diffusion Models” is now available on YouTube! We teach state-of-the-art generative AI algorithms for images, videos, proteins, etc. together with the mathematical tools to understand them. diffusion.csail.mit.edu (1/4)

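The flow matching objective taught in the class admits a very short statement: regress a velocity model onto the conditional target x1 - x0 along the linear interpolation path between a noise sample x0 and a data sample x1. A minimal scalar sketch of one Monte-Carlo term of that loss (the scalar setting and function names are illustrative assumptions, not the course's code):

```python
import random

def cfm_loss_sample(model, x0, x1, rng):
    """One Monte-Carlo term of the conditional flow matching loss for the
    linear path x_t = (1 - t) * x0 + t * x1, whose conditional target
    velocity is the constant x1 - x0.
    """
    t = rng.random()                  # t ~ Uniform(0, 1)
    xt = (1 - t) * x0 + t * x1        # point on the interpolation path
    target = x1 - x0                  # conditional velocity target
    pred = model(xt, t)               # model's velocity prediction
    return (pred - target) ** 2       # squared regression error
```

For example, a model that always predicts velocity 1.0 incurs zero loss on the pair (x0, x1) = (0.0, 1.0), since the conditional target along that path is exactly 1.0 at every t.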
Aaron Havens (@aaronjhavens) 's Twitter Profile Photo

New paper out with FAIR (+ FAIR-Chemistry): Adjoint Sampling: Highly Scalable Diffusion Samplers via Adjoint Matching. We present a scalable method for sampling from unnormalized densities beyond classical force fields. 📄: arxiv.org/abs/2504.11713
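For context on the problem Adjoint Sampling targets — drawing samples given only an unnormalized density — the classical baseline is Langevin dynamics on the score. The sketch below is that baseline (unadjusted Langevin), shown purely to fix ideas; it is explicitly *not* the Adjoint Sampling algorithm, and the function name and step sizes are assumptions.

```python
import math
import random

def ula_sample(log_density_grad, x0=0.0, n_steps=2000, step=0.05, seed=0):
    """Unadjusted Langevin algorithm (ULA): approximate sampling from an
    unnormalized density p(x) proportional to exp(f(x)), given only the
    gradient of f. Each step follows the score plus Gaussian noise:
        x <- x + step * grad f(x) + sqrt(2 * step) * N(0, 1)
    """
    rng = random.Random(seed)
    x = x0
    for _ in range(n_steps):
        x += step * log_density_grad(x) + math.sqrt(2 * step) * rng.gauss(0.0, 1.0)
    return x
```

With `log_density_grad = lambda x: -x` (a standard normal target, where f(x) = -x²/2), independent chains produce samples with mean near 0 and variance near 1. Methods like the one in the paper aim to amortize this into a learned diffusion sampler rather than running a long chain per sample.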

Ricky T. Q. Chen (@rickytqchen) 's Twitter Profile Photo

Reward-driven algorithms for training dynamical generative models significantly lag behind their data-driven counterparts in terms of scalability. We aim to rectify this. Adjoint Matching poster (Carles Domingo-Enrich, Sat 3pm) & Adjoint Sampling oral (Aaron Havens, Mon 10am, FPI)

Carles Domingo-Enrich (@cdomingoenrich) 's Twitter Profile Photo

🚀Excited to open source the code for Adjoint Matching --- as part of a new repo centered around reward fine-tuning via stochastic optimal control! github.com/microsoft/soc-…

Ricky T. Q. Chen (@rickytqchen) 's Twitter Profile Photo

Against conventional wisdom, I will be giving a talk with particular focus on the "how" and the various intricacies of applying stochastic control for generative modeling. Mon 9:50am Hall 1 Apex #ICLR2025 Also check out the other talks at delta-workshop.github.io!
