Yaron Lipman (@lipmanya) 's Twitter Profile
Yaron Lipman

@lipmanya

Research scientist @AIatMeta (FAIR), prev/visiting @WeizmannScience. Interested in generative models and deep learning of irregular/geometric data. 🎗️

ID: 2698179823

Joined: 01-08-2014 12:34:45

286 Tweets

3.3K Followers

448 Following

Ricky T. Q. Chen (@rickytqchen) 's Twitter Profile Photo

Reward-driven algorithms for training dynamical generative models significantly lag behind their data-driven counterparts in terms of scalability. We aim to rectify this. Adjoint Matching poster (Carles Domingo-Enrich, Sat 3pm) & Adjoint Sampling oral (Aaron Havens, Mon 10am, FPI).

Neta Shaul (@shaulneta) 's Twitter Profile Photo

📣 I'll be at the poster session with our follow-up on Discrete Flow Matching. We derive a closed-form solution to the kinetic optimal problem for conditional velocity on discrete spaces. Into flow models? Come chat! 💬 🗓 Poster: Sat 10am (#191), 🎤 Oral: Sat 3:30pm (6E) #ICLR2025

Peter Holderrieth (@peholderrieth) 's Twitter Profile Photo

Come to our oral presentation on Generator Matching at ICLR 2025 tomorrow (Saturday). Learn about a generative model that works for any data type and Markov process! Oral: 3:30pm (Peridot 202-203, session 6E) Poster: 10am-12:30pm #172 (Hall 3 + Hall 2B) arxiv.org/abs/2410.20587
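As a rough mental model of how this generalizes Flow Matching (schematic notation and my paraphrase, not the paper's exact formulation): FM regresses a learned velocity field onto a conditional velocity, and Generator Matching applies the same recipe to the generator of an arbitrary Markov process, with a Bregman divergence D in place of the squared norm.

% Flow Matching: regress a learned velocity field onto the conditional velocity
\mathcal{L}_{\mathrm{FM}}(\theta) = \mathbb{E}_{t,\,x_1,\,x_t}\big[\, \lVert u_t^{\theta}(x_t) - u_t(x_t \mid x_1) \rVert^2 \,\big]

% Generator Matching (schematically): the same regression applied to the generator
% \mathcal{G}_t of a Markov process, measured by a Bregman divergence D
\mathcal{L}_{\mathrm{GM}}(\theta) = \mathbb{E}_{t,\,x_1,\,x_t}\big[\, D\big( \mathcal{G}_t(x_t \mid x_1),\; \mathcal{G}_t^{\theta}(x_t) \big) \,\big]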

Peter Holderrieth (@peholderrieth) 's Twitter Profile Photo

Even better if friends and colleagues join you for the same session :) Our work on "Flow Matching with General Discrete Paths" will be presented by Neta Shaul shortly afterwards. Check it out, too! Paper: arxiv.org/abs/2412.03487

Neta Shaul (@shaulneta) 's Twitter Profile Photo

Got lots of questions about kinetic energy in continuous vs discrete space during my poster! Made a simple slide to help explain; check it out 🚀👇
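The slide itself isn't reproduced in this feed, so here is a rough sketch (my paraphrase, schematic notation) of the two quantities typically contrasted: the kinetic energy of a continuous flow versus a jump-rate analogue on a discrete state space.

% Continuous state space: kinetic energy of a flow with velocity field u_t and marginals p_t
\mathcal{E}_{\mathrm{cont}} = \int_0^1 \mathbb{E}_{x \sim p_t}\big[\, \lVert u_t(x) \rVert^2 \,\big]\, \mathrm{d}t

% Discrete state space: the "velocity" is a set of jump rates u_t(y, x) (rate of jumping x -> y),
% and a natural analogue of kinetic energy penalizes the expected jump activity along the path
\mathcal{E}_{\mathrm{disc}} \approx \int_0^1 \mathbb{E}_{x \sim p_t}\Big[\, \textstyle\sum_{y \neq x} u_t(y, x) \,\Big]\, \mathrm{d}t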

Neta Shaul (@shaulneta) 's Twitter Profile Photo

Had an absolute blast presenting at #ICLR2025! Thanks to everyone who came to visit my poster 🙌 Special shoutout to Scott H. Hawley for taking a last-minute photo 📸

Ricky T. Q. Chen (@rickytqchen) 's Twitter Profile Photo

Against conventional wisdom, I will be giving a talk with particular focus on the "how" and the various intricacies of applying stochastic control for generative modeling. Mon 9:50am Hall 1 Apex #ICLR2025 Also check out the other talks at delta-workshop.github.io!

Ricky T. Q. Chen (@rickytqchen) 's Twitter Profile Photo

We've open sourced Adjoint Sampling! It's part of a bundled release showcasing FAIR's research and open source commitment to AI for science. github.com/facebookresear… x.com/AIatMeta/statu…

Lucky Iyinbor (@luckyballa) 's Twitter Profile Photo

So Flow Matching is *just*

xt = mix(x0, x1, t)
loss = mse((x1 - x0) - nn(xt, t))

Nice, here it is in a fragment shader :) shadertoy.com/view/tfdXRM
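For readers who prefer a runnable version over a shader, here is a minimal PyTorch sketch of that same objective; the tiny velocity_net, the Gaussian source, and the 2-D toy data are placeholder choices, not the setup of any particular paper.

import torch
import torch.nn as nn

# Toy velocity network: takes (x_t, t) concatenated and predicts a 2-D velocity.
velocity_net = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 2))

def flow_matching_loss(x1):
    """Conditional flow matching loss for a batch of 2-D data points x1 (shape [B, 2])."""
    x0 = torch.randn_like(x1)             # source sample (standard Gaussian)
    t = torch.rand(x1.shape[0], 1)        # uniform time in [0, 1]
    xt = (1 - t) * x0 + t * x1            # linear interpolation: mix(x0, x1, t)
    target = x1 - x0                      # velocity of the straight path from x0 to x1
    pred = velocity_net(torch.cat([xt, t], dim=-1))
    return ((pred - target) ** 2).mean()  # mse((x1 - x0) - nn(xt, t))

# One optimization step on random toy "data".
opt = torch.optim.Adam(velocity_net.parameters(), lr=1e-3)
loss = flow_matching_loss(torch.randn(128, 2))
loss.backward()
opt.step()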

Ricky T. Q. Chen (@rickytqchen) 's Twitter Profile Photo

Padding in our non-AR sequence models? Yuck. 🙅 👉 Instead of unmasking, our new work *Edit Flows* performs iterative refinements via position-relative inserts and deletes, operations naturally suited for variable-length sequence generation. Easily better than using mask tokens.
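To make "position-relative inserts and deletes" concrete, here is a toy illustration of applying such edit operations to a token list; it only shows the sequence manipulation itself, not the Edit Flows model or its training procedure.

from typing import List, Tuple

# An edit is ("ins", position, token) or ("del", position, None).
Edit = Tuple[str, int, object]

def apply_edits(tokens: List[str], edits: List[Edit]) -> List[str]:
    """Apply position-relative inserts and deletes. Positions refer to the current
    sequence, so edits are applied right-to-left to keep earlier positions valid."""
    out = list(tokens)
    for op, pos, tok in sorted(edits, key=lambda e: e[1], reverse=True):
        if op == "ins":
            out.insert(pos, tok)   # sequence grows
        elif op == "del":
            del out[pos]           # sequence shrinks
    return out

# One refinement step: the length changes freely, with no mask or pad tokens.
seq = ["the", "cat", "sat"]
seq = apply_edits(seq, [("ins", 3, "down"), ("del", 1, None), ("ins", 1, "black")])
print(seq)  # ['the', 'black', 'sat', 'down']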

Itai Gat (@itai_gat) 's Twitter Profile Photo

Excited to share our recent work on corrector sampling in language models! A new sampling method that mitigates error accumulation by iteratively revisiting tokens in a window of previously generated text. With: Neta Shaul Uriel Singer Yaron Lipman Link: arxiv.org/abs/2506.06215
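A hedged sketch of the overall control flow such a sampler could follow; everything here (helper names, window size, the resampling rule) is a placeholder meant to convey the idea of revisiting a trailing window, not the algorithm or hyperparameters from the paper.

import random

def sample_token(model, context):
    """Placeholder: draw one token from the model's distribution given the context."""
    return model(context)

def corrector_sampling(model, prompt, max_new_tokens=128, window=16, corrections_per_step=1):
    """Generate left-to-right, but after each new token revisit and resample positions
    inside a trailing window of the already-generated text."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        tokens.append(sample_token(model, tokens))   # ordinary autoregressive step
        start = max(len(prompt), len(tokens) - window)
        for _ in range(corrections_per_step):
            i = random.randrange(start, len(tokens))
            # Placeholder resampling rule; the actual method conditions the revisited
            # token on its surrounding context so earlier mistakes can be corrected.
            tokens[i] = sample_token(model, tokens[:i])
    return tokens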

Yaron Lipman (@lipmanya) 's Twitter Profile Photo

A new paper: We finetune an LLM to rethink and resample previously generated tokens, reducing sampling errors and improving performance.

Guan-Horng Liu (@guanhorng_liu) 's Twitter Profile Photo

Adjoint-based diffusion samplers have simple & scalable objectives w/o importance-weight complications. Like many, though, they solve degenerate Schrödinger bridges, despite all being SB-inspired. 📢 Proudly introducing the #Adjoint #Schrödinger #Bridge #Sampler, a full SB-based sampler that

Neta Shaul (@shaulneta) 's Twitter Profile Photo

[1/n] New paper alert! 🚀 Excited to introduce Transition Matching (TM)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model 🤯, achieving SOTA text-2-image generation! Uriel Singer Itai Gat Yaron Lipman

Yaron Lipman (@lipmanya) 's Twitter Profile Photo

**Transition Matching** is a new iterative generative paradigm that uses Flow Matching or AR models to transition between intermediate generation states, improving both generation quality and speed!
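Schematically, and with placeholder names rather than the paper's interfaces, the resulting sampler is an outer loop over a short chain of intermediate states, where each transition is produced by a learned generative model instead of a fixed-form kernel.

def transition_matching_sampler(transition_models, x0):
    """Toy outer loop: each element of transition_models is itself a generative
    sampler (e.g. a small flow-matching or autoregressive model) that maps the
    current intermediate state to a sample of the next one."""
    x = x0                        # e.g. pure noise
    for sample_next_state in transition_models:
        x = sample_next_state(x)  # one expressive, learned transition per outer step
    return x                      # final sample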

Uriel Singer (@urielsinger) 's Twitter Profile Photo

Introducing Transition Matching (TM): a new generative paradigm that unifies Flow Matching and autoregressive models into one framework, boosting both quality and speed! Thank you for the great collaboration Neta Shaul Itai Gat Yaron Lipman

Neta Shaul (@shaulneta) 's Twitter Profile Photo

The Difference Transition Matching (DTM) process is so simple to illustrate, you can calculate it on a whiteboard! At each step: draw all lines connecting source and target (shaded) ⬇️ list those intersecting the current state (yellow) ⬇️ sample a line from the list (green)
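Here is a toy 1-D numpy version of that whiteboard recipe, purely to illustrate the geometric picture; the Gaussian source/target, tolerance, and step count are arbitrary choices, and no learned model is involved.

import numpy as np

rng = np.random.default_rng(0)

def dtm_whiteboard_step(x_t, t, dt, n_lines=10_000, tol=0.05):
    """One illustrative step: among straight lines drawn from source to target samples,
    keep those passing near the current state x_t at time t, sample one, and follow it."""
    x0 = rng.standard_normal(n_lines)              # source samples (start of the shaded lines)
    x1 = 2.0 + 0.3 * rng.standard_normal(n_lines)  # target samples (end of the shaded lines)
    at_t = (1 - t) * x0 + t * x1                   # where every line sits at time t
    hits = np.abs(at_t - x_t) < tol                # lines intersecting the current state (yellow)
    if not hits.any():
        return x_t                                 # no line close enough; stay put
    i = rng.choice(np.flatnonzero(hits))           # sample one of them (green)
    return (1 - (t + dt)) * x0[i] + (t + dt) * x1[i]  # move along the chosen line

# Follow the recipe from t = 0 to t = 1.
x, T = rng.standard_normal(), 20
for k in range(T):
    x = dtm_whiteboard_step(x, k / T, 1 / T)
print(x)  # ends up roughly distributed like the toy target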

Neta Shaul (@shaulneta) 's Twitter Profile Photo

If you're curious to dive deeper into Transition Matching (TM) ✨🔍, a great starting point is understanding the similarities and differences between Difference Transition Matching (DTM) and Flow Matching (FM) 💡.

Neta Shaul (@shaulneta) 's Twitter Profile Photo

DTM vs FM 👇 Lots of interest in how Difference Transition Matching (DTM) connects to Flow Matching (FM). Here is a short animation that illustrates Theorem 1 in our paper: for a very small step size (1/T), DTM converges to an Euler step of FM.
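In symbols, the statement being animated is roughly the following (schematic notation, my paraphrase rather than the paper's exact formulation): the DTM update follows a sampled source-to-target difference through the current state, and as the step size 1/T shrinks its behavior matches an Euler step of the FM velocity field.

% DTM update: sample a "difference" (a line through the current state) and follow it
X_{t+\frac{1}{T}} = X_t + \tfrac{1}{T}\, V_t, \qquad V_t \sim p\big(x_1 - x_0 \,\big|\, X_t\big)

% Euler step of Flow Matching with velocity field u_t(x) = \mathbb{E}[\,x_1 - x_0 \mid x_t = x\,]
x_{t+\frac{1}{T}} = x_t + \tfrac{1}{T}\, u_t(x_t)

% Theorem 1 (informally): as T -> infinity, the DTM step converges to the FM Euler step.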