Uriel Singer (@urielsinger) 's Twitter Profile
Uriel Singer

@urielsinger

Research Scientist @ Meta AI Research

ID: 3847421536

Joined: 02-10-2015 23:51:20

33 Tweets

305 Followers

59 Following

AK (@_akhaliq) 's Twitter Profile Photo

Meta presents EMU VIDEO: Factorizing Text-to-Video Generation by Explicit Image Conditioning. Blog: emu-video.metademolab.com Paper: emu-video.metademolab.com/assets/emu_vid… We present EMU VIDEO, a text-to-video generation model that factorizes the generation into two steps: first generating an image conditioned on the text, and then generating a video conditioned on the text and the generated image.
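The factorization is straightforward to sketch in code. Below is a minimal illustration of the two-step pipeline described above; `text_to_image` and `image_text_to_video` are hypothetical placeholders, not the released EMU VIDEO interfaces.

```python
# Minimal sketch of the factorized text-to-video pipeline described above.
# `text_to_image` and `image_text_to_video` are hypothetical placeholders,
# not the actual EMU VIDEO API.
from dataclasses import dataclass
from typing import Callable, List

Image = object         # placeholder type for a generated image
Video = List[Image]    # a video represented as a list of frames


@dataclass
class FactorizedTextToVideo:
    text_to_image: Callable[[str], Image]               # models p(image | text)
    image_text_to_video: Callable[[Image, str], Video]  # models p(video | image, text)

    def generate(self, prompt: str) -> Video:
        # Step 1: generate a keyframe image from the text prompt.
        keyframe = self.text_to_image(prompt)
        # Step 2: generate the video, explicitly conditioned on that image and the text.
        return self.image_text_to_video(keyframe, prompt)
```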

Shelly Sheynin (@shellysheynin) 's Twitter Profile Photo

To support future research for instruction-based image editing, we publicly release a diverse and challenging benchmark that includes 7 different image editing tasks. Link to the benchmark: huggingface.co/datasets/faceb… And here is an example of edits on my own image 😇
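As a side note, benchmarks released on the Hugging Face Hub can usually be pulled with the `datasets` library; the dataset id below is a hypothetical placeholder standing in for the truncated link above.

```python
# Sketch of loading a Hub-hosted benchmark with the `datasets` library.
# The dataset id is a hypothetical placeholder for the truncated link above.
from datasets import load_dataset

benchmark = load_dataset("facebook/image-editing-benchmark")  # hypothetical id
print(benchmark)                           # available splits and their sizes
print(next(iter(benchmark.values()))[0])   # one example (fields depend on the benchmark schema)
```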

Amit Zohar (@amit_zhr) 's Twitter Profile Photo

Excited to share our recent work! 🎥📝 We propose an unsupervised method that achieves a new state-of-the-art in text-based video editing 🚀 Check it out: fdd-video-edit.github.io W/ the amazing Uriel Singer, Yuval Kirstain, Shelly Sheynin, Adam Polyak, Devi Parikh, and Yaniv Taigman

Hila Chefer (@hila_chefer) 's Twitter Profile Photo

Introducing ✨Still-Moving✨, our work from Google DeepMind that lets you apply *any* image customization method to video models 🎥 Personalization (DreamBooth) 🐢, stylization (StyleDrop) 🎨, ControlNet 🖼️, ALL in one method! Plus… you can control the amount of generated motion 🏃‍♀️ 🧵👇

Ricky T. Q. Chen (@rickytqchen) 's Twitter Profile Photo

We are presenting 3 orals and 1 spotlight at #ICLR2025 on two primary topics: generalizing the data-driven flow matching algorithm to jump processes, arbitrary discrete corruption processes, and beyond; and highly scalable algorithms for reward-driven learning settings.

Michael Hassid (@michaelhassid) 's Twitter Profile Photo

The longer a reasoning LLM thinks, the more likely it is to be correct, right? Apparently not. Presenting our paper: "Don't Overthink it. Preferring Shorter Thinking Chains for Improved LLM Reasoning". Link: arxiv.org/abs/2505.17813 1/n
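A minimal sketch of the idea, assuming a generic `sample_chain` interface (prompt in, thinking chain and answer out). The aggregation rule here, keeping only the shortest chains and taking a majority vote over their answers, is an illustration of the tweet's point, not necessarily the paper's exact procedure.

```python
# Sketch of "prefer shorter thinking chains": sample several reasoning chains
# and aggregate answers from the shortest ones. Illustrative only; not
# necessarily the exact selection rule from the paper.
from collections import Counter
from typing import Callable, List, Tuple

def short_chain_answer(
    sample_chain: Callable[[str], Tuple[str, str]],  # prompt -> (thinking_chain, answer); hypothetical
    prompt: str,
    k: int = 8,   # number of chains to sample
    m: int = 3,   # number of shortest chains to keep
) -> str:
    chains: List[Tuple[str, str]] = [sample_chain(prompt) for _ in range(k)]
    # Keep the m chains with the fewest thinking tokens (counted here as whitespace tokens).
    shortest = sorted(chains, key=lambda c: len(c[0].split()))[:m]
    # Majority vote over the answers of the shortest chains.
    votes = Counter(answer for _, answer in shortest)
    return votes.most_common(1)[0][0]
```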

Hila Chefer (@hila_chefer) 's Twitter Profile Photo

Beyond excited to share FlowMo! We found that the latent representations of video models implicitly encode motion information and can guide the model toward coherent motion at inference time. Very proud of Ariel Shaulov and Itay Hazan for this work! Plus, it's open source! 🥳

Hila Chefer (@hila_chefer) 's Twitter Profile Photo

Exciting news from #ICML2025 & #ICCV2025 🥳
- 🥇 VideoJAM accepted as *oral* at #ICML2025 (top 1%)
- Two talks at #ICCV2025: ☝️ interpretability in the generative era ✌️ video customization
- Organizing two #ICCV2025 workshops: ☝️ structural priors for vision ✌️ long video gen
🧵👇

Itai Gat (@itai_gat) 's Twitter Profile Photo

Excited to share our recent work on corrector sampling in language models! A new sampling method that mitigates error accumulation by iteratively revisiting tokens in a window of previously generated text. With: Neta Shaul, Uriel Singer, Yaron Lipman. Link: arxiv.org/abs/2506.06215
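A toy sketch of the idea as stated in the tweet, interleaving ordinary autoregressive steps with steps that revisit and resample tokens inside a trailing window; `sample_next` is a hypothetical interface, and this is not the paper's exact algorithm.

```python
# Toy sketch: interleave normal autoregressive generation with "corrector"
# steps that revisit and resample tokens in a trailing window.
# Hypothetical interface; not the paper's exact algorithm.
import random
from typing import Callable, List

def generate_with_corrector(
    sample_next: Callable[[List[int]], int],  # prefix token ids -> next token id (hypothetical)
    prompt_ids: List[int],
    max_new_tokens: int = 128,
    window: int = 16,          # how far back corrector steps may reach
    corrector_every: int = 4,  # run one corrector step every few generated tokens
) -> List[int]:
    ids = list(prompt_ids)
    for step in range(max_new_tokens):
        ids.append(sample_next(ids))  # ordinary autoregressive step
        if step % corrector_every == 0 and len(ids) > len(prompt_ids) + 1:
            # Corrector step: pick a position in the trailing window and
            # resample that token given its prefix, keeping the rest.
            lo = max(len(prompt_ids), len(ids) - window)
            pos = random.randrange(lo, len(ids))
            ids[pos] = sample_next(ids[:pos])
    return ids
```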

Yaron Lipman (@lipmanya) 's Twitter Profile Photo

**Transition Matching** is a new iterative generative paradigm that uses Flow Matching or AR models to transition between intermediate generation states, leading to improved generation quality and speed!

Uriel Singer (@urielsinger) 's Twitter Profile Photo

Introducing Transition Matching (TM): a new generative paradigm that unifies Flow Matching and autoregressive models into one framework, boosting both quality and speed! Thank you for the great collaboration, Neta Shaul, Itai Gat, and Yaron Lipman
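At sampling time, the paradigm as described in these tweets can be summarized as a chain of learned transitions between intermediate states. The sketch below assumes an abstract `transition` callable (which could itself be a small flow-matching or autoregressive sampler) and is an illustration, not the paper's implementation.

```python
# Rough sketch of a Transition Matching sampling loop: a learned transition
# maps each intermediate state to the next. Hypothetical interface.
from typing import Callable
import numpy as np

def transition_matching_sample(
    transition: Callable[[np.ndarray, int], np.ndarray],  # (state, step) -> next state
    x0: np.ndarray,       # initial state, e.g. Gaussian noise
    num_steps: int = 32,
) -> np.ndarray:
    x = x0
    for t in range(num_steps):
        x = transition(x, t)  # one learned transition between intermediate states
    return x

# Usage: sample = transition_matching_sample(my_transition, np.random.randn(3, 64, 64))
```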

moab.arar (@ararmoab) 's Twitter Profile Photo

This paper is awesome. 🔥 Flow-matching for flow-matching! ❌ No more coarse-to-fine generation. 🚀 Coarse and fine details emerge together during generation. 🏆 Results look super promising, especially when you see how the images evolve.

Neta Shaul (@shaulneta) 's Twitter Profile Photo

The Difference Transition Matching (DTM) process is so simple to illustrate, you can calculate it on a whiteboard! At each step:
Draw all lines connecting source and target (shaded)
⬇️
List those intersecting with the current state (yellow)
⬇️
Sample a line from the list (green)
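The same picture can be run numerically in a few lines. The 1D toy below follows the whiteboard recipe on empirical samples: build all source-target lines, keep the ones passing near the current state at the current time, and sample one to move along. It illustrates the described picture, not the trained DTM model.

```python
# 1D toy version of the whiteboard procedure: lines connect empirical source
# and target samples; at each step we keep the lines passing near the current
# state and sample one to move along. Illustration only, not the trained model.
import numpy as np

rng = np.random.default_rng(0)
source = rng.normal(-2.0, 0.3, size=500)   # source samples (e.g. noise)
target = rng.normal(+2.0, 0.3, size=500)   # target samples (e.g. data)

# All candidate lines: position at time t is (1 - t) * x0 + t * x1.
x0, x1 = np.meshgrid(source, target, indexing="ij")
x0, x1 = x0.ravel(), x1.ravel()

x, steps, tol = source[0], 10, 0.05
for i in range(steps):
    t, t_next = i / steps, (i + 1) / steps
    pos = (1 - t) * x0 + t * x1                   # where every line sits at time t
    hits = np.flatnonzero(np.abs(pos - x) < tol)  # lines "intersecting" the current state
    j = rng.choice(hits)                          # sample one such line
    x = (1 - t_next) * x0[j] + t_next * x1[j]     # follow it to the next time step

print("final state:", x)  # lands near the target mode around +2
```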

Neta Shaul (@shaulneta) 's Twitter Profile Photo

If you're curious to dive deeper into Transition Matching (TM) ✨🔍, a great starting point is understanding the similarities and differences between **Difference Transition Matching (DTM)** and Flow Matching (FM) 💡.
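One way to see the relation, assuming the usual linear-interpolation setup X_t = (1-t)X_0 + tX_1 (my reading, so treat the exact formulation as an assumption rather than a quote from the paper): Flow Matching steps along the conditional mean of the source-to-target difference, whereas Difference Transition Matching samples a difference from its full conditional distribution at every step.

```latex
% Sketch of the two update rules under linear paths X_t = (1-t)X_0 + tX_1
% (an assumed formulation for illustration, not quoted from the paper).
\[
\textbf{FM:}\quad x_{t+\Delta t} \;=\; x_t + \Delta t\,\mathbb{E}\big[X_1 - X_0 \,\big|\, X_t = x_t\big]
\]
\[
\textbf{DTM:}\quad y \sim p\big(X_1 - X_0 \,\big|\, X_t = x_t\big),
\qquad x_{t+\Delta t} \;=\; x_t + \Delta t\, y
\]
```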
