
Uriel Singer
@urielsinger
Research Scientist @ Meta AI Research
ID: 3847421536
02-10-2015 23:51:20
33 Tweets
305 Followers
59 Following

Meta presents EMU VIDEO: Factorizing Text-to-Video Generation by Explicit Image Conditioning. Blog: emu-video.metademolab.com Paper: emu-video.metademolab.com/assets/emu_vid… We present EMU VIDEO, a text-to-video generation model that factorizes the generation into two steps: first generating an image conditioned on the text, then generating a video conditioned on both the text and the generated image.
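The factorization above can be sketched as a two-stage pipeline. The stubs below are illustrative stand-ins (random arrays, not the real diffusion models) just to show the data flow: text → image, then (text, image) → video.

```python
import numpy as np

# Toy sketch of a two-step factorized text-to-video pipeline (stubs, not the
# actual EMU VIDEO models): step 1 produces one conditioning image from text;
# step 2 generates frames conditioned on both the text and that image.

def text_to_image(prompt: str, size: int = 8) -> np.ndarray:
    """Stub text-to-image model: returns a random HxWx3 image seeded by the prompt."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.random((size, size, 3))

def image_to_video(prompt: str, image: np.ndarray, num_frames: int = 16) -> np.ndarray:
    """Stub video model: uses the conditioning image as the first frame and
    fills later frames with small perturbations (stands in for generation)."""
    rng = np.random.default_rng(0)
    frames = [image]
    for _ in range(num_frames - 1):
        frames.append(np.clip(frames[-1] + 0.01 * rng.standard_normal(image.shape), 0, 1))
    return np.stack(frames)

prompt = "a dog surfing a wave"
image = text_to_image(prompt)          # step 1: text -> image
video = image_to_video(prompt, image)  # step 2: (text, image) -> video
print(video.shape)                     # (16, 8, 8, 3)
```

The design point of the factorization is that each stage is an easier conditional problem than direct text-to-video generation.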


Thanks for sharing our work! Joint work with Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, and Yaron Lipman

Thank you AK for sharing our recent work, Emu Video Edit, on video editing! Project page: fdd-video-edit.github.io An amazing collaboration with Amit Zohar, Yuval Kirstain, Shelly Sheynin, Adam Polyak, Devi Parikh, and Yaniv Taigman

Excited to share our recent work! We propose an unsupervised method that achieves a new state-of-the-art in text-based video editing. Check it out: fdd-video-edit.github.io With the amazing Uriel Singer, Yuval Kirstain, Shelly Sheynin, Adam Polyak, Devi Parikh, and Yaniv Taigman


Thrilled to announce that our paper has been accepted for an Oral presentation at #ECCV2024! See you in Milan! With Uriel Singer, Yuval Kirstain, Shelly Sheynin, Adam Polyak, Devi Parikh, and Yaniv Taigman

Introducing Still-Moving, our work from Google DeepMind that lets you apply *any* image customization method to video models: personalization (DreamBooth), stylization (StyleDrop), ControlNet, ALL in one method! Plus, you can control the amount of generated motion.




Beyond excited to share FlowMo! We found that the latent representations of video models implicitly encode motion information, and can guide the model toward coherent motion at inference time. Very proud of Ariel Shaulov and Itay Hazan for this work! Plus, it's open source!

Exciting news from #ICML2025 & #ICCV2025!
- VideoJAM accepted as *oral* at #ICML2025 (top 1%)
- Two talks at #ICCV2025: interpretability in the generative era; video customization
- Organizing two #ICCV2025 workshops: structural priors for vision; long video gen

Excited to share our recent work on corrector sampling in language models! A new sampling method that mitigates error accumulation by iteratively revisiting tokens in a window of previously generated text. With Neta Shaul, Uriel Singer, and Yaron Lipman. Link: arxiv.org/abs/2506.06215
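The revisit-a-window idea can be sketched in a few lines. This is a toy illustration only, with a dummy stand-in "model"; the actual resampling rule and proposal distribution are defined in the paper, and every name here is hypothetical.

```python
import random

# Toy sketch of window-based corrector sampling (illustrative, not the paper's
# algorithm): after emitting each new token, revisit the last `window` tokens
# and resample each one conditioned on its current left context.

def dummy_model(context, vocab=("a", "b", "c"), rng=random):
    """Stand-in LM: samples a token, slightly preferring to repeat the last one."""
    if context and rng.random() < 0.6:
        return context[-1]
    return rng.choice(vocab)

def corrector_sample(length=12, window=4, seed=0):
    rng = random.Random(seed)
    tokens = []
    for _ in range(length):
        tokens.append(dummy_model(tokens, rng=rng))       # propose next token
        start = max(0, len(tokens) - window)
        for i in range(start, len(tokens)):               # revisit the window
            tokens[i] = dummy_model(tokens[:i], rng=rng)  # resample in place
    return tokens

print("".join(corrector_sample()))
```

The intuition being illustrated: a plain left-to-right sampler commits to every token forever, so early mistakes propagate; re-sampling inside a trailing window gives the model a chance to correct them before they leave the window.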



Check out our team's latest work, led by Uriel Singer and Neta Shaul!

Introducing Transition Matching (TM), a new generative paradigm that unifies Flow Matching and autoregressive models into one framework, boosting both quality and speed! Thank you for the great collaboration, Neta Shaul, Itai Gat, and Yaron Lipman



If you're curious to dive deeper into Transition Matching (TM), a great starting point is understanding the similarities and differences between Difference Transition Matching (DTM) and Flow Matching (FM).
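As background for the FM side of that comparison, here is a minimal flow-matching sketch with an oracle velocity field. It assumes the standard linear interpolation path x_t = (1 - t)·x0 + t·x1, whose conditional target velocity is simply x1 - x0; this is generic Flow Matching background, not the TM paper's formulation, and the oracle stands in for what a trained network would approximate.

```python
import numpy as np

# Minimal Flow Matching illustration: with the linear path
# x_t = (1 - t) * x0 + t * x1, the conditional target velocity is
# v(x_t, t) = x1 - x0. Euler-integrating that field transports a
# source sample x0 onto the data sample x1.

rng = np.random.default_rng(0)
x0 = rng.standard_normal(2)   # source (noise) sample
x1 = np.array([3.0, -1.0])    # "data" sample we want to reach

def velocity(x, t):
    # Oracle conditional velocity for the linear path; a trained network
    # would approximate the marginal version of this field.
    return x1 - x0

x, steps = x0.copy(), 100
for k in range(steps):        # Euler integration of dx/dt = v(x, t)
    t = k / steps
    x = x + velocity(x, t) / steps

print(np.round(x, 6))         # lands on x1
```

With the oracle field the integration is exact up to floating point, since each of the 100 steps adds (x1 - x0)/100; in practice the interesting part is learning the velocity from samples, which is where the contrast with DTM's discrete-time transitions comes in.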
