Amit Zohar (@amit_zhr)'s Twitter Profile
Amit Zohar

@amit_zhr

AI Researcher @ Meta AI

ID: 1066040759588130817

Joined: 23-11-2018 18:48:17

54 Tweets

197 Followers

98 Following

Shelly Sheynin (@shellysheynin)'s Twitter Profile Photo

I’m thrilled and proud to share our model, Movie Gen, that we've been working on for the past year, and in particular, Movie Gen Edit, for precise video editing. 😍 Look how Movie Gen edited my video!

Yuval Kirstain (@ykirstain)'s Twitter Profile Photo

So proud to be part of the Movie Gen project, pushing GenAI boundaries! Two key insights: 1. Amazing team + high-quality data + clean, scalable code + general architecture + GPUs go brr = SOTA video generation. 2. Video editing *without* supervised data: train a *single* model

Ishan Misra (@imisra_)'s Twitter Profile Photo

We released 92 pages worth of detail including how to benchmark these models! Super critical for the scientific progress in this field :) We'll also release evaluation benchmarks next week to help the research community 💪

Adam Polyak (@adam_polyak90)'s Twitter Profile Photo

Excited to share our progress on Movie Gen, a SOTA model for video generation! 🎥✨ I worked on this project as part of a cutting-edge team 🔥, pushing the boundaries of video editing ✂️— all without supervised data. Can’t wait to show you what’s next! 🚀🎬

Andrew Brown (@andrew__brown__)'s Twitter Profile Photo

So how did we get to these amazing videos for Meta Movie Gen? One of the things I’m proudest of is that we released a very detailed technical report (ai.meta.com/research/movie……). Let's dive into a technical summary of what we did & learnt 🧵 1/n x.com/AIatMeta/statu…

Kevin Chih-Yao Ma (@chihyaoma)'s Twitter Profile Photo

Movie Gen claims to be the state-of-the-art in text-to-video generation, outperforming Sora, Kling, Gen3, and more. But how can you trust the results? Today, we're releasing 1003 videos and their prompts - no cherry-picking allowed. Our goal? To set a new standard for evaluating

Ishan Misra (@imisra_)'s Twitter Profile Photo

Two exciting updates on Movie Gen: (1) MovieGenBench, containing thousands of *random* generations for benchmarking video/audio tasks :) (2) Folks in Hollywood (Casey Affleck, Blumhouse Productions) took Movie Gen for a spin: ai.meta.com/blog/movie-gen…

Danny Trinh (@dtrinh)'s Twitter Profile Photo

VERY excited about the era of generative AR we're bringing to life. Check out this preview! It's early but so damn promising — this isn't "AI slop"... it's unlocking Creators' imaginations on their own videos. Change your wardrobe, scene, lighting etc. with little expertise. PS

Amit Zohar (@amit_zhr)'s Twitter Profile Photo

🚀 Our latest work, VideoJAM, introduces a new method to enhance motion in any T2V model, significantly improving its motion and physics. We also train a DiT model that, combined with VideoJAM, achieves a new SOTA in motion generation! 🔥 hila-chefer.github.io/videojam-paper…

AK (@_akhaliq)'s Twitter Profile Photo

Meta just dropped VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models, with comparisons to OpenAI Sora and Kling.

Lucas Beyer (bl16) (@giffmana)'s Twitter Profile Photo

This is extremely cool! They find diffusion loss is not very sensitive to motion. Thus they fine-tune videogen models with additional explicit motion prediction, making the model generate much more coherent videos. Also, Hila has been doing consistently good work, follow her!
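
To make the mechanism Lucas describes concrete, here is a minimal sketch of such a joint objective: the usual denoising (diffusion) loss plus an explicit motion-prediction term. The two-headed model, the optical-flow target, and the loss weighting are assumptions for illustration, not the released VideoJAM implementation.

```python
# Minimal sketch (assumed, not the released VideoJAM code): augment the standard
# denoising loss with an explicit motion-prediction loss during fine-tuning.
import torch
import torch.nn.functional as F

def add_noise(x0, noise, alpha_bar_t):
    # Standard DDPM-style forward process: x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps
    return alpha_bar_t.sqrt() * x0 + (1.0 - alpha_bar_t).sqrt() * noise

def joint_appearance_motion_loss(model, video, motion_target, alpha_bar_t, motion_weight=1.0):
    """video: (B, C, T, H, W); motion_target: a precomputed motion signal (e.g. optical flow).

    `model` is assumed to return two predictions: the noise and the motion signal.
    """
    noise = torch.randn_like(video)
    noisy_video = add_noise(video, noise, alpha_bar_t)
    pred_noise, pred_motion = model(noisy_video, alpha_bar_t)  # two-headed model (assumption)
    appearance_loss = F.mse_loss(pred_noise, noise)            # usual diffusion objective
    motion_loss = F.mse_loss(pred_motion, motion_target)       # explicit motion term
    return appearance_loss + motion_weight * motion_loss
```

The extra term is the point of the tweet: the plain denoising loss alone puts little pressure on the model to get motion right, so the fine-tuning signal is supplemented with one that does.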

Roberta Raileanu (@robertarail)'s Twitter Profile Photo

Super excited to share 🧠MLGym 🦾 – the first Gym environment for AI Research Agents 🤖🔬 We introduce MLGym and MLGym-Bench, a new framework and benchmark for evaluating and developing LLM agents on AI research tasks. The key contributions of our work are: 🕹️ Enables the
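
For context on what a "Gym environment" means here, below is a generic gymnasium-style interaction loop. The environment id, the agent interface, and the episode budget are placeholders; MLGym's actual API and task names may differ.

```python
# Generic gymnasium-style episode loop (hypothetical names; MLGym's real API may differ).
import gymnasium as gym

def run_episode(env_id: str, agent, max_steps: int = 50) -> float:
    env = gym.make(env_id)              # env_id is a placeholder, not a real MLGym task id
    obs, info = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = agent.act(obs)         # any agent object exposing an .act(obs) method
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += float(reward)
        if terminated or truncated:
            break
    env.close()
    return total_reward
```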

Michael Hassid (@michaelhassid)'s Twitter Profile Photo

The longer reasoning LLM thinks - the more likely to be correct, right? Apparently not. Presenting our paper: “Don’t Overthink it. Preferring Shorter Thinking Chains for Improved LLM Reasoning”. Link: arxiv.org/abs/2505.17813 1/n
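
One simple way to act on that finding, sketched below: sample several chains, keep only the shortest ones, and answer by majority vote among them. This illustrates the idea in the tweet rather than the paper's exact procedure; `generate_chain` and `extract_answer` are hypothetical wrappers around your own model.

```python
# Hedged sketch: prefer answers that come from shorter reasoning chains.
from collections import Counter
from typing import Callable

def shortest_chain_answer(
    generate_chain: Callable[[str], str],   # hypothetical: samples one reasoning chain
    extract_answer: Callable[[str], str],   # hypothetical: parses the final answer
    prompt: str,
    k: int = 8,                             # number of chains sampled
    m: int = 3,                             # how many of the shortest chains to keep
) -> str:
    chains = sorted((generate_chain(prompt) for _ in range(k)), key=len)
    answers = [extract_answer(chain) for chain in chains[:m]]
    return Counter(answers).most_common(1)[0][0]   # majority vote among the shortest chains
```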
