Ning Yu (@realningyu)'s Twitter Profile
Ning Yu

@realningyu

Lead Research Scientist at Netflix Eyeline Studios. Ex-Salesforce. Ex-NVIDIA. Ex-Adobe. Joint PhD at UMD & MPI. Leading efforts in visual and multimodal GenAI.

ID: 1204845487263752198

Website: https://ningyu1991.github.io/
Joined: 11-12-2019 19:31:13

105 Tweets

828 Followers

368 Following

Ning Yu (@realningyu)'s Twitter Profile Photo

Thrilled to present 💡DifFRelight💡 at #SIGGRAPHAsia2024, a novel-view facial dynamic relighting framework based on image diffusion editing, dynamic Gaussian Splatting conditioning, integrated lighting control, and our in-house light stage hardware.
Kudos to the team: Mingming

Thrilled to present 🥷Shadowcast🥷 at #NeurIPS2024 #vancouver, the first poisoning attack against #VLMs that generates coherent but mind-bending misinformation, underscoring the necessity for responsible and trustworthy #VLM.
With <a href="/Yuancheng_Xu0/">Yuancheng Xu</a>, Jiarui Yao, <a href="/ManliShu/">Manli Shu</a>,

Open-source data & benchmarks drive AI innovation. We are thrilled to present 🚀T2Vs Meet VLMs🚀 at #neurips2024 D&B Track. We introduce a compressive image+video dataset spanning 10 harmfulness concepts, powered by advanced image/video diffusion models and vision-language agents

Happening today in 8h! We are presenting 🥷Shadowcast🥷 at #NeurIPS2024 #vancouver East Exhibit Hall A-C #4401, Dec 11th 11am-2pm PST. Shadowcast is the first poisoning attack against #VLMs that generates coherent but mind-bending misinformation, underscoring the necessity for responsible and trustworthy #VLM.

🗓️Happening today in 11h! We are presenting 🚀T2Vs Meet VLMs🚀 at #NeurIPS2024 #vancouver West Ballroom A-D #5101, Dec 12th 11am-2pm PST. Open-source data & benchmarks drive AI innovation. We introduce a compressive image+video dataset spanning 10 harmfulness concepts, powered by advanced image/video diffusion models and vision-language agents

Thrilled to present at #CVPR2025 the first reference-based 3D-aware image 🔥Triplane Editing🔥, enabling high-quality human/animal face editing, full-body virtual try-on, and more. Our method achieves state-of-the-art results over latent, text & image-guided 2D/3D-aware GAN &

Excited to share our recent research 💡Lux Post Facto💡 will be presented at #CVPR2025! We’ve developed a new method to relight portrait videos in the wild with cinematic quality and temporal consistency — all in post-production. Big shoutout to <a href="/myq_1997/">Yiqun Mei</a> and the amazing team at

The first project I led at Netflix Eyeline Studios is headed to #CVPR2025 with 5,5,4 review scores: 🌊Go-with-the-Flow🌊 warps noise for effortless motion control in video diffusion — no pipeline changes, same compute. Direct camera/object motion, transfer movement between
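The core idea the tweet names — warping noise to steer motion — can be pictured with a tiny sketch: advect the diffusion model's initial Gaussian noise along an optical-flow field so the noise (and hence the generated content) follows the intended motion. This is an illustrative nearest-neighbor backward warp only, not the paper's actual algorithm (which takes care to preserve the noise distribution); the function `warp_noise`, the array shapes, and the uniform test flow are all assumptions for the example.

```python
import numpy as np

def warp_noise(noise, flow):
    """Backward-warp a per-frame noise field along an optical-flow field
    (nearest-neighbor lookup), so the next frame's noise follows scene
    motion. flow[..., 0] is the horizontal (x) displacement in pixels,
    flow[..., 1] the vertical (y) displacement."""
    h, w = noise.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Each output pixel samples the noise at its motion source,
    # clamped to the image bounds.
    src_y = np.clip(np.rint(ys - flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs - flow[..., 0]).astype(int), 0, w - 1)
    return noise[src_y, src_x]

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
flow = np.zeros((64, 64, 2))
flow[..., 0] = 3.0  # uniform 3-pixel motion to the right
warped = warp_noise(noise, flow)
# With a uniform integer shift, the warped noise is a translated copy.
assert np.allclose(warped[:, 3:], noise[:, :-3])
```

Because only the sampling of the initial noise changes, a sketch like this slots in front of an unmodified video diffusion sampler, which matches the tweet's "no pipeline changes, same compute" claim.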

If this kind of work excites you, we’re hiring at Scanline VFX - Powered by Netflix Eyeline Studios! Let’s build AI tools together for filmmaking. Research intern (open to remote, not limited to summer): jobs.lever.co/scanlinevfx/2b… Research scientist (open to remote): jobs.lever.co/scanlinevfx/70…

📚New survey out📚: “Survey of Video Diffusion Models: Foundations, Implementations, and Applications”. 85 pages, 25K words, 500 references. The most comprehensive, up-to-date, and fine-grained survey to date in this fast-evolving field, from core techniques to

Attending #CVPR2025 now in #Nashville — couldn’t wait to share what we’ve been working on at <a href="/eyelinestudios/">Eyeline Studios</a>, and catch up with the research crowd!

⭐ My slots @ Eyeline booth #1209 with "Go-with-the-Flow" live demos with <a href="/pablosalamancal/">Pablo Salamanca</a>
🗓️ June 13th Fri 2-4pm CDT
🗓️ June 15th

🌊Go-with-the-Flow🌊, motion-controllable video diffusion models, is selected as the Oral presentation at #CVPR2025 (top 0.8%). Come to watch <a href="/RyanBurgert/">Ryan Burgert</a>'s presentation (happening in 5min) and stop by our poster for live demos and Q&A. Our live UI is also online (done by

Grateful to everyone who stopped by our oral presentation and posters during the whirlwind of #CVPR2025 — we know you had plenty of options!

I'm at the <a href="/eyelinestudios/">Eyeline Studios</a> Booth (1209) from now–3pm today. Come say hi — I’d love to chat about our research philosophy and how it ties

🔊 #ICCV2025 acceptance: ⚡FlashDepth⚡ estimates accurate and consistent depth for 2K-resolution videos in a real-time (24 FPS) streaming fashion on a single A100 GPU.
✊ Kudos to the teamwork led by our intern <a href="/gene_ch0u/">Gene Chou</a> at <a href="/eyelinestudios/">Eyeline Studios</a>.

👉 Join us to be the next one