Ziyi Wu (@dazitu_616)'s Twitter Profile
Ziyi Wu

@dazitu_616

CS PhD @UofT | Research Intern @Snap | Prev Student Researcher @Google, Undergrad @Tsinghua_Uni

ID: 1419334664464961540

Link: https://wuziyi616.github.io/ · Joined: 25-07-2021 16:32:19

139 Tweets

864 Followers

452 Following

Koichi Namekata (@koichi_n_):

Thrilled to announce that SG-I2V has been accepted at #ICLR2025! Huge thanks to the collaborators, reviewers, and ACs. Looking forward to presenting this in Singapore!

Yash Kant (@yash2kant):

🚀 Introducing Pippo – our diffusion transformer pre-trained on 3B human images and post-trained with 400M high-res studio images! ✨ Pippo can generate 1K-resolution turnaround video from a single iPhone photo! 🧵👀 Full deep-dive thread coming up next!

Yash Kant (@yash2kant):

📢📢 Last week, we announced Pippo - a DiT that generates 1K-res. turnarounds from a single iPhone photo (even occluded ones)! Here's the deep-dive thread unpacking everything we learned! ⬇️

Ivan Skorokhodov (@isskoro):

In the past 1.5 weeks, two papers from two different research groups have appeared that develop exactly the same (and embarrassingly simple) trick to improve convergence of image/video diffusion models by 20-100+% (sic!) arxiv.org/abs/2502.14831 arxiv.org/abs/2502.09509

Igor Gilitschenski (@igilitschenski):

I am excited to share that my students Kai He, Yash Kant, and Ziyi Wu, along with Toshiya Yura, our previous research visitor from Sony, will present papers at #CVPR2025. 🎉 Check out their amazing work! 1/🧵

Xuanchi Ren (@xuanchi13):

🚀Excited to introduce GEN3C #CVPR2025, a generative video model with an explicit 3D cache for precise camera control. 🎥It applies to multiple use cases, including single-view and sparse-view NVS🖼️ and challenging settings like monocular dynamic NVS and driving simulation🚗.

Yang Zheng (@yang_zheng18):

Can we reconstruct relightable human hair appearance from real-world visual observations? We introduce GroomLight, a hybrid inverse rendering method for relightable human hair appearance modeling. syntec-research.github.io/GroomLight/

Koichi Namekata (@koichi_n_):

I'll be presenting our #ICLR2025 poster "SG-I2V: Self-Guided Trajectory Control in Image-to-Video Generation" (#189, Thu afternoon), trajectory-conditioned i2v generation w/o fine-tuning. Feel free to drop by if you are interested in exploring the zero-shot capabilities of VDMs!

Kai He (@kai__he):

🚀 Introducing UniRelight, a general-purpose relighting framework powered by video diffusion models. 🌟UniRelight jointly models the distribution of scene intrinsics and illumination, enabling high-quality relighting and intrinsic decomposition from a single image or video.

LiSA (@lisa_olive):

Thanks NYC 🗽🍎 It was my first show in New York! Thank you for singing so many songs with me♡ Today was Another Great Day!! #LiSA_NATOUR Next is NYC day 2✌︎ See U tomorrow〜🗽♥️