Zhengfei Kuang (@zfkuang1)'s Twitter Profile
Zhengfei Kuang

@zfkuang1

Ph.D. student at Stanford University

ID: 1158430116998811652

Link: http://zhengfeikuang.com · Joined: 05-08-2019 17:30:37

15 Tweets

151 Followers

90 Following

Jiaman Li (@jiaman01)

Check out our SIGGRAPH Asia 2020 paper: Dynamic Facial Asset and Rig Generation from a Single Scan! Great collaboration with Zhengfei Kuang, Yajie Zhao, Mingming He, Karl Bladin, and Hao Li! Demo: youtube.com/watch?v=VV655f… Paper: arxiv.org/pdf/2010.00560…

Jiaman Li (@jiaman01)

#CVPR Can we estimate human motion from egocentric videos? Check out our Ego-Body Pose Estimation via Ego-Head Pose Estimation (Award Candidate)! w/ Karen and Jiajun Wu. Poster: THU-AM-063. (I will be on Zoom for Q&A!) Project page (code is released!): lijiaman.github.io/projects/egoeg…

Yunzhi Zhang (@zhang_yunzhi)

Check out Stanford-ORB --- a real-world 3D object inverse rendering benchmark --- on Tuesday afternoon at #NeurIPS2023! Dataset & Code available at stanfordorb.github.io.

AK (@_akhaliq)

Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control. Research on video generation has recently made tremendous progress, enabling high-quality videos to be generated from text prompts or images. Adding control to the video generation…

Shengqu Cai (@prime_cai)

We've been exploring a fun and challenging idea: combining top-notch video generation models with multi-view capabilities. This seems like a natural next step after all the amazing video and multi-image generation models. The question was, could we create a multi-view video…

Jiaman Li (@jiaman01)

#ECCV24 We introduce Controllable Human-Object Interaction Synthesis (CHOIS) to generate interactions from text in a 3D scene. w/ Alex, Roozbeh Mottaghi, Jiajun Wu, Xavier Puig, and Karen. Code: github.com/lijiaman/chois… Project page: lijiaman.github.io/projects/chois/

Zhengfei Kuang (@zfkuang1)

Excited to present our work at NeurIPS 2024! 📍 East Exhibit Hall A-C #1803, ⏰ December 11th, 11 AM - 2 PM. With multi-view video diffusion models gaining significant traction recently, we're proud to revisit our pioneering work from earlier this year: Collaborative Video Diffusion.

Jiaman Li (@jiaman01)

🔥 Introducing MVLift: Generate realistic 3D motion without any 3D training data - just using 2D poses from monocular videos! Applicable to human motion, human-object interaction & animal motion. Joint work w/ Jiajun Wu & Karen. 💡 How? We reformulate 3D motion estimation as…

Jiaman Li (@jiaman01)

🤖 Introducing Human-Object Interaction from Human-Level Instructions! The first complete system that generates physically plausible, long-horizon human-object interactions with finger motions in contextual environments, driven by human-level instructions. 🔍 Our approach: - LLMs…