Yan Zhang (@cnsdqzyz) 's Twitter Profile
Yan Zhang

@cnsdqzyz

Scientist@meshcapade, human foundation models, my own opinions.

ID: 1154344211422093312

Link: https://yz-cnsdqz.github.io/ · Joined: 25-07-2019 10:54:41

255 Tweets

1.1K Followers

1.1K Following

Andrew Davison (@ajddavison) 's Twitter Profile Photo

All researchers should fight against this. Every week I try to persuade my students that top papers often have few quantitative results. With work that's new, important, and clearly qualitatively different (zero to one!), you don't need quantitative results. Demos not tables!

Michael Black (@michael_j_black) 's Twitter Profile Photo

It’s Europe’s time to shine in AI. Meshcapade is a great example of AI technology invented in Europe and commercialized in Europe. The amazing Naureen Mahmood explains how we’re creating 3D virtual humans by analyzing the behavior of real humans at scale and how this technology

Meshcapade (@meshcapade) 's Twitter Profile Photo

Missed us at Game Developers Conference? Watch our full talk online! Part 1: Meshcapade + Unreal Engine
- Motion capture with MoCapade: Our single camera mocap solution
- Bring your motion straight into Unreal Engine
- Seamless retargeting to any character in your 3D scene
- Bonus content:

Meshcapade (@meshcapade) 's Twitter Profile Photo

Game Developers Conference Unreal Engine Part 2: Meshcapade + Andrey V. Antonov
- Create crowds of realistic humans with varying body shapes in any scene, on any terrain, with any motion
- Generate 100s of character variations inside Houdini with our #SMPL model
#GameDev #UnrealEngine #Houdini #Mocap #DigitalHumans

Meshcapade (@meshcapade) 's Twitter Profile Photo

Go behind the scenes with us at Game Developers Conference 2025! 🎮 Game Developers Conference From setup to showtime, here’s a look at how we brought our motion capture and digital human tech to life. We loved sharing our work, meeting so many brilliant minds, and seeing the excitement

Yan Zhang (@cnsdqzyz) 's Twitter Profile Photo

Thanks for posting! In short, it tries to estimate camera poses of small-baseline videos. Namely, the camera motion has small translations, which is common in e.g. TikTok. We find MonST3R does not perform well here, and DINOv2-powered Gaussian splatting is consistently helpful.

Qianli Ma (@qianli_m) 's Twitter Profile Photo

The 2nd 3D HUMANS workshop is back at #CVPR2025! 📍Join us on June 12 afternoon in Nashville for a 2025 perspective on 3D human perception, reconstruction & synthesis. 🖼️ Got a CVPR paper on 3D humans? Nominate it to be featured in our poster session! 👉 tinyurl.com/3d-humans-2025

Meshcapade (@meshcapade) 's Twitter Profile Photo

🔥 Heading to IEEE ICRA? 🎥🤖 Join our Senior ML Scientist, Muhammed Kocabas, at the Vision-Based Wearable Robotics Workshop to learn all about MoCapade. MoCapade captures full-body motion from any video for imitation learning and for generating synthetic data in simulation

Meshcapade (@meshcapade) 's Twitter Profile Photo

Listen up, #CVPR2025, here we come! 🎉

Papers submitted: 5
Papers accepted: 5
Papers being deployed into products: 5!! 🚀
Tiny team. Outsized impact 💯

Come talk to us about our 100% success rate and be the first to try the live demos of our latest hits ☺️☺️✨

Booth 1333, June

Meshcapade (@meshcapade) 's Twitter Profile Photo

Final video in our #CVPR2025 series: PICO 🤝📦 By Alpár Cseke, Shashank Tripathi, Sai Kumar Dwivedi, Arjun S. Lakshmipathy, Agniv Chatterjee, Michael J. Black, and Dimitrios Tzionas, in collaboration with the Max Planck Institute for Intelligent Systems, Carnegie

Michael Black (@michael_j_black) 's Twitter Profile Photo

One tweet. All the CVPR2025 projects that I’m part of. In one convenient place. Come see us in Nashville!

Conference papers:

PromptHMR: Promptable Human Mesh Recovery
yufu-wang.github.io/phmr-page/

DiffLocks: Generating 3D Hair from a Single Image using Diffusion Models

Meshcapade (@meshcapade) 's Twitter Profile Photo

We’re bringing production-ready digital humans to #CVPR2025. This isn’t research anymore. Meshcapade powers real-world pipelines across: 🤖 Robotics: safe HRI training and perception models 🧠 AI: synthetic data for CV model training 🏥 Healthcare: longitudinal body shape

Michael Black (@michael_j_black) 's Twitter Profile Photo

I’ll be giving several workshop talks at CVPR this year. Because I'm a glutton for punishment, there will be very little overlap between most of these talks. I’ll cover several of our CVPR2025 papers including PromptHMR, PICO, InteractVLM, ChatHuman, ChatGarment, DiffLocks as

Meshcapade (@meshcapade) 's Twitter Profile Photo

Where digital humans are going beyond research → into production: 🤖 Robotics, synthetic HRI data for safety training 🧠 AI Labs, synthetic human data for vision models 🏥 Healthcare, longitudinal body shape tracking for telehealth 🎮 Gaming & XR, real-time 3D mocap and motion

Qianli Ma (@qianli_m) 's Twitter Profile Photo

If you're at #CVPR2025, come by the Workshop on 3D Human Understanding tomorrow and meet all our amazing speakers in person!
🕑 June 12, 1:50 PM
📍 Room 110b
👤 Angjoo Kanazawa, Michael Black, Gerard Pons-Moll, Sebastian Starke, Hanbyul (Han) Joo
🌐 tinyurl.com/3d-humans-2025

Meshcapade (@meshcapade) 's Twitter Profile Photo

The #CVPR2025 EXPO starts TOMORROW.
👉 Real-time 3D motion capture and motion avatar generation demos
👉 SMPL developer cheat sheets
👉 Synthetic data pipelines for AI & perception teams
👉 Full on-prem deployment capabilities
➔ Booth 1333, #CVPR2025. Meshcapade, let’s meet

Meshcapade (@meshcapade) 's Twitter Profile Photo

Farewell #CVPR2025, it was fantastic being back!! Huge thanks to everyone who stopped by at the Meshcapade booth, our poster presentations and workshops 🥰 Still can’t believe our tiny team of intrepid scientific explorers ended up being part of ALL of that! 🚀 We had incredible

Michael Black (@michael_j_black) 's Twitter Profile Photo

Physical intelligence for humanist robots. At Meshcapade we've built the foundational technology for the capture, generation, and understanding of human motion. This blog post explains how this enables robot learning at scale. medium.com/@black_51980/p… perceiving-systems.blog/en/news/toward…