Yuke Zhu (@yukez) 's Twitter Profile
Yuke Zhu

@yukez

Assistant Professor @UTCompSci | Co-Leading GEAR @NVIDIAAI | CS PhD @Stanford | Building generalist robot autonomy in the wild | Opinions are my own

ID: 15751831

Website: https://yukezhu.me · Joined: 06-08-2008 16:25:20

335 Tweets

18.18K Followers

461 Following

Mingyo Seo (@kiwi_sherbet) 's Twitter Profile Photo

Humans, regardless of arm size or handedness, can fasten screws with the same tool: a screwdriver. What if robots could share tools—and skills too? Introducing LEGATO, a cross-embodiment learning framework using a handheld gripper! 🌐 ut-hcrl.github.io/LEGATO 👇 See more🧵(1/5)

Yuke Zhu (@yukez) 's Twitter Profile Photo

For the past two years, Jake Grigsby and I have been exploring how to make Transformer-based RL scale the same way as its supervised learning counterparts. Our AMAGO line of work shows promise for building RL generalists in multi-task settings. Meet him and chat at #NeurIPS2024!

Yuke Zhu (@yukez) 's Twitter Profile Photo

We are accepting research proposals to accelerate Robotics + AI through the NVIDIA Academic Grant Program. Our Edge AI call is seeking projects on GPU simulations, learning-based control, and foundation models for humanoid robotics. Apply by March 31: nvda.ws/3ZNxzuW

Toru (@toruo_o) 's Twitter Profile Photo

Sim2Real RL for Vision-Based Dexterous Manipulation on Humanoids toruowo.github.io/recipe/ TLDR - we train a humanoid robot with two multifingered hands to perform a range of dexterous manipulation tasks with robust generalization and high performance, without human demonstrations :D

Yuke Zhu (@yukez) 's Twitter Profile Photo

Sharing the slide deck and video recording of my talk "Data Pyramid and Data Flywheel for Robotic Foundation Models" at Princeton Robotics Symposium last November. I discussed the vision of training foundation models on diverse data sources and refining them during deployments.

Yuke Zhu (@yukez) 's Twitter Profile Photo

Our DexMimicGen work on automated robotic data generation is officially accepted at #ICRA2025! We released our simulations, datasets, and training scripts for reproducibility. Check them out here: github.com/NVlabs/dexmimi…

NVIDIA Newsroom (@nvidianewsroom) 's Twitter Profile Photo

At #GTC25: NVIDIA announced the Isaac GR00T N1 model, the world's first open, customizable foundation model to bring generalized skills and reasoning to #humanoidrobots. Available now to #robotics developers worldwide. ➡️ nvda.ws/41YHT2l

Yuke Zhu (@yukez) 's Twitter Profile Photo

Thrilled to announce GR00T N1, our open foundation model for generalist humanoid robots! GR00T N1 adopts a dual-system design, leverages the entire data pyramid for model training, and supports various robot embodiments. GR00T N1 embodies years of fundamental research, spanning…

RoboPapers (@robopapers) 's Twitter Profile Photo

Full episode dropping tomorrow! Geeking out with Toru on toruowo.github.io/recipe/ (Sim-to-Real Reinforcement Learning for Vision-Based Dexterous Manipulation on Humanoids). Co-hosted by Chris Paxton & Michael Cho - Rbt/Acc

Yuke Zhu (@yukez) 's Twitter Profile Photo

The research collaboration between NVIDIA GEAR and 1X has been a blast! I am amazed by how quiet, compliant, and friendly the Neo robot is. I look forward to seeing more incredible things we will build together. 1x.tech/discover/1X-NV…

Yuke Zhu (@yukez) 's Twitter Profile Photo

Sim-and-real co-training is the key technique behind GR00T's ability to learn across the data pyramid. Our latest study shows how synthetic and real-world data can be jointly leveraged to train robust, generalizable vision-based manipulation policies. 📚 co-training.github.io

Chris Paxton (@chris_j_paxton) 's Twitter Profile Photo

I really like the term "digital cousin" instead of digital twin. Lots of things will likely never be the same between sim and real data, which is why they're complementary

Yuke Zhu (@yukez) 's Twitter Profile Photo

We took a short break from robotics to build a human-level agent to play Competitive Pokémon. Partially observed. Stochastic. Long-horizon. Now mastered with Offline RL + Transformers. Our agent, trained on 475k+ human battles, hits the top 10% on Pokémon Showdown leaderboards.

Yuke Zhu (@yukez) 's Twitter Profile Photo

Excited to see Jensen share our recent progress on the N1.5 foundation model and GR00T Dreams from the NVIDIA GEAR team! Several team members and I will be at #ICRA2025. Come chat with us about building generalist robot autonomy and ways to work together on this grand mission!

Joel Jang (@jang_yoel) 's Twitter Profile Photo

Introducing 𝐃𝐫𝐞𝐚𝐦𝐆𝐞𝐧! We got humanoid robots to perform totally new 𝑣𝑒𝑟𝑏𝑠 in new environments through video world models. We believe video world models will solve the data problem in robotics. Bringing the paradigm of scaling human hours to GPU hours. Quick 🧵

Chi Jin (@chijinml) 's Twitter Profile Photo

Really excited to share that our Pokémon competition is accepted at NeurIPS 2025!🔥 Get ready for an epic AI challenge featuring both speedruns🎮 and multi-player Pokémon battles⚔️---more details coming soon!

Chris Paxton (@chris_j_paxton) 's Twitter Profile Photo

Using simulation data is going to be an important part of how we train general-purpose robot models for real-world applications. But how can we best leverage simulation, especially since building simulation is hard? In this work from the NVIDIA GEAR lab (and others!), they…

Huihan Liu (@huihan_liu) 's Twitter Profile Photo

Meet Casper👻, a friendly robot sidekick who shadows your day, decodes your intents on the fly, and lends a hand while you stay in control! Instead of passively receiving commands, what if a robot actively sensed what you need in the background and stepped in when confident? (1/n)

Joel Jang (@jang_yoel) 's Twitter Profile Photo

🚀 GR00T Dreams code is live! NVIDIA GEAR Lab's open-source solution for robotics data via video world models. Fine-tune on any robot, generate 'dreams', extract actions with IDM, and train visuomotor policies with LeRobot datasets (GR00T N1.5, SmolVLA). github.com/NVIDIA/GR00T-D…