Erwin Coumans 🇺🇦 (@erwincoumans) 's Twitter Profile
Erwin Coumans 🇺🇦

@erwincoumans

NVIDIA, Physics Simulation, Robotics Learning

ID: 2409516403

Joined: 24-03-2014 20:21:50

2.2K Tweets

5.5K Followers

165 Following

Zhou Xian (@zhou_xian_) 's Twitter Profile Photo

We’re excited to share some updates on Genesis since its release: 1. We made a detailed report benchmarking Genesis's speed and comparing it with other simulators (github.com/zhouxian/genes…) 2. We’ve launched a Discord channel and a WeChat group to foster communication

Unitree (@unitreerobotics) 's Twitter Profile Photo

Unitree G1 Bionic: Agile Upgrade 🥰 Unitree rolls out frequent updates nearly every month. This time, we present to you the smoothest walking and humanoid running in the world. We hope you like it. #Unitree #AGI #EmbodiedAI #AI #Humanoid #Bipedal #WorldModel

DeepSeek (@deepseek_ai) 's Twitter Profile Photo

🚀 DeepSeek-R1 is here! ⚡ Performance on par with OpenAI-o1 📖 Fully open-source model & technical report 🏆 MIT licensed: Distill & commercialize freely! 🌐 Website & API are live now! Try DeepThink at chat.deepseek.com today! 🐋 1/n

Ziwen Zhuang (@ziwenzhuang_leo) 's Twitter Profile Photo

Embrace Collisions: Humanoid Shadowing for Deployable Contact-Agnostics Motions. The humanoid keeps its torso upright for too long. It should be able to contact environments with all its body parts. However, MPC-based planning and sim-to-real methods often fail on deployment. (1/3)

Erwin Coumans 🇺🇦 (@erwincoumans) 's Twitter Profile Photo

A first test using the Alt-Bionics Surge robotic hands on the Unitree G1 humanoid, controlled by the MANUS™ MetaGloves Pro. Worked out of the box. Thanks, Ryan and team, and Maarten. youtube.com/shorts/wVI2IAa…

Erwin Coumans 🇺🇦 (@erwincoumans) 's Twitter Profile Photo

Scalable Real-to-Sim: Automated scanning of objects using a camera + BundleSDF, with a robot arm measuring their inertial parameters. This is so cool! scalable-real2sim.github.io (thanks Bowen Wen for the link!)

NVIDIA Robotics (@nvidiarobotics) 's Twitter Profile Photo

Register for the Google DeepMind session at #GTC25 to learn how Google integrates core NVIDIA technologies like Warp to improve simulation performance for #humanoid robotics development. Register Now ➡️ nvda.ws/3Xya1YM

Carolina Parada (@parada_car88104) 's Twitter Profile Photo

📣MuJoCo announcement 📣 Thrilled to share that Google DeepMind has unveiled MuJoCo-Warp at NVIDIA's #GTC25! 🚀 We've expanded our open-source MuJoCo simulator with MuJoCo-Warp, leveraging NVIDIA’s Warp framework for incredible acceleration. This marks a significant step in

Boston Dynamics (@bostondynamics) 's Twitter Profile Photo

Atlas is demonstrating reinforcement learning policies developed using a motion capture suit. This demonstration was developed in partnership with Boston Dynamics and RAI Institute.

NVIDIA AI Developer (@nvidiaaidev) 's Twitter Profile Photo

Spatial AI is increasingly important, and the newest papers from #NVIDIAResearch, 3DGRT and 3DGUT, represent significant advancements in enabling researchers and developers to explore and innovate with 3D Gaussian Splatting techniques. 💎 3DGRT (Gaussian Ray Tracing) ➡️

Roger Qiu (@rogerqiu_42) 's Twitter Profile Photo

Diverse training data leads to a more robust humanoid manipulation policy, but collecting robot demonstrations is slow. Introducing our latest work, Humanoid Policy ~ Human Policy. We advocate human data as a scalable data source for co-training egocentric manipulation policy.⬇️

Erwin Coumans 🇺🇦 (@erwincoumans) 's Twitter Profile Photo

My G1 humanoid home setup to test NVIDIA's new GROOT-N1 model, led by Jim Fan and Yuke Zhu. Sitting in a wheelchair to focus on manipulation, with a test data set on Huggingface: huggingface.co/spaces/lerobot… nvidianews.nvidia.com/news/nvidia-is… With Alt-Bionics hands and MANUS™ gloves.

Yuke Zhu (@yukez) 's Twitter Profile Photo

Sim-and-real co-training is the key technique behind GR00T's ability to learn across the data pyramid. Our latest study shows how synthetic and real-world data can be jointly leveraged to train robust, generalizable vision-based manipulation policies. 📚 co-training.github.io

Xuxin Cheng (@xuxin_cheng) 's Twitter Profile Photo

Meet 𝐀𝐌𝐎 — our universal whole‑body controller that unleashes the 𝐟𝐮𝐥𝐥  kinematic workspace of humanoid robots to the physical world. AMO is a single policy trained with RL + Hybrid Mocap & Trajectory‑Opt. Accepted to #RSS2025. Try our open models & more 👉

John Carmack (@id_aa_carmack) 's Twitter Profile Photo

The full video of my Upper Bound 2025 talk about our research directions should be available at some point, but here are my slides: docs.google.com/presentation/d… And here are the notes I made while preparing, which are more extensive than what I had time to say:

Benjamin F Spector (@bfspector) 's Twitter Profile Photo

(1/5) We’ve never enjoyed watching people chop Llamas into tiny pieces. So, we’re excited to be releasing our Low-Latency-Llama Megakernel! We run the whole forward pass in a single kernel. Megakernels are faster & more humane. Here’s how to treat your Llamas ethically: (Joint

NVIDIA Robotics (@nvidiarobotics) 's Twitter Profile Photo

Join experts Yuval Tassa from Google DeepMind and Miles Macklin from NVIDIA at #GTCParis to learn about Newton, an open-source, extensible physics engine for robotics simulation, co-developed by Disney Research, Google DeepMind and NVIDIA. Gain insights into breakthroughs in

Adithya Murali (@adithya_murali_) 's Twitter Profile Photo

I’m thrilled to announce that we just released GraspGen, a multi-year project we have been cooking at NVIDIA Robotics 🚀 GraspGen: A Diffusion-Based Framework for 6-DOF Grasping Grasping is a foundational challenge in robotics 🤖 — whether for industrial picking or