Chen Tessler (@chentessler)'s Twitter Profile
Chen Tessler

@chentessler

Research Scientist @NVIDIAAI ; Training simulated robots 🤖 ; Reinforcement learning PhD @TechnionLive 🇮🇱. Views are my own.

ID: 532126121

linkhttp://chen.tessler.tilda.ws/ calendar_today21-03-2012 10:52:50

1.1K Tweets

1.1K Followers

592 Following

Chen Tessler (@chentessler)

Early termination with AMP (adversarial motion priors) -- is there any scheme that works for all motions? Early termination is crucial to ensure the policy doesn't waste time in irrelevant regions. With proper early termination, you can learn in 1-2 hours. Without it... might fail even
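A minimal sketch of what threshold-based early termination can look like in a motion-tracking rollout -- illustrative only, not the author's actual scheme (the function name and threshold are assumptions):

```python
import numpy as np

def should_terminate(char_pos, ref_pos, threshold=0.5):
    """End the episode once the character drifts too far from the reference.

    Cutting off failed rollouts early keeps the replay data concentrated
    in states the policy can actually recover from.
    """
    deviation = float(np.linalg.norm(np.asarray(char_pos) - np.asarray(ref_pos)))
    return deviation > threshold
```

In a rollout loop, the environment would reset as soon as `should_terminate` fires, instead of simulating the rest of a clearly failed attempt.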

Chen Tessler (@chentessler)

Just like DeepMimic, these methods from "animation" will slowly creep into robotics. First show it works in sim, then handle the sim-to-real challenges. Amazing results, Liang Pan!

Chen Tessler (@chentessler)

I got the G1 to cartwheel (in sim), but not to perform a back-handspring yet. Probably a combination of algorithm (reward/termination/structure and/or L2-tracking vs. discriminative) and retargeting not being fine-tuned well enough. I'd bet the real G1 can't cartwheel on the static
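The "L2-tracking vs. discriminative" distinction above is about the reward signal. An explicit L2 tracking reward might look like this sketch (scale and shaping are illustrative assumptions):

```python
import numpy as np

def l2_tracking_reward(pose, ref_pose, scale=2.0):
    # Explicit tracking: reward peaks at 1 when the pose matches the
    # reference frame exactly and decays with the squared error.
    err = float(np.sum((np.asarray(pose) - np.asarray(ref_pose)) ** 2))
    return float(np.exp(-scale * err))
```

A discriminative (AMP-style) reward instead scores how realistic a transition looks to a learned discriminator, so it never compares against one specific reference frame -- which changes what motions are easy or hard to learn.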

Siyuan Huang (@siyuanhuang95) 's Twitter Profile Photo

📢📢📢 Excited to release ManipTrans: Efficient Dexterous Bimanual Manipulation Transfer via Residual Learning (CVPR25). 🤏🤙✌️With ManipTrans, we can transfer dexterous manipulation skills into robotic hands in simulation and deploy them on a real robot, using a residual policy
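The residual-policy idea ManipTrans describes can be sketched in one line: a base policy provides a coarse action and a small learned policy corrects it. This is a generic illustration of residual learning, not the paper's implementation:

```python
import numpy as np

def residual_action(base_policy, residual_policy, obs):
    # The base policy gives a coarse action (e.g. retargeted from human
    # hand data); the residual policy learns a small correction on top,
    # which is typically much easier than learning from scratch.
    return np.asarray(base_policy(obs)) + np.asarray(residual_policy(obs))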

Chen Tessler (@chentessler)

The G1 with a demo walking motion is now live in github.com/nvlabs/protomo… Currently with the fixed hand. Will eventually integrate more "forms" (no hand, actuated hand, etc.).

Chen Tessler (@chentessler)

Another robotics company is out in the open! Refine is focusing on industrial use cases and succeeds at super-precise tasks, such as board assembly. Their solution is already deployed and working, fully autonomous, in real production lines. youtube.com/watch?v=d_0f_H…

Chen Tessler (@chentessler)

Data distribution matters. Even in imitation learning.
30-40% speedup by ensuring proper data coverage.

Training a motion tracker to reconstruct the entire AMASS dataset. When it fails, what should you do?

Turns out that if you re-sample the motion & time, you're introducing
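One simple way to bias sampling toward failed motions while keeping coverage of the whole dataset -- a hypothetical sketch, not the author's actual scheme:

```python
import random

def sample_motion(motions, fail_counts):
    # Motions that failed recently get proportionally more weight, so the
    # tracker revisits hard clips without abandoning the easy ones.
    weights = [1.0 + fail_counts.get(m, 0) for m in motions]
    return random.choices(motions, weights=weights, k=1)[0]
```

The additive `1.0` keeps every motion's probability non-zero, so coverage of the full dataset is preserved even as hard clips dominate the batch.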
Chen Tessler (@chentessler)

I think the policy I'm currently training has reached 100% tracking success rate on the AMASS test set (small tweaks here and there, nothing fancy). So either we need to re-define what success means (tighter tracking?) and/or we need harder motions to evaluate on.
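"Re-defining success" usually just means tightening a threshold in the metric. A sketch of what such a metric can look like (the threshold and max-error criterion are illustrative assumptions):

```python
import numpy as np

def tracking_success(per_frame_errors, threshold=0.5):
    # A clip counts as "tracked" only if every frame stays under the
    # error threshold; shrinking the threshold redefines success and
    # can turn a saturated 100% benchmark back into a useful one.
    return bool(np.max(per_frame_errors) < threshold)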

Zhengyi "Zen" Luo (@zhengyiluo)

🎓 Excited to defend my PhD thesis “Learning Universal Humanoid Control” at CMU this Friday! From scalable motion imitators to visual dexterous whole-body policies — it’s been a wild ride 🤖✨ 📅 April 25, 2025 📍 CMU RI & online 🔗 cs.cmu.edu/calendar/18255…

Edward Johns (@ed__johns)

Vitalis Vosylius and I were really honoured today to win the Best Paper Award at the ICLR 2025 Robot Learning Workshop, for our paper "Instant Policy". The video below shows Instant Policy in action… A single demonstration is all you need! See: tinyurl.com/4e3vn4x3

Yunzhu Li (@yunzhuliyz)

Back from an exciting week at #ICLR2025 in Singapore!

Huge thanks to Lin Shao @ NUS for hosting me -- had a blast sharing thoughts on how foundation models can (and can't yet) power robotic manipulation.

🎥 Talk here: youtu.be/wGwHht1o0M0

Chen Tessler (@chentessler)

michelerocca.github.io/projects/polic… Really cool work on combining reference motions with hypernetworks 🤯
1. Train a tiny policy for each individual motion (33k params each).
2. Train a conditional diffusion model to predict the policy weights.
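The core mechanics of step 2 -- treating a policy's weights as a flat vector that another model can generate -- can be sketched like this (layer sizes are illustrative, not the paper's 33k-parameter policies):

```python
import numpy as np

def policy_shapes(obs_dim=32, hidden=64, act_dim=12):
    # Parameter shapes of a one-hidden-layer policy: w1, b1, w2, b2.
    return [(obs_dim, hidden), (hidden,), (hidden, act_dim), (act_dim,)]

def unflatten(theta, shape_list):
    # Slice a flat weight vector (e.g. the output of a conditional
    # generative model) back into per-layer parameter tensors.
    params, i = [], 0
    for s in shape_list:
        n = int(np.prod(s))
        params.append(theta[i:i + n].reshape(s))
        i += n
    return params

def tiny_policy(params, obs):
    # Run the reconstructed policy on an observation.
    w1, b1, w2, b2 = params
    h = np.tanh(obs @ w1 + b1)
    return np.tanh(h @ w2 + b2)
```

A conditional diffusion model trained over many such flat vectors can then emit the weights for an unseen motion, which `unflatten` turns back into a runnable policy.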

Arthur Allshire (@arthurallshire)

Our new system trains humanoid robots using data from cell-phone videos, enabling skills such as climbing stairs and sitting on chairs in a single policy (w/ Hongsuk Benjamin Choi, Junyi Zhang, David McAllister).

Michael Xu (@mxu_cg)

Interested in simulated characters traversing complex terrains?
PARC: Physics-based Augmentation with Reinforcement Learning for Character Controllers
Project page: michaelx.io/parc/index.html
With: Yi Shi, KangKang Yin, and Jason Peng
ACM SIGGRAPH 2025 Conference Paper 1/