Nicklas Hansen (@ncklashansen)'s Twitter Profile
Nicklas Hansen

@ncklashansen

PhD student @UCSanDiego. @nvidia fellow. Prev: @MetaAI, @UCBerkeley, @DTU_Compute.

ID: 1034707719637422081

Website: https://www.nicklashansen.com
Joined: 29-08-2018 07:41:59

292 Tweets

2.2K Followers

644 Following

Yuzhe Qin (@qinyuzhe)'s Twitter Profile Photo

Exciting new work from Nicklas Hansen extending model-based RL to hierarchical settings. We can expect even more powerful capabilities from the model-based approach.

Xiaolong Wang (@xiaolonw)'s Twitter Profile Photo

TD-MPC2 is now applied to Visual Whole-Body Control for Humanoids! Our hierarchical TD-MPC2 generates more natural motion through skill imitation; also, look at how funny the raw TD-MPC2 results are.

Tongzhou Mu 🤖 @ ICLR 2025 (@tongzhou_mu)'s Twitter Profile Photo

High-level "puppeteer" + low-level "tracking" models = Visual Whole-Body Control with Natural Behaviors πŸ”₯Do check it out if you are interested in humanoid control!

An-Chieh Cheng (@anjjei)'s Twitter Profile Photo

🌟 Introducing "🤖SpatialRGPT: Grounded Spatial Reasoning in Vision Language Model" anjiecheng.me/SpatialRGPT SpatialRGPT is a powerful region-level VLM that can understand both 2D and 3D spatial arrangements. It can process any region proposal (e.g., boxes or masks) and provide…

Ignat Georgiev (@imgeorgiev)'s Twitter Profile Photo

We have a new ICML paper! Adaptive Horizon Actor Critic (AHAC). Joint work with Krishnan Srinivasan, Jie Xu, Eric Heiden, and Animesh Garg. AHAC is a first-order model-based RL algorithm that learns high-dimensional tasks in minutes and outperforms PPO by 40%. 🧵 (1/4)

Remi Cadene (@remicadene)'s Twitter Profile Photo

Extremely lucky to host Nicklas Hansen next Wednesday at our paper reading group. It's open to anyone ;) He will present his powerful method for Reinforcement Learning on real and sim robots. It's based on a World Model. So advances in this space like Prism-1 by Wayve can…

Xuxin Cheng (@xuxin_cheng)'s Twitter Profile Photo

Introducing Open-TeleVision 🤖: We need an intuitive and remote teleoperation interface to collect more robot data. TeleVision lets you immersively operate a robot even if you are 3,000 miles away, like in the movie Avatar. Open-sourced!

Xiaolong Wang (@xiaolonw)'s Twitter Profile Photo

Introducing Open-TeleVision: robot-tv.github.io with Fully Autonomous policy video 👇. We can conduct a long-horizon task of inserting 12 cans nonstop without any interruptions. We offer: 🤖 Highly precise and smooth bimanual manipulation. 📺 Active egocentric vision (with…

Runyu Ding (@dngxngxng3)'s Twitter Profile Photo

Introducing Bunny-VisionPro: Our system delivers immersive robot control with both visual and haptic feedback. Using VisionPro and low-cost finger cots with vibration motors, operators can control robots intuitively and immersively, similar to VR gaming. x.com/xiaolonw/statu…

Nicklas Hansen (@ncklashansen)'s Twitter Profile Photo

New work led by Ignat Georgiev! We show that TD-MPC2 world models can be used as differentiable simulators in a multi-task setting, and it even beats an *actual* differentiable simulator. Super fun collaboration and the results are really promising!
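
For readers unfamiliar with the idea of a world model serving as a differentiable simulator: the policy is rolled out inside the (frozen) learned latent dynamics, and the predicted return is backpropagated directly into the policy, i.e. a first-order policy update. The PyTorch sketch below only illustrates this general recipe under toy assumptions; the network sizes and module names are hypothetical, not TD-MPC2's or the paper's actual implementation:

```python
import torch
import torch.nn as nn

LATENT, ACT, HORIZON = 16, 4, 10  # toy sizes, for illustration only

# Stand-ins for a pretrained world model: latent dynamics and reward head.
dynamics = nn.Sequential(nn.Linear(LATENT + ACT, 64), nn.ELU(), nn.Linear(64, LATENT))
reward_fn = nn.Sequential(nn.Linear(LATENT + ACT, 64), nn.ELU(), nn.Linear(64, 1))
policy = nn.Sequential(nn.Linear(LATENT, 64), nn.ELU(), nn.Linear(64, ACT), nn.Tanh())

# Freeze the "simulator" so only the policy is updated; gradients still flow through it.
for p in list(dynamics.parameters()) + list(reward_fn.parameters()):
    p.requires_grad_(False)

optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)


def first_order_policy_update(z0):
    """One first-order update: differentiate predicted return w.r.t. policy parameters."""
    z, total_reward = z0, 0.0
    for _ in range(HORIZON):
        a = policy(z)
        za = torch.cat([z, a], dim=-1)
        total_reward = total_reward + reward_fn(za).sum()
        z = dynamics(za)              # differentiable "simulator" step in latent space
    loss = -total_reward              # maximize the predicted return
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    z0 = torch.randn(32, LATENT)      # a batch of imagined start states
    print(first_order_policy_update(z0))
```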

Xiaolong Wang (@xiaolonw)'s Twitter Profile Photo

Cannot believe this finally happened! Over the last 1.5 years, we have been developing a new LLM architecture, with linear complexity and expressive hidden states, for long-context modeling. The following plots show our model trained on Books scales better (from 125M to 1.3B).

Nicklas Hansen (@ncklashansen)'s Twitter Profile Photo

New fun benchmark for RL researchers to stress test their algorithms! We release code + human driver data for imitation, and benchmark TD-MPC2 in a multi-track setting 🏎️💨

Remi Cadene (@remicadene)'s Twitter Profile Photo

The third LeRobot research presentation, on TD-MPC 1 and 2 by Nicklas Hansen, is available on YouTube: youtu.be/5d9W0I2mpNg

Don't miss Nicklas' latest projects on nicklashansen.com
Project page: tdmpc2.com
Diffusion Policy paper: huggingface.co/papers/2310.16…

Nicklas Hansen (@ncklashansen)'s Twitter Profile Photo

TD-MPC2 just got ~5x faster! The update is available in a separate "speedups" branch at the moment and will be merged into main soon. Special thanks to vmoens for his help with this!

NVIDIA Robotics (@nvidiarobotics)'s Twitter Profile Photo

See Nicklas Hansen's amazing work on robot perception, planning, and action in dynamic environments (NVIDIA Graduate Research Fellow and PhD candidate at UC San Diego). ✨🤖 #NationalRoboticsWeek Learn more about Nicklas's work. ➡️ nvda.ws/4lv85eb
