Hao Su (@haosu_twitr) 's Twitter Profile
Hao Su

@haosu_twitr

Associate Professor @UCSanDiego. Computer Vision, Graphics, Embodied AI, Robotics. Co-Founder of hillbot.ai @hillbot_ai

ID: 1422688248950788101

Link: https://cseweb.ucsd.edu/~haosu/ · Joined: 03-08-2021 22:38:18

240 Tweets

6.6K Followers

374 Following

Hillbot (@hillbot_ai) 's Twitter Profile Photo

Hillbot is thrilled to be featured in NVIDIA’s article on #Sim2Real innovation. Our #generativeAI training is accelerating robot skilling & deployment. Thanks NVIDIA for spotlighting our work in #EmbodiedAI. Read more: blogs.nvidia.com/blog/robot-lea…

Hillbot (@hillbot_ai) 's Twitter Profile Photo

Going to #NeurIPS2024? Join us on Thursday, December 12th at Terminal City Club for a happy hour featuring Hillbot co-founder and CTO Hao Su! Register now on Luma! lu.ma/whjl0c2h

Rui Chen (@ray695_c) 's Twitter Profile Photo

🤖Robot needs touch! We're announcing ManiSkill-ViTac2025 Challenge! 🏆 -3 tracks: Tactile Manipulation, Vision-Tactile Fusion, Tactile Sensor Design. -High-fidelity sim for tactile & visual sensing. Compete without hardware! -$14,500 in prizes up for grabs!💰 #Robotics #ICRA

Hillbot (@hillbot_ai) 's Twitter Profile Photo

Looking to wind down after #NeurIPS2024? Come to Terminal City Club at 7:30 PM tonight for a happy hour featuring some of the brightest minds in robotics and AI. lu.ma/whjl0c2h #NeurIPS

Xuanlin Li (Simon) (@xuanlinli2) 's Twitter Profile Photo

Learning bimanual, contact-rich robot manipulation policies that generalize over diverse objects has long been a challenge. Excited to share our work: Planning-Guided Diffusion Policy Learning for Generalizable Contact-Rich Bimanual Manipulation! glide-manip.github.io 🧵1/n

Minghua Liu (@minghualiu_) 's Twitter Profile Photo

Excited to present MeshFormer at Oral Session 4D this afternoon! 🎉 Feel free to drop by our poster session (East #1309) later today, or join us at the Hillbot happy hour tonight. 🍺🦞🦪🥂🐔 Let’s catch up and chat about anything! lu.ma/whjl0c2h

Arth Shukla @ ICLR 2025 (@arth_shukla) 's Twitter Profile Photo

📢 Introducing ManiSkill-HAB: A benchmark for low-level manipulation in home rearrangement tasks! - GPU-accelerated simulation - Extensive RL/IL baselines - Vision-based, whole-body control robot dataset All open-sourced: arth-shukla.github.io/mshab 🧵(1/5)

Stone Tao (@stone_tao) 's Twitter Profile Photo

🎉 Our new work tackling long-horizon, low-level manipulation in apartments is out! ~500GB of demonstration data in sim (you can generate more) and RL/IL baselines are all provided. ManiSkill helped make this project scalable and faster to run via GPU sim compared to alternatives.

Rui Chen (@ray695_c) 's Twitter Profile Photo

🎉 Happy New Year 2025! 🚀 Stage 1 of the ManiSkill-ViTac Challenge 2025 is officially LIVE! 🦾 Compete in challenging, contact-rich vision-tactile manipulation tasks for a $14,500 prize pool! 📅 Phase 1 DDL: Feb 15, 2025 🔗 Join now: github.com/cyliizyz/ManiS… #ICRA2025 #EmbodiedAI

Hillbot (@hillbot_ai) 's Twitter Profile Photo

Proud moment for Hillbot! We were featured in Jensen Huang's keynote presentation at #CES2025📷! Using NVIDIA Cosmos, Hillbot is pushing the boundaries of robotic training with terabytes of AI-generated high-fidelity 3D environments. Read More: hillbot.ai/blog/hillbot-n…

Arth Shukla @ ICLR 2025 (@arth_shukla) 's Twitter Profile Photo

Excited to share that I’ll be joining UC San Diego for my PhD, advised by Professor Hao Su! Many thanks to everyone who helped me along my research journey so far — I’m looking forward to continuing research in robot learning, manipulation, and simulation!

Isabella Liu (@isabella__liu) 's Twitter Profile Photo

🐅 Want to rig your favorite meme character? Try “RigAnything: Template-Free Autoregressive Rigging for Diverse 3D Assets”! ✨RigAnything is a transformer-based model that sequentially generates skeletons without predefined templates. It creates high-quality skeletons for

Adria Lopez (@alopeze99) 's Twitter Profile Photo

🤖Introducing DEMO3: our new model-based RL framework for multi-stage robotic manipulation from visual inputs and sparse rewards. 🧵🔽 📜 Paper: [arxiv.org/abs/2503.01837] 🌍 Project Page: [adrialopezescoriza.github.io/demo3/] 💻 Code: [github.com/adrialopezesco…]

Stone Tao (@stone_tao) 's Twitter Profile Photo

Some exciting news: ManiSkill/SAPIEN now has experimental support for macOS with CPU simulation and rendering. You can now do your local debugging/development on a Mac. The example shown here is a Push-T policy trained on my 4090 running on my Mac! Try it now: maniskill.readthedocs.io/en/latest/user…
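
A minimal sketch of what "trying it" might look like with ManiSkill's gym-style interface, based on the linked docs rather than the tweet itself; the task id "PushT-v1" and the sim_backend argument/value are assumptions, so check maniskill.readthedocs.io for the names supported in your installed version.

```python
# Minimal sketch (assumptions noted below): run a ManiSkill task with the CPU
# physics backend, which is what the experimental macOS support targets.
import gymnasium as gym
import mani_skill.envs  # importing this registers ManiSkill tasks with gymnasium

env = gym.make(
    "PushT-v1",           # assumed task id; any registered ManiSkill task should work
    obs_mode="state",     # low-dimensional state observations for a quick smoke test
    render_mode="human",  # open the on-screen viewer
    sim_backend="cpu",    # assumed argument name; selects CPU simulation per the announcement
)

obs, info = env.reset(seed=0)
for _ in range(100):
    action = env.action_space.sample()  # replace with a trained policy's action
    obs, reward, terminated, truncated, info = env.step(action)
    env.render()
env.close()
```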

NVIDIA Robotics (@nvidiarobotics) 's Twitter Profile Photo

See Nicklas Hansen's amazing work on robot perception, planning, and action in dynamic environments (NVIDIA Graduate Research Fellow and PhD candidate at UC San Diego). ✨🤖 #NationalRoboticsWeek Learn more about Nicklas's work. ➡️ nvda.ws/4lv85eb

Sean Kirmani (@seankirmani) 's Twitter Profile Photo

🌎🌏🌍 We are organizing a workshop on Building Physically Plausible World Models at ICML Conference 2025! We have a great lineup of speakers, and are inviting you to submit your papers with a May 10 deadline. Website: physical-world-modeling.github.io

Tongzhou Mu 🤖 @ ICLR 2025 (@tongzhou_mu) 's Twitter Profile Photo

🤔 How to fine-tune an Imitation Learning policy (e.g., Diffusion Policy, ACT) with RL? As an RL practitioner, I’ve been struggling with this problem for a while. Here’s why it’s tough: 1️⃣ Special designs (usually for multimodal action distributions) in modern IL models make

Isabella Liu (@isabella__liu) 's Twitter Profile Photo

Excited to be at #ICLR2025 in person this year! Looking forward to reconnecting and making new friends.🤩 Come chat with us about Dynamic Gaussians Mesh at poster #97 tomorrow (4/26, 3–5:30pm). See you there!🥳 Website: liuisabella.com/DG-Mesh

Bo Ai (@boai0110) 's Twitter Profile Photo

🧠 Can a single robot policy control many, even unseen, robot bodies? We scaled training to 1000+ embodiments and found: More training bodies → better generalization to unseen ones. We call it: Embodiment Scaling Laws. A new axis for scaling. 🔗 embodiment-scaling-laws.github.io 🧵👇