Sudharshan Suresh (@suddhus)'s Twitter Profile
Sudharshan Suresh

@suddhus

Research Scientist @BostonDynamics Atlas. Teaching humanoids a few tricks. Prev: @AIatMeta and PhD at @CMU_Robotics.

ID: 33920014

Link: http://suddhu.github.io
Joined: 21-04-2009 14:31:26

261 Tweets

900 Followers

1.1K Following

Boston Dynamics (@bostondynamics)'s Twitter Profile Photo

Atlas is demonstrating reinforcement learning policies developed using a motion capture suit. This demonstration was developed in partnership between Boston Dynamics and the RAI Institute.
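
The tweet doesn't detail the training pipeline, but the standard recipe for mocap-driven RL is to retarget the suit trajectories onto the robot and reward the policy for tracking the reference motion. A minimal sketch of a DeepMimic-style tracking reward, with illustrative weights and shapes (not Boston Dynamics' implementation):

```python
import numpy as np

def tracking_reward(q_sim: np.ndarray, q_ref: np.ndarray, w_pose: float = 5.0) -> float:
    """Exponentiated negative joint-position error between the simulated
    robot and the retargeted mocap reference at the same timestep."""
    err = float(np.sum((q_sim - q_ref) ** 2))
    return float(np.exp(-w_pose * err))

# Toy usage: a 3-DoF snapshot of simulated vs. reference pose.
q_sim = np.array([0.10, -0.42, 0.88])
q_ref = np.array([0.12, -0.40, 0.85])
print(tracking_reward(q_sim, q_ref))  # approaches 1.0 as tracking improves
```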

Xiaoyang Wu (@xiaoyangwu_)'s Twitter Profile Photo

📢Sonata: Self-Supervised Learning of Reliable Point Representations📢

Meet Sonata, our "3D-DINO" pre-trained with Point Transformer V3, accepted at #CVPR2025!

🌍: xywu.me/sonata
📦: github.com/facebookresear…
🚀: github.com/Pointcept/Poin…

🔹Semantic-aware and spatial
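
For readers unfamiliar with the recipe: a self-supervised point-cloud backbone like Sonata is typically frozen after pre-training, and downstream tasks attach only a small head (e.g. a linear probe for semantic segmentation). A hedged sketch of that pattern in PyTorch; the encoder below is a random stand-in, not the released Sonata model, whose actual interface lives in the Pointcept repo linked above.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained point-cloud encoder; shapes
# and layers are illustrative only.
class PretrainedPointEncoder(nn.Module):
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(6, 256), nn.ReLU(), nn.Linear(256, feat_dim))

    def forward(self, points):      # points: (N, 6) xyz + rgb
        return self.mlp(points)     # (N, feat_dim) per-point features

encoder = PretrainedPointEncoder().eval()   # SSL backbone stays frozen
probe = nn.Linear(512, 20)                  # linear probe over 20 classes

points = torch.rand(4096, 6)                # one scene: 4096 colored points
with torch.no_grad():
    feats = encoder(points)
logits = probe(feats)                        # (4096, 20) per-point scores
```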

Lerrel Pinto (@lerrelpinto)'s Twitter Profile Photo

So excited for this!!! The key technical breakthrough here is that we can control joints and fingertips of the robot **without joint encoders**. Learning from self-supervised data collection is all you need for training the humanoid hand control you see below.
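
The tweet doesn't spell out the recipe, but "self-supervised data collection" for encoder-free control usually means motor babbling: command random actions, watch the hand with a camera, and regress commands onto observed fingertip positions so the learned model can stand in for joint encoders. A hedged sketch with illustrative shapes and a synthetic stand-in for the robot rollout:

```python
import torch
import torch.nn as nn

def collect_babbling_data(n: int = 1024, act_dim: int = 16):
    """Random motor commands paired with the fingertip keypoints a camera
    would observe after executing them. The tanh map below is a synthetic
    stand-in for running the real robot."""
    actions = torch.rand(n, act_dim) * 2 - 1
    fingertips = torch.tanh(actions @ torch.rand(act_dim, 15))  # 5 tips x 3D
    return actions, fingertips

actions, fingertips = collect_babbling_data()
model = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 15))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(200):   # fit the forward model: command -> fingertip pose
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(actions), fingertips)
    loss.backward()
    opt.step()
```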

Franziska Meier (@_kainoa_)'s Twitter Profile Photo

New work from the Robotics team at AI at Meta. Want to be able to tell your robot to bring you the keys from the table in the living room? Try out Locate 3D!

Interactive demo: locate3d.atmeta.com/demo
Model & code & dataset: github.com/facebookresear…
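
To make the use case concrete, here is a hypothetical, toy interface for a language-conditioned 3D localizer in the spirit of Locate 3D; the real model, demo, and dataset are at the links above, and the token-overlap scoring below merely stands in for model inference:

```python
from dataclasses import dataclass

@dataclass
class Detection3D:
    label: str
    center: tuple    # (x, y, z) in meters, scene frame
    score: float

def locate(query: str, candidates: list[Detection3D]) -> Detection3D:
    """Toy grounding: pick the labeled candidate whose words best overlap
    the query. A real localizer infers this from the scene itself."""
    q = set(query.lower().split())
    return max(candidates, key=lambda d: len(q & set(d.label.split())))

scene = [
    Detection3D("keys table living room", (1.2, 0.4, 0.8), 1.0),
    Detection3D("mug kitchen counter", (3.1, -0.2, 0.9), 1.0),
]
target = locate("bring me the keys from the table in the living room", scene)
print(target.center)   # a planner would navigate and grasp at this point
```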

Sudharshan Suresh (@suddhus)'s Twitter Profile Photo

Check out the awesome follow-up to Sparsh by Akash Sharma and others: self-supervised learning for a variety of downstream tasks with tactile skins!
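
A hedged sketch of how such a pre-trained tactile encoder is typically used downstream: freeze the SSL backbone and train only a small per-task head, e.g. slip detection from skin readings. The encoder here is a random stand-in, not the released model, and all shapes are illustrative:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 128))
for p in encoder.parameters():
    p.requires_grad = False                  # SSL backbone stays frozen

head = nn.Linear(128, 2)                     # task head: slip / no-slip
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

taxels = torch.rand(32, 64)                  # batch of tactile-skin readings
labels = torch.randint(0, 2, (32,))          # synthetic slip labels
logits = head(encoder(taxels))
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()                              # gradients flow into head only
opt.step()
```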

Mandi Zhao (@zhaomandi)'s Twitter Profile Photo

How to learn dexterous manipulation for any robot hand from a single human demonstration? Check out DexMachina, our new RL algorithm that learns long-horizon, bimanual dexterous policies for a variety of dexterous hands, articulated objects, and complex motions.
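
DexMachina's exact algorithm isn't described in the tweet; as a generic illustration of demonstration-guided RL from a single human demo, one common pattern shapes the reward with distance to the retargeted demonstration plus a sparse task bonus, and anneals the imitation term so the policy may deviate later in training. All weights and shapes below are illustrative:

```python
import numpy as np

def reward(obj_pose, demo_pose, task_done, step, total_steps,
           w_imitate=1.0, w_task=10.0):
    """Imitation term tracks the retargeted demo; the sparse task bonus
    takes over as the imitation weight is annealed toward zero."""
    anneal = 1.0 - step / total_steps
    imitate = np.exp(-5.0 * np.sum((obj_pose - demo_pose) ** 2))
    return w_imitate * anneal * imitate + w_task * float(task_done)

# Early in training, perfect tracking with no task success yet:
print(reward(np.zeros(7), np.zeros(7), task_done=False, step=0, total_steps=1000))
```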

Generalist (@generalistai_)'s Twitter Profile Photo

Today we're excited to share a glimpse of what we're building at Generalist. As a first step towards our mission of making general-purpose robots a reality, we're pushing the frontiers of what end-to-end AI models can achieve in the real world. Here's a preview of our early

Siddhant Haldar (@haldar_siddhant)'s Twitter Profile Photo

Current robot policies often face a tradeoff: they're either precise (but brittle) or generalizable (but imprecise). We present ViTaL, a framework that lets robots generalize precise, contact-rich manipulation skills across unseen environments with millimeter-level precision. 🧵

Russ Tedrake (@russtedrake)'s Twitter Profile Photo

TRI's latest Large Behavior Model (LBM) paper landed on arXiv last night! Check out our project website: toyotaresearchinstitute.github.io/lbm1/ One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the

Sudharshan Suresh (@suddhus)'s Twitter Profile Photo

Lucas and co. wrote a great blogpost on the careful science and engineering behind language-conditioned policies for whole-body manipulation! There's a lot more work on the horizon; our team is hiring researchers to scale egocentric human data and VLMs for robotics. Reach out!

Lawrence Zhu (@lawrencezhu22)'s Twitter Profile Photo

Can we scale up mobile manipulation with egocentric human data? Meet EMMA: Egocentric Mobile MAnipulation.

EMMA learns from human mobile manipulation + static robot data — no mobile teleop needed! EMMA generalizes to new scenes and scales strongly with added human data. 1/9
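
A hedged sketch of the co-training recipe the thread implies: sample each training batch from a mix of egocentric human clips and static-robot episodes. The dataset sizes and the 0.7 human ratio below are invented for illustration, not EMMA's actual configuration:

```python
import random

human_clips = [f"ego_clip_{i}" for i in range(900)]      # human mobile manip
robot_episodes = [f"robot_ep_{i}" for i in range(100)]   # static robot data

def sample_batch(batch_size: int = 32, human_ratio: float = 0.7):
    """Mix both data sources in every batch; the ratio controls how much
    the policy leans on cheap human data vs. embodiment-matched robot data."""
    n_human = int(batch_size * human_ratio)
    return (random.sample(human_clips, n_human) +
            random.sample(robot_episodes, batch_size - n_human))

batch = sample_batch()
print(len(batch), batch[0], batch[-1])
```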

Lucas Manuelli (@lucas_manuelli)'s Twitter Profile Photo

I'll be giving a talk @CoRL2025 about developing large behavior models on Atlas. Come by the dexterous manipulation workshop at 11:30 to see the talk: dex-manipulation.github.io/corl2025/

Kevin Zakka (@kevin_zakka)'s Twitter Profile Photo

I'm super excited to announce mjlab today!

mjlab = Isaac Lab's APIs + best-in-class MuJoCo physics + massively parallel GPU acceleration. Built directly on MuJoCo Warp with the abstractions you love.
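
mjlab's real API isn't shown in the tweet; as a toy illustration of the batched-simulation pattern it targets (thousands of environment instances stepped as one array op), here is a NumPy stand-in. Names, shapes, and the CPU backend are all placeholders for what MuJoCo Warp does on GPU:

```python
import numpy as np

NUM_ENVS, OBS_DIM, ACT_DIM = 4096, 48, 12

obs = np.zeros((NUM_ENVS, OBS_DIM), dtype=np.float32)

def step(actions: np.ndarray):
    """Advance all environments at once: one call, NUM_ENVS transitions.
    The dynamics here are noise; a real backend integrates physics."""
    assert actions.shape == (NUM_ENVS, ACT_DIM)
    next_obs = obs + 0.01 * np.random.randn(NUM_ENVS, OBS_DIM).astype(np.float32)
    rewards = -np.linalg.norm(actions, axis=1)
    dones = np.zeros(NUM_ENVS, dtype=bool)
    return next_obs, rewards, dones

next_obs, rewards, dones = step(np.zeros((NUM_ENVS, ACT_DIM), dtype=np.float32))
print(rewards.shape)   # (4096,) — one reward per parallel environment
```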

Ted Xiao (@xiao_ted)'s Twitter Profile Photo

Same for ego data, UMI data, etc. An open secret is that “we used Ego4D” actually means filtering down to the ~1% of videos that are vaguely useful for learning. Occlusions, suboptimality, sensor noise, and so many pitfalls! Modeling + collection co-design is the only gold standard currently.
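
A hedged sketch of the curation step being described: score each egocentric clip with cheap heuristics (hand visibility, blur, camera motion) and keep only the small usable fraction. Field names and thresholds are invented for illustration:

```python
def usable(clip: dict) -> bool:
    return (clip["hand_visible_frac"] > 0.8      # hands mostly unoccluded
            and clip["blur_score"] < 0.2         # sharp enough to learn from
            and clip["camera_motion"] < 0.5)     # not a head-whip blur

clips = [
    {"id": "a", "hand_visible_frac": 0.9, "blur_score": 0.1, "camera_motion": 0.2},
    {"id": "b", "hand_visible_frac": 0.3, "blur_score": 0.6, "camera_motion": 0.9},
]
kept = [c for c in clips if usable(c)]
print(f"kept {len(kept)}/{len(clips)} clips")    # at scale, often a few percent
```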