Shuran Song (@songshuran)'s Twitter Profile
Shuran Song

@songshuran

Assistant Professor @Stanford University working on #Robotics #AI #ComputerVision

ID: 751414836240609280

Link: http://shurans.github.io | Joined: 08-07-2016 13:57:12

275 Tweets

10.1K Followers

486 Following

Priya Sundaresan (@priyasun_)'s Twitter Profile Photo

How can we move beyond static-arm lab setups and learn robot policies in our messy homes? We introduce HoMeR, an imitation learning agent for in-the-wild mobile manipulation. 🧵 1/8

Kuan Fang (@kuanfang)'s Twitter Profile Photo

Excited to share our new work ReLIC, a framework for versatile loco-manipulation through flexible interlimb coordination. We combine reinforcement learning and model-based control to let robots dynamically assign limbs 🦾🦿 for manipulation or locomotion based on task demands.

Generalist (@generalistai_)'s Twitter Profile Photo

Today we're excited to share a glimpse of what we're building at Generalist. As a first step towards our mission of making general-purpose robots a reality, we're pushing the frontiers of what end-to-end AI models can achieve in the real world. Here's a preview of our early

Shuran Song (@songshuran)'s Twitter Profile Photo

Let's say we do have a powerful robot model that learns rich behaviors from large-scale robot data. What now? To make them actually useful, we need to be able to steer and control their behavior so that they match user intent, preferences, or deployment needs. How can we do

Shuran Song (@songshuran)'s Twitter Profile Photo

Teleoperating a robot feels unnatural: not just because of limited arm or hand DoFs, but also because of the lack of perceptual freedom! Humans naturally move their head and torso to search, track, and focus, far beyond a simple 2-DoF camera. How to get there? Check out

Xiaomeng Xu (@xiaomengxu11)'s Twitter Profile Photo

I'll present RoboPanoptes at #RSS2025 tomorrow 6/22. Spotlight talk: 9:00-10:30am (Bovard Auditorium). Poster: 12:30-2:00pm, poster #31 (Associates Park)

Hanzi Mao (@hanna_mao)'s Twitter Profile Photo

We build Cosmos-Predict2 as a world foundation model for Physical AI builders, fully open and adaptable. Post-train it for specialized tasks or different output types. Available in multiple sizes, resolutions, and frame rates. 📷 Watch the repo walkthrough

Stanford AI Lab (@stanfordailab)'s Twitter Profile Photo

Robot learning has largely focused on standard platforms, but can it embrace robots of all shapes and sizes? In Xiaomeng Xu's latest blog post, we show how data-driven methods bring unconventional robots to life, enabling capabilities that traditional designs and control can't

Liyuan Zhu (@liyuan_zz)'s Twitter Profile Photo

Point maps have become a powerful representation for image-based 3D reconstruction. What if we could push point maps even further to tackle 3D registration and assembly? Introducing Rectified Point Flow (RPF), a generic formulation for point cloud pose estimation.

Russ Tedrake (@russtedrake)'s Twitter Profile Photo

TRI's latest Large Behavior Model (LBM) paper landed on arXiv last night! Check out our project website: toyotaresearchinstitute.github.io/lbm1/ One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the

Binghao Huang (@binghao_huang)'s Twitter Profile Photo

Tactile interaction in the wild can unlock fine-grained manipulation! 🌿🤖✋ We built a portable handheld tactile gripper that enables large-scale visuo-tactile data collection in real-world settings. By pretraining on this data, we bridge vision and touch, allowing robots to: