Zipeng Fu (@zipengfu)'s Twitter Profile
Zipeng Fu

@zipengfu

Stanford AI & Robotics PhD @StanfordAILab | Creator of Mobile ALOHA, HumanPlus, Robot Parkour | Past: Google DeepMind, CMU, UCLA

ID: 2353774941

Link: http://zipengfu.github.io · Joined: 20-02-2014 03:40:52

351 Tweets

13.13K Followers

1.1K Following

Qingqing Zhao (@qingqing_zhao_)'s Twitter Profile Photo

PhysAvatar has been accepted to #ECCV2024! 🎉 We create virtual avatars with realistic garment dynamics and lighting, with a physics simulator in the loop.
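
The "physics simulator in the loop" part can be read as a system-identification loop: fit garment material parameters so that simulated cloth motion matches the captured footage. Below is a minimal Python sketch of that idea under made-up numbers; the one-line `simulate` stand-in and the parameter names are mine, not PhysAvatar's.

```python
# Sketch of "physics simulator in the loop": fit a garment stiffness
# parameter by comparing simulated cloth motion against captured video.
# The simulator here is a placeholder one-liner, not PhysAvatar's.
import numpy as np
from scipy.optimize import minimize_scalar

observed = np.array([0.00, 0.05, 0.09, 0.12])  # captured garment displacement

def simulate(stiffness: float) -> np.ndarray:
    """Placeholder cloth simulator: displacement shrinks as stiffness grows."""
    t = np.arange(4)
    return 0.15 * t / (t[-1] * stiffness)

loss = lambda k: float(np.sum((simulate(k) - observed) ** 2))
best = minimize_scalar(loss, bounds=(0.5, 5.0), method="bounded")
print(best.x)  # stiffness whose simulated motion best matches the capture
```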

Salesforce AI Research (@sfresearch)'s Twitter Profile Photo

Zipeng Fu's 'HumanPlus' at Stanford —a 6ft humanoid tying shoelaces—exemplifies the fusion of embodied AI and advanced ML. It's been making us think: how do we seamlessly integrate symbolic reasoning with physical dexterity in autonomous systems? #AIRobotics

Zipeng Fu (@zipengfu)'s Twitter Profile Photo

Introducing Mobility VLA, Google's foundation model for navigation, which started as my intern project:
- Gemini 1.5 Pro for high-level image & text understanding
- topological graphs for low-level navigation
- supports multimodal instructions
Co-leads: Zhuo Xu, Lewis Chiang, Jie Tan. A rough sketch of the two-level split is below.
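
Here is one way such a VLM-plus-topological-graph navigator could be wired, in Python. `pick_goal_frame` is a hypothetical stand-in for the Gemini 1.5 Pro query and the graph construction is toy; none of these names come from the Mobility VLA release.

```python
import networkx as nx

def pick_goal_frame(instruction: str, tour_frames: list) -> int:
    """Stand-in for the long-context VLM call: given a (possibly multimodal)
    instruction and every frame of a demonstration tour, return the index
    of the tour frame that satisfies the instruction."""
    return len(tour_frames) - 1  # placeholder heuristic

def navigate(instruction, tour_frames, graph: nx.Graph, current_node: int):
    goal = pick_goal_frame(instruction, tour_frames)    # high level: VLM
    return nx.shortest_path(graph, current_node, goal)  # low level: graph

# Toy topological map: nodes are tour frames, edges link adjacent frames.
g = nx.path_graph(5)
print(navigate("go to the whiteboard", ["f0", "f1", "f2", "f3", "f4"], g, 0))
```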

Tony Z. Zhao (@tonyzzhao)'s Twitter Profile Photo

Excited to announce Surgical Robot Transformer🪡: Automating delicate surgical tasks with end-to-end imitation learning. It's still mind-blowing to see a robot autonomously tying knots, a task I did not think was possible even a few months ago. Sharing some learnings in 🧵:
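
For readers unfamiliar with the term, "end-to-end imitation learning" here means training a policy to map observations directly to demonstrated actions. A generic behavior-cloning step in PyTorch is sketched below; the tiny MLP is a stand-in for the actual transformer policy, and nothing here is from the Surgical Robot Transformer code.

```python
# Generic behavior-cloning loop, not the Surgical Robot Transformer code:
# regress the policy's action toward the surgeon's demonstrated action.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 7))
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

def train_step(obs, expert_action):
    """One gradient step pushing the policy toward the demonstrated action."""
    loss = nn.functional.mse_loss(policy(obs), expert_action)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch: 32 observation features and matching demonstrated actions.
print(train_step(torch.randn(32, 64), torch.randn(32, 7)))
```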

Remi Cadene (@remicadene)'s Twitter Profile Photo

Do not miss tomorrow's LeRobot tech talk! 🤓 Zipeng Fu from @stanford will explain how to automate tasks with humanoid robots such as the Unitree G1. Join us on August 21st, 5:30PM-6:30PM CEST: meet.google.com/jcb-kscd-ijk

Wenlong Huang (@wenlong_huang)'s Twitter Profile Photo

What structural task representation enables multi-stage, in-the-wild, bimanual, reactive manipulation? Introducing ReKep: an LVM labels keypoints and a VLM writes keypoint-based constraints, which are then solved with optimization, handling diverse tasks without task-specific training or environment models. 🧵👇
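
As a concrete reading of that pipeline, a VLM-written constraint can be an ordinary Python function over 3D keypoints, handed to an off-the-shelf optimizer. The sketch below is my illustration of the pattern, not the released ReKep code; the cup/saucer constraint and all names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Example VLM-written constraint: keypoint 0 (cup handle) should end up
# 5 cm above keypoint 1 (saucer center). Returns 0 when satisfied.
def constraint(keypoints: np.ndarray) -> float:
    target = keypoints[1] + np.array([0.0, 0.0, 0.05])
    return float(np.linalg.norm(keypoints[0] - target))

def solve(init_keypoints: np.ndarray) -> np.ndarray:
    k1 = init_keypoints[1]
    def cost(x):
        return constraint(np.stack([x, k1]))  # only keypoint 0 is controlled
    res = minimize(cost, init_keypoints[0], method="Nelder-Mead")
    return res.x

kps = np.array([[0.3, 0.1, 0.2], [0.3, 0.1, 0.0]])
print(solve(kps))  # ≈ [0.3, 0.1, 0.05]
```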

Philipp Wu (@philippswu)'s Twitter Profile Photo

Really nice article from @ericjang about actuators in robotics. Great to see 1X place such an emphasis on design. Safety through compliance and backdrivability were some of the core themes behind our BLUE project back in the day (berkeleyopenrobotics.github.io). Some thoughts 👇

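As a toy illustration of "safety through compliance": a joint controller that tracks position through a saturated virtual spring-damper yields a bounded torque when a human pushes the joint, rather than a runaway correction. The gains and limits below are made-up numbers, not 1X's or BLUE's.

```python
def compliant_torque(q, q_des, qd, kp=20.0, kd=2.0, tau_max=5.0):
    """Spring-damper torque toward q_des, saturated to keep contact forces low."""
    tau = kp * (q_des - q) - kd * qd          # virtual spring-damper
    return max(-tau_max, min(tau_max, tau))   # torque limit = compliance

# Pushing the joint 0.5 rad off target yields a bounded restoring torque.
print(compliant_torque(q=0.5, q_des=0.0, qd=0.0))  # clipped to -5.0
```
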
xin duan (@xinduan926)'s Twitter Profile Photo

We can easily see a trained dog expertly chasing after a fast-moving frisbee and leaping up to catch it just before it hits the ground. Now, can robots join the fun? Introducing Playful DoggyBot🐶: Learning Agile and Precise Quadrupedal Locomotion. 1/3

Zhou Xian (@zhou_xian_)'s Twitter Profile Photo

Everything you love about generative models, now powered by real physics! Announcing the Genesis project: after a 24-month large-scale research collaboration involving over 20 research labs, a generative physics engine able to generate 4D dynamical worlds powered by a physics…

Qingqing Zhao (@qingqing_zhao_)'s Twitter Profile Photo

Introducing CoT-VLA: Visual Chain-of-Thought reasoning for Robot Foundation Models! 🤖 By leveraging next-frame prediction as visual chain-of-thought reasoning, CoT-VLA uses future prediction to guide action generation and unlocks large-scale video data for training. #CVPR2025
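
One way to read the "visual chain-of-thought" idea: predict a future observation first, then condition action decoding on both the current frame and that prediction. A minimal PyTorch sketch under that reading follows; the module names and sizes are illustrative, not the CoT-VLA release.

```python
import torch
import torch.nn as nn

class VisualCoTPolicy(nn.Module):
    def __init__(self, obs_dim=128, act_dim=7):
        super().__init__()
        self.predict_frame = nn.Linear(obs_dim, obs_dim)       # next-frame head
        self.decode_action = nn.Linear(2 * obs_dim, act_dim)   # action head

    def forward(self, obs_feat):
        future = self.predict_frame(obs_feat)  # the "visual thought" step
        return self.decode_action(torch.cat([obs_feat, future], dim=-1))

policy = VisualCoTPolicy()
print(policy(torch.randn(1, 128)).shape)  # torch.Size([1, 7])
```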