Yusen Luo (@yusen_2001) 's Twitter Profile
Yusen Luo

@yusen_2001

USC MSCS | RA at Lira Lab | Interested in Robot Learning & Large Foundation Models

ID: 1690600878011731968

Joined: 13-08-2023 05:47:28

5 Tweets

20 Followers

45 Following

Karl Pertsch (@karlpertsch) 's Twitter Profile Photo

Excited to release FAST, our new robot action tokenizer! 🤖 Some highlights:
- Simple autoregressive VLAs match diffusion VLA performance
- Trains up to 5x faster
- Works on all robot datasets we tested
- First VLAs that work out-of-the-box in new environments! 🧵/

Sumedh Sontakke (@sota_kke) 's Twitter Profile Photo

Reward learning (like DYNA) has enabled e2e policies to reach 99% SR but (1) generalization to new tasks and (2) sample efficiency are still hard! ReWiND produces better rewards for OOD tasks than SOTA like GVL & LIV from Jason Ma that inspired us! 🌐: rewind-reward.github.io

Yusen Luo (@yusen_2001) 's Twitter Profile Photo

Absolutely thrilled to be part of this work— I truly enjoyed every moment of the collaboration. Huge thanks to all the amazing collaborators who made it happen! 😄

C's Robotics Paper Notes (@roboreading) 's Twitter Profile Photo

rewind-reward.github.io ReWiND: Language-Guided Rewards Teach Robot Policies without New Demonstrations LLM generated instructions z + demo -> learning-based reward model (progress) R(o,z) -> optimize policy via RL online
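The pipeline summarized above (LLM-generated instruction z + demo → learned progress reward R(o, z) → online RL) can be sketched in miniature. This is a hypothetical toy illustration of that loop, not ReWiND's actual code: `reward_model` is a stand-in for the learned progress predictor, and the "policy update" is a deliberately simplified nudge toward higher-reward observations.

```python
import random

def reward_model(observation, instruction):
    """Stand-in for the learned progress reward R(o, z) in [0, 1].

    ReWiND trains this from demos plus LLM-generated instruction variants;
    here we fake 'progress' as closeness to an arbitrary goal value.
    """
    goal = len(instruction) % 10  # arbitrary stand-in goal, purely illustrative
    return max(0.0, 1.0 - abs(observation - goal) / 10.0)

def rollout_and_update(policy, instruction, steps=5):
    """One online RL iteration: act, score each observation with R(o, z),
    then apply a toy update that shifts the policy toward higher reward."""
    rewards = []
    for _ in range(steps):
        obs = policy["mean"] + random.gauss(0, 1)       # act in the environment
        rewards.append(reward_model(obs, instruction))  # reward from R(o, z)
    avg = sum(rewards) / len(rewards)
    policy["mean"] += 0.1 * (avg - 0.5)  # toy surrogate for a policy-gradient step
    return policy, rewards

policy = {"mean": 0.0}
policy, rewards = rollout_and_update(policy, "open the drawer")
```

The point of the sketch is the data flow: the instruction and observation go into a learned reward model rather than a hand-coded reward, so no new demonstrations are needed for the online phase.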

Jesse Zhang (@jesse_y_zhang) 's Twitter Profile Photo

How can non-experts quickly teach robots a variety of tasks? Introducing HAND ✋, a simple, time-efficient method of training robots! Using just a **single hand demo**, HAND learns manipulation tasks in under **4 minutes**! 🧵