RoboPapers (@robopapers) 's Twitter Profile
RoboPapers

@robopapers

@chris_j_paxton & @micoolcho geeking out weekly with authors of robotics AI papers. On YouTube / X / Spotify

ID: 1888753998674665472

Link: https://www.youtube.com/@RoboPapers · Joined: 10-02-2025 00:57:10

23 Tweets

851 Followers

2 Following

RoboPapers (@robopapers) 's Twitter Profile Photo

Full episode dropping soon! Geeking out with Haoyu Xiong on Vision in Action: Learning Active Perception from Human Demonstrations (vision-in-action.github.io). Co-hosted by Michael Cho (Rbt/Acc) and Chris Paxton.

Most robots are fixed in one location, with cameras placed at exactly the right spot for whatever task they need to solve. This makes camera placement a key part of task setup; it also makes the task unnecessarily difficult. Ideally, robots would move

We’ve all seen videos of humanoid robots performing single tasks that are very impressive, like dancing or karate. But training humanoid robots to perform a wide range of complex motions is difficult. GMT is a general-purpose policy which can learn a wide range of robot motions.

Full episode dropping soon! Geeking out with Pranav Atreya & Karl Pertsch on RoboArena: Distributed Real-World Evaluation of Generalist Robot Policies (robo-arena.github.io). Co-hosted by Michael Cho (Rbt/Acc) and Chris Paxton.

Learning policies via imitation is extremely potent, but making sure those policies will generalize to out-of-distribution settings is still very hard. SAILOR proposes a solution: learning to search via a learned world model, which outperforms existing imitation approaches.
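To make the "learning to search via a learned world model" idea concrete, here is a minimal sketch of planning by searching inside a world model. This is a generic random-shooting planner with toy stand-ins for the learned components; all names are hypothetical, and it is an illustration of the idea, not the actual SAILOR algorithm.

```python
import random

ACTIONS = [-1.0, 0.0, 1.0]  # toy 1-D action space

def world_model(state: float, action: float) -> float:
    # Stand-in for a learned dynamics model: s' = f(s, a).
    return state + 0.1 * action

def reward(state: float) -> float:
    # Toy objective: stay close to a goal state of 1.0.
    return -abs(state - 1.0)

def plan(state: float, horizon: int = 5, n_candidates: int = 200) -> list:
    """Search: imagine candidate action sequences in the world model
    and return the one with the best predicted return."""
    rng = random.Random(0)
    best_seq, best_return = None, float("-inf")
    for _ in range(n_candidates):
        seq = [rng.choice(ACTIONS) for _ in range(horizon)]
        s, total = state, 0.0
        for a in seq:
            s = world_model(s, a)   # imagined rollout, no real robot
            total += reward(s)
        if total > best_return:
            best_seq, best_return = seq, total
    return best_seq

print(plan(state=0.0))
```

The key point is that the search happens entirely in imagination: candidate trajectories are rolled out in the learned model and scored, so no extra real-world interaction is needed at decision time.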

Today, most robot learning from demonstration predicts action chunks: short robot action trajectories rather than single actions. Doing this is crucial for better performance and has all kinds of advantages. But how can we apply these advantages to reinforcement learning? We talked to Paul Zhou and
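For readers unfamiliar with action chunking, here is a minimal sketch of the execution loop, assuming a toy policy and environment (all names here are hypothetical, not from the episode). Instead of predicting one action per observation, the policy emits a chunk of H future actions, which are executed before the policy is queried again.

```python
class ChunkedPolicy:
    """Toy stand-in for a learned policy that outputs action chunks."""

    def __init__(self, action_dim: int, horizon: int = 8):
        self.action_dim = action_dim
        self.horizon = horizon

    def predict_chunk(self, obs: list) -> list:
        # A real policy would run a neural network here; we just echo
        # a fixed pattern derived from the observation.
        return [[obs[0] + t] * self.action_dim for t in range(self.horizon)]


def run_episode(policy: ChunkedPolicy, obs: list, steps: int = 16) -> list:
    """Execute `steps` low-level actions, re-querying the policy only
    once per chunk instead of once per step."""
    executed = []
    while len(executed) < steps:
        for action in policy.predict_chunk(obs):
            executed.append(action)
            if len(executed) >= steps:
                break
    return executed


policy = ChunkedPolicy(action_dim=3, horizon=8)
actions = run_episode(policy, obs=[0.0], steps=16)
print(len(actions), len(actions[0]))  # 16 3
```

The design trade-off: fewer policy queries and smoother, more temporally consistent motion, at the cost of reacting more slowly to new observations within a chunk.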

Full episode dropping soon! Geeking out with Jeremy Collins on AMPLIFY: Actionless Motion Priors for Robot Learning from Videos (amplify-robotics.github.io). Co-hosted by Michael Cho (Rbt/Acc) and Chris Paxton.

Learning a true world model for a human body means taking high-dimensional actions representing the full body pose — the location of hands and feet, for example — and using it to predict the effects of each action. This would allow for an unprecedented level of simulation over
