Ademi Adeniji (@ademiadeniji)'s Twitter Profile
Ademi Adeniji

@ademiadeniji

PhD @UCBerkeley. Prev @NVIDIAAI, @Stanford. Reinforcement Learning, Robot Learning, Robot Foundation Models

ID: 1527628302973906944

Link: https://ademiadeniji.github.io · Joined: 20-05-2022 12:33:42

68 Tweets

493 Followers

218 Following

Siddhant Haldar (@haldar_siddhant)'s Twitter Profile Photo

The most frustrating part of imitation learning is collecting huge amounts of teleop data. But why teleop robots when robots can learn by watching us? Introducing Point Policy, a novel framework that enables robots to learn from human videos without any teleop, sim2real, or RL.

Irmak Guzey (@irmakkguzey)'s Twitter Profile Photo

Despite great advances in learning dexterity, hardware remains a major bottleneck. Most dexterous hands are either bulky, weak, or expensive. I’m thrilled to present the RUKA Hand — a powerful, accessible research tool for dexterous manipulation that overcomes these limitations!

Ademi Adeniji (@ademiadeniji)'s Twitter Profile Photo

Closed-loop robot policies directly from human interactions. No teleop, no robot data co-training, no RL, and no sim. Just Aria smart glasses. Everyday human data is passively scalable and a massively underutilized resource in robotics. More to come here in the coming weeks.

Vincent Liu (@vincentjliu)'s Twitter Profile Photo

I think the most interesting insight from EgoZero is the tradeoff between 2D/3D representations in human-to-robot learning. 2D inputs (images, VLMs) scale and encode strong visual priors—but mapping these to 3D actions is hard and data-hungry, especially given the human-robot

Younggyo Seo (@younggyoseo)'s Twitter Profile Photo

Excited to present FastTD3: a simple, fast, and capable off-policy RL algorithm for humanoid control -- with open-source code to run your own humanoid RL experiments in no time! Thread below 🧵
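
For readers unfamiliar with the TD3 family that FastTD3 builds on, here is a toy sketch of its core trick, the clipped double-Q target. This is illustrative code, not from the FastTD3 release; the function name and values are made up.

```python
def td3_target(q1_next, q2_next, reward, gamma=0.99, done=False):
    """Clipped double-Q target used by TD3-style off-policy algorithms.

    Two critics evaluate the next state; taking the minimum of their
    estimates curbs the value overestimation that destabilizes
    single-critic methods such as DDPG.
    """
    min_q = min(q1_next, q2_next)
    return reward + gamma * (0.0 if done else 1.0) * min_q

# The critics disagree about the next state; the target trusts the lower one.
print(td3_target(q1_next=2.0, q2_next=1.5, reward=1.0, gamma=0.9))
# reward + gamma * min(q1, q2) = 1.0 + 0.9 * 1.5
```

Full TD3 additionally smooths the target action with clipped noise and delays actor updates relative to the critics; FastTD3's own additions are detailed in the linked thread and repo.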

Vincent Liu (@vincentjliu)'s Twitter Profile Photo

We just open-sourced EgoZero! It includes the full preprocessing pipeline to turn long-form recordings into individual demonstrations as 3D states + actions. We engineered it for scalability to large datasets (streaming, parallel workers, CPU/GPU utilization).
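
The pattern described — segment long recordings into demos, fan the work out to parallel workers — can be sketched in miniature as below. This is a toy illustration, not EgoZero code: the frame format, the `None`-action boundary heuristic, and all names are invented, and the real pipeline would use processes and GPUs for heavy vision work rather than a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

def segment_recording(recording):
    """Split one long recording (a list of (state, action) frames) into
    demonstrations, cutting where action is None -- a stand-in for
    whatever episode-boundary signal a real pipeline would detect."""
    demos, current = [], []
    for state, action in recording:
        if action is None:  # boundary marker closes the current demo
            if current:
                demos.append(current)
            current = []
        else:
            current.append((state, action))
    if current:
        demos.append(current)
    return demos

def preprocess(recordings, workers=4):
    """Fan recordings out to a worker pool, then flatten the results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        per_recording = list(pool.map(segment_recording, recordings))
    return [demo for demos in per_recording for demo in demos]

recording = [("s0", "a0"), ("s1", None), ("s2", "a2"), ("s3", "a3")]
print(len(preprocess([recording, recording])))  # 4 demos: 2 per recording
```

Because each recording is independent, this embarrassingly parallel structure is what makes the approach scale to large datasets.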

Ademi Adeniji (@ademiadeniji)'s Twitter Profile Photo

FeelTheForce is now open-source! 🤖🖐️ We’ve released the full codebase from our paper:
📡 Streaming infrastructure with docs
🧹 Preprocessing for multi-modal data
🎓 Training pipelines & commands
🧠 Inference code for force-sensitive policies
🛠️ Code: github.com/feel-the-force…

Zhao-Heng Yin (@zhaohengyin)'s Twitter Profile Photo

Just open-sourced Geometric Retargeting (GeoRT), the kinematic retargeting module behind DexterityGen. Includes tools for importing custom hands. Give it a try: github.com/facebookresear… Software by Berkeley AI Research and AI at Meta. More coming soon.
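
To illustrate what kinematic retargeting means in general — not GeoRT's actual algorithm, which lives in the repo — here is a minimal damped-least-squares sketch that drives a made-up 2-link planar finger's tip toward a human fingertip keypoint. Link lengths, the solver, and all names are assumptions for the toy example.

```python
import numpy as np

L1, L2 = 0.05, 0.04  # toy link lengths (meters) for a 2-link planar finger

def fk(t):
    """Forward kinematics: joint angles -> fingertip xy position."""
    return np.array([
        L1 * np.cos(t[0]) + L2 * np.cos(t[0] + t[1]),
        L1 * np.sin(t[0]) + L2 * np.sin(t[0] + t[1]),
    ])

def jacobian(t):
    """Analytic fingertip Jacobian of the 2-link finger."""
    s1, c1 = np.sin(t[0]), np.cos(t[0])
    s12, c12 = np.sin(t[0] + t[1]), np.cos(t[0] + t[1])
    return np.array([
        [-L1 * s1 - L2 * s12, -L2 * s12],
        [ L1 * c1 + L2 * c12,  L2 * c12],
    ])

def retarget(human_tip, iters=100, damping=1e-6):
    """Damped least-squares IK: iterate joint angles so the robot
    fingertip tracks a human fingertip keypoint."""
    t = np.array([0.3, 0.3])  # arbitrary initial joint angles
    for _ in range(iters):
        err = human_tip - fk(t)
        J = jacobian(t)
        t += J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), err)
    return t

target = fk(np.array([0.6, 0.4]))  # a reachable fingertip target
angles = retarget(target)
print(np.linalg.norm(fk(angles) - target))  # tiny residual: it converged
```

A real retargeter handles many fingers at once, scales between human and robot hand sizes, and respects joint limits; the least-squares core above is just the conceptual skeleton.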

Raunaq Bhirangi (@raunaqmb)'s Twitter Profile Photo

Tactile sensing is gaining traction, but slowly. Why? Because integration remains difficult. But what if adding touch sensors to your robot was as easy as hitting “print”? Introducing eFlesh: a 3D-printable, customizable tactile sensor. Shape it. Size it. Print it. 🧶👇