Gaoyue Zhou (@gaoyuezhou)'s Twitter Profile
Gaoyue Zhou

@gaoyuezhou

CS PhD @nyuniversity | Prev: @CMU_Robotics , @berkeley_ai | Research Intern @Meta

ID: 1359409161650335746

Website: https://gaoyuezhou.github.io · Joined: 10-02-2021 07:49:41

38 Tweets

804 Followers

175 Following

Nikhil Bhattasali (@nxbhattasali)

Can neuro-inspired ANN architectures be useful for motor control in quadruped robots? We translate neural circuits in the limbs and spinal cord of mammals into an ANN architecture controlling quadruped locomotion. w/ Venkatesh, Lerrel Pinto, @neurograce

Lerrel Pinto (@lerrelpinto)

Wouldn't it be great if we could train robots without any teleoperation! In our latest paper, we train robots to mimic a human video of the task by simply matching the object features using RL. We only need one video and under an hour of robot training.
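The idea of rewarding a robot for matching object features from a single human video can be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual implementation: the function and feature names (`feature_matching_reward`, `video_feats`, `robot_feats`) are invented here, and the real method likely uses learned visual features and a more careful temporal alignment.

```python
import numpy as np

def feature_matching_reward(robot_feats, video_feats, t):
    """Reward = negative distance between the robot's current object
    features and the human video's features at matched timestep t.
    Toy sketch: real systems would use learned features, not raw vectors."""
    diff = np.asarray(robot_feats, dtype=float) - np.asarray(video_feats[t], dtype=float)
    return -float(np.linalg.norm(diff))

# Toy usage: per-timestep object features extracted from the human video.
video_feats = [np.zeros(3), np.ones(3)]
# Perfectly matched features yield the maximum reward of 0;
# any mismatch yields a negative reward that RL can then maximize.
assert feature_matching_reward(np.ones(3), video_feats, 1) == 0.0
assert feature_matching_reward(np.zeros(3), video_feats, 1) < 0.0
```

An RL agent maximizing this reward is pushed to reproduce the object motion seen in the video, without any teleoperated demonstrations.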

Mahi Shafiullah 🏠🤖 (@notmahi)

If you want robots that can just live with you & help 24/7, they need to build & update their memory on the fly. Current semantic memory representations like VoxelMap from OK-Robot can't change with the world. That's why we built DynaMem: dynamic memory for a changing, open world!
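The "memory that updates on the fly" idea can be illustrated with a toy dynamic point map. This is a hypothetical sketch, not DynaMem's actual data structure: the class name, `max_age` eviction rule, and voxel-keyed dictionary are all invented for illustration; the real system stores semantic features and uses more principled add/remove criteria.

```python
class DynamicPointMemory:
    """Toy spatial memory keyed by voxel coordinates.
    Re-observed voxels are refreshed; stale voxels are evicted,
    so the map can track a changing world."""

    def __init__(self, max_age=3):
        self.max_age = max_age
        self.points = {}  # voxel (x, y, z) -> step when last observed

    def observe(self, voxel, step):
        # Add a new voxel, or refresh the timestamp of an existing one.
        self.points[voxel] = step

    def prune(self, step):
        # Evict voxels not re-observed within the last max_age steps.
        self.points = {v: s for v, s in self.points.items()
                       if step - s <= self.max_age}

# Usage: an object seen at step 0 disappears; by step 4 it is evicted,
# while a recently seen object survives.
mem = DynamicPointMemory(max_age=2)
mem.observe((0, 0, 0), step=0)
mem.observe((1, 0, 0), step=3)
mem.prune(step=4)
assert (0, 0, 0) not in mem.points
assert (1, 0, 0) in mem.points
```

The key contrast with a static map is the `prune` step: a memory that only ever accumulates points cannot represent objects that move or vanish.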

Unnat Jain (@unnatjain2010)

Excited to share that I'll be joining University of California at Irvine as a CS faculty member in '25! 🌟

Faculty apps: Krishna Murthy, Zhuang Liu & I share our tips: unnat.github.io/notes/Hidden_C…

PhD apps: I'm looking for students in vision, robot learning, & AI4Science. Details👇

Gaoyue Zhou (@gaoyuezhou)

Thrilled to share our work on Navigation World Models! It’s exciting to see how world models empower planning for navigation and shed light on agents with reasoning abilities. Such a fun collaboration with everyone! arXiv: arxiv.org/abs/2412.03572 Website: amirbar.net/nwm

Siddhant Haldar (@haldar_siddhant)

The most frustrating part of imitation learning is collecting huge amounts of teleop data. But why teleop robots when robots can learn by watching us? Introducing Point Policy, a novel framework that enables robots to learn from human videos without any teleop, sim2real, or RL.

Gokul Swamy (@g_k_swamy)

1.5 yrs ago, we set out to answer a seemingly simple question: what are we *actually* getting out of RL in fine-tuning? I'm thrilled to share a pearl we found on the deepest dive of my PhD: the value of RL in RLHF seems to come from *generation-verification gaps*. Get ready to🤿!

Irmak Guzey (@irmakkguzey)

Despite great advances in learning dexterity, hardware remains a major bottleneck. Most dexterous hands are either bulky, weak or expensive. I’m thrilled to present the RUKA Hand — a powerful, accessible research tool for dexterous manipulation that overcomes these limitations!

Homanga Bharadhwaj (@mangahomanga)

Presenting DemoDiffusion: an extremely simple approach enabling a pre-trained 'generalist' diffusion policy to follow a human demonstration for a novel task during inference. One-shot human imitation *without* requiring any paired human-robot data or online RL 🙂 1/n

Yutong Bai (@yutongbai1002)

What would a World Model look like if we start from a real embodied agent acting in the real world? It has to have: 1) a real, physically grounded and complex action space, not just abstract control signals; 2) diverse, real-life scenarios and activities. Or in short: It has to

Raunaq Bhirangi (@raunaqmb)

Generalization needs data. But data collection is hard for precise tasks like plugging USBs, swiping cards, inserting plugs, and keying locks. Introducing robust, precise VisuoTactile Local (ViTaL) policies: >90% success rates from just 30 demos and 45 min of real-world RL.🧶⬇️