Yifeng Zhu (@yifengzhu_ut)'s Twitter Profile
Yifeng Zhu

@yifengzhu_ut

ID: 914298512031891457

Joined: 01-10-2017 01:18:46

23 Tweets

2.2K Followers

662 Following

Ruohan Zhang (@ruohanzhang76)'s Twitter Profile Photo

If you're interested in the topics in this thread, or in attention in and around neuroscience, psychology, human-computer interaction, and machine learning more broadly, be sure to submit to and attend our NeurIPS workshop! AllThingsAttention

Yuxiang Yang (@yxyang1995)'s Twitter Profile Photo

Want to deploy your legged robots in the wild, but not sure how to handle complex terrain conditions? Check out our #CoRL2022 paper "Learning Semantics-Aware Locomotion Skills from Human Demonstrations", where the robot learns to choose its speed and gait from camera images. (1/3)

Yuke Zhu (@yukez)'s Twitter Profile Photo

Roomba builds a static map of your home by moving around. Can a robot create articulated models of indoor scenes through its physical interaction? Ditto in the House builds digital twins of articulated objects in everyday environments. #ICRA2023 Website: ut-austin-rpl.github.io/HouseDitto/

Cheng Chi (@chichengcc)'s Twitter Profile Photo

What if the form of the visuomotor policy has been the bottleneck for robotic manipulation all along? Diffusion Policy achieves a 46.9% improvement over the prior SoTA on 11 tasks from 4 benchmarks + 4 real-world tasks! (1/7) Website: diffusion-policy.cs.columbia.edu Paper: arxiv.org/abs/2303.04137

Harshit Sikchi (@harshit_sikchi)'s Twitter Profile Photo

Our work on a unified approach to imitation learning and learning from preferences was featured on the Microsoft Research Blog! Try this method out (github.com/hari-sikchi/ra…) to obtain SOTA results in imitation learning. A collab with Akanksha Saran, W. Goo, Scott Niekum

Xiaohan Zhang (@xiaohanzhang220)'s Twitter Profile Photo

S3O: Symbolic State Space Optimization tl;dr: Solving Task and Motion Planning problems without predefining the task-level state space in mobile manipulation domains. w/ Yifeng Zhu, Yan Ding, Yuqian Jiang, Yuke Zhu, Peter Stone, and Shiqi Zhang. Check it out at @IROS2023 next week!

Arthur King Zhang (@arthurkzhang)'s Twitter Profile Photo

Context-aware navigation requires object- and terrain-level understanding. In support of this, we are thrilled to release CODa, the largest multi-class, multi-modal urban robotics dataset to date. amrl.cs.utexas.edu/coda/ Work done with a fantastic team from UT Austin's Autonomous Mobile Robotics Laboratory! A short 🧵, 1/7

Haresh Karnan (@karnanharesh)'s Twitter Profile Photo

Introducing STERLING 💫: "Self-Supervised Terrain Representation Learning from Unconstrained Robot Experience" 💫 to be presented at CoRL 2023 in Atlanta! 😀 Project Page 🌎: hareshkarnan.github.io/sterling/ Paper 🗞️: openreview.net/pdf?id=VLihM67… A thread 🧵

Learning Agents Research Group (@utlarg)'s Twitter Profile Photo

Huge congratulations to Peter Stone for receiving the ACM Allen Newell Award! 🎉 We are proud to celebrate this incredible recognition of his groundbreaking contributions to AI and computer science. 👏 #ACM #AAAI #AllenNewellAward

Zhanyi S (@s_zhanyi)'s Twitter Profile Photo

How to prevent behavior cloning policies from drifting OOD on long-horizon manipulation tasks? Check out Latent Policy Barrier (LPB), a plug-and-play test-time optimization method that keeps BC policies in-distribution with no extra demos or fine-tuning: project-latentpolicybarrier.github.io

Rutav (@rutavms)'s Twitter Profile Photo

Intelligent humanoids should have the ability to quickly adapt to new tasks by observing humans. Why is such adaptability important? 🌍 Real-world diversity is hard to fully capture in advance. 🧠 Adaptability is central to natural intelligence. We present MimicDroid 👇 🌐

Zichao (@zichaohu99)'s Twitter Profile Photo

How can robots follow complex instructions in dynamic environments? 🤖 Meet ComposableNav, a diffusion-based planner that enables robots to generate novel navigation behaviors that satisfy diverse instruction specifications on the fly, with no retraining needed. 📄 Just accepted

Kevin Lin (@linkevin0)'s Twitter Profile Photo

Tired of collecting robot demos? 🚀 Introducing CP-Gen: geometry-aware data generation for robot learning. From a single demo, CP-Gen generates thousands of new demonstrations to train visuomotor policies that transfer zero-shot sim-to-real across novel geometries and poses.

Yifeng Zhu (@yifengzhu_ut)'s Twitter Profile Photo

Please check out our work on test-time training of robot manipulation policies, where the robot improves with more demonstrations without requiring any fine-tuning of the policy.

Yifeng Zhu (@yifengzhu_ut)'s Twitter Profile Photo

We hope that RoboSSM will inspire the robot learning community to rethink suitable neural architectures for multitask manipulation policies. RoboSSM is led by Youngju Yoo, who will be applying to PhD programs. She is a very capable candidate; please consider her!

Ruoshi Liu (@ruoshi_liu)'s Twitter Profile Photo

Everyone says they want general-purpose robots. We actually mean it, and we'll make it weird, creative, and fun along the way 😎 Recruiting PhD students to work on Computer Vision and Robotics at the UMD Department of Computer Science for Fall 2026 in the beautiful city of Washington DC!
