
Xavier Puig @ ICLR
@xavierpuigf
Research Scientist at FAIR @AIatMeta working on EmbodiedAI | PhD @MIT_CSAIL
ID: 383903725
http://xavierpuigf.com
Joined: 02-10-2011 18:32:10
280 Tweets
1.1K Followers
883 Following

How do you train robots to move furniture? This requires robots to synchronize whole-body movements, making teleoperation or RL approaches challenging. Check out this amazing work by Tianyu Li (EasyPaperSniper), using human demonstrations to train robots to move furniture in the real world!



How do we enable agents to perform tasks even when they are underspecified? In this work, led by Ram Ramrakhya, we train VLA agents via RL to decide when to act in the environment or ask clarifying questions, enabling them to handle ambiguous instructions: ram81.github.io/projects/ask-t…





Excited to introduce SimWorld: an embodied simulator for infinite photorealistic world generation, populated with diverse agents. If you are at #CVPR2025, come check out the live demo: Jun 14, 12:00-1:00 pm at the JHU booth, ExHall B; Jun 15, 10:30 am-12:30 pm, #7, ExHall B.

Do VLA models really listen to language instructions? Maybe not. Introducing our RSS paper: CodeDiffuser -- using VLM-generated code to bridge the gap between **high-level language** and **low-level visuomotor policy**. Try the live demo: robopil.github.io/code-diffuser/ (1/9)

Check out our workshop on Continual Robot Learning from Humans at #RSS2025, with amazing speakers covering topics including learning from human visual demonstrations, generative models for continual robot learning, and the role of LLMs in embodied contexts: …-robot-learning-from-humans.github.io