Antonio Loquercio (@antoniloq) 's Twitter Profile
Antonio Loquercio

@antoniloq

Assistant Professor at #UPenn. Mainly interested in #PhysicalAI

ID: 939109338387447808

Website: https://antonilo.github.io/ · Joined 08-12-2017 12:28:08

411 Tweets

2.2K Followers

701 Following

Haozhi Qi (@haozhiq) 's Twitter Profile Photo

Videos are a promising data source for scaling manipulation; we just need to find the correct way to use them! Check out the new work on pretraining manipulation skills from hand-object trajectories parsed from online videos. The code is open-sourced and would be a great…

Antonio Loquercio (@antoniloq) 's Twitter Profile Photo

Better. Faster. Stronger. Amazing work by Dingqi Daisy Zhang. We design a model-informed domain randomization procedure and a BC-constrained RL approach that enable astonishing generalization. Works great up to 16X outside the training range.
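
The tweet doesn't spell out the exact constraint, so here is a minimal sketch of one common form of BC-constrained RL (a TD3+BC-style actor update): the policy is pushed to maximize the critic's value while a behavior-cloning penalty keeps it close to demonstrated actions. All dimensions, networks, and the `bc_weight` coefficient are illustrative assumptions, not the paper's method.

```python
# Sketch of a BC-constrained actor update (TD3+BC-style); illustrative only.
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 2
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(actor.parameters(), lr=3e-4)

def actor_update(obs, demo_act, bc_weight=2.5):
    """Maximize the critic's value while staying close to demonstrated actions."""
    act = actor(obs)
    q = critic(torch.cat([obs, act], dim=-1))
    rl_loss = -q.mean() / q.abs().mean().detach()   # scale-normalized RL term
    bc_loss = ((act - demo_act) ** 2).mean()        # behavior-cloning constraint
    loss = rl_loss + bc_weight * bc_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Placeholder batch; a real pipeline would draw (obs, demo action) pairs from data.
obs = torch.randn(32, obs_dim)
demo_act = torch.rand(32, act_dim) * 2 - 1
actor_update(obs, demo_act)
```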

Justin Kerr (@justkerrding) 's Twitter Profile Photo

Robot See, Robot Do allows you to teach a robot articulated manipulation with just your hands and a phone! RSRD imitates from 1) an object scan and 2) a human demonstration video, reconstructing 3D motion to plan a robot trajectory. robot-see-robot-do.github.io #CoRL2024 (Oral)

Penn Engineering (@pennengineers) 's Twitter Profile Photo

Welcome @Penn_ESE’s new Assistant Prof., Antonio Loquercio (@antoniloq), who’s building robots that can sense & respond to human emotions. “For robots to be truly helpful, they must sense how a person is feeling and how they want—or don’t want—help.” bit.ly/48MNDiV

Amir Bar (@_amirbar) 's Twitter Profile Photo

Happy to share our new work on Navigation World Models! 🔥🔥 Navigation is a fundamental skill of agents with visual-motor capabilities. We train a single World Model across multiple environments and diverse agent data. w/ Gaoyue Zhou, Danny Tran, trevordarrell and Yann LeCun.
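
As a rough illustration of the core ingredient of a navigation world model, the sketch below trains a network to predict the next observation embedding from the current embedding and the agent's action. The architecture, dimensions, and loss are assumptions for illustration, not the model described in the tweet.

```python
# Sketch of action-conditioned next-observation prediction; illustrative only.
import torch
import torch.nn as nn

emb_dim, act_dim = 256, 3            # e.g. (dx, dy, dyaw) navigation actions
world_model = nn.Sequential(
    nn.Linear(emb_dim + act_dim, 512), nn.ReLU(),
    nn.Linear(512, emb_dim),
)
opt = torch.optim.Adam(world_model.parameters(), lr=1e-4)

def train_step(cur_emb, action, next_emb):
    """One gradient step on the next-embedding prediction loss."""
    pred = world_model(torch.cat([cur_emb, action], dim=-1))
    loss = ((pred - next_emb) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Placeholder batch; a real setup would embed consecutive frames from
# navigation trajectories collected across multiple environments and agents.
cur = torch.randn(16, emb_dim)
act = torch.randn(16, act_dim)
nxt = torch.randn(16, emb_dim)
train_step(cur, act, nxt)
```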

Ashish (@ashishkr9311) 's Twitter Profile Photo

We are making the Tesla Optimus walk increasingly robust and ready for all terrains. Achieved by replacing a chunk of C++ code with nets!

Antonio Loquercio (@antoniloq) 's Twitter Profile Photo

I'm going to hire several PhD students this cycle. One on a specific topic (tinyurl.com/5yrpmeee), and others on blue-sky projects in robot learning. UPenn is a fantastic place to be (abundant robots and compute); come and look for yourself! Apply at tinyurl.com/3kxvbvvs

Andrei Bursuc (@abursuc) 's Twitter Profile Photo

Thrilled to announce our #CVPR2025 workshop on Embodied Intelligence for Autonomous Systems on the Horizon featuring a crazy line-up of speakers and challenges. Mark it in your agendas and also in your registration #cvpr2025

Nikolai Matni (@nikolaimatni) 's Twitter Profile Photo

Tired of the aggressive mediocrity of robust control (RC) and the unreliability of certainty equivalent (CE) control?! Then try domain randomization (DR)! We prove that DR-based control of an unknown linear system is nearly as efficient as CE control, and nearly as reliable as RC
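
To make the DR idea concrete, here is a minimal sketch of domain-randomized controller synthesis for an uncertain linear system: sample plausible (A, B) models around a nominal estimate, then optimize a single feedback gain that minimizes the average finite-horizon LQR cost over the samples. The dimensions, sampling ranges, and horizon are illustrative assumptions, not the paper's setup or proof technique.

```python
# Sketch of domain-randomized (DR) linear control synthesis; illustrative only.
import torch

torch.manual_seed(0)
n, m, horizon, n_models = 2, 1, 50, 64

# Nominal estimates of the unknown dynamics x_{t+1} = A x_t + B u_t,
# plus sampled perturbations representing model uncertainty.
A_hat = torch.tensor([[1.0, 0.1], [0.0, 1.0]])
B_hat = torch.tensor([[0.0], [0.1]])
A_samples = A_hat + 0.05 * torch.randn(n_models, n, n)
B_samples = B_hat + 0.05 * torch.randn(n_models, n, m)

Q, R = torch.eye(n), 0.1 * torch.eye(m)
K = torch.zeros(m, n, requires_grad=True)       # single static feedback gain
opt = torch.optim.Adam([K], lr=0.05)

def expected_cost(K):
    """Average finite-horizon LQR cost of u = -K x over the sampled models."""
    x = torch.ones(n_models, n, 1)              # shared initial state
    cost = 0.0
    for _ in range(horizon):
        u = -K @ x
        cost = cost + (x.transpose(1, 2) @ Q @ x + u.transpose(1, 2) @ R @ u).mean()
        x = A_samples @ x + B_samples @ u
    return cost

for step in range(300):
    opt.zero_grad()
    loss = expected_cost(K)
    loss.backward()
    opt.step()

print("DR gain:", K.detach())
```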

Antonio Loquercio (@antoniloq) 's Twitter Profile Photo

Last day of classes for our course (antonilo.github.io/real_world_rob…) at Penn Electrical and Systems Engineering, co-taught with Dinesh Jayaraman. Congratulations to Zac Ravichandran and Ignacio Hounie for best project, and Lee Milburn for best paper presentation! Teaching this class has been a fantastic experience!

Antonio Loquercio (@antoniloq) 's Twitter Profile Photo

We're continuing to push forward the idea of multi-sensory 3D simulators. I believe they will be very useful for AR/VR and (most importantly for me) robotics. If you want to know the key ideas behind it, come to our #CVPR2025 poster today (06/13) at 10:30 (#151).

Yixuan Wang (@yxwangbot) 's Twitter Profile Photo

🤖 Do VLA models really listen to language instructions? Maybe not 👀 🚀 Introducing our RSS paper: CodeDiffuser -- using VLM-generated code to bridge the gap between **high-level language** and **low-level visuomotor policy** 🎮 Try the live demo: robopil.github.io/code-diffuser/ (1/9)
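
As a rough sketch of the "VLM-generated code" idea, the snippet below asks a vision-language model to emit a small Python function that grounds the instruction into a concrete target the low-level policy can consume. `query_vlm`, `select_target`, and the canned response are hypothetical placeholders so the sketch runs end to end; they are not the CodeDiffuser API.

```python
# Sketch of using VLM-generated code to ground language; illustrative only.
import numpy as np

def query_vlm(prompt: str) -> str:
    # Placeholder: a real system would call a VLM here. We return a canned
    # snippet so the sketch is self-contained and runnable.
    return (
        "def select_target(object_positions):\n"
        "    # pick the object closest to the origin\n"
        "    names = list(object_positions)\n"
        "    return min(names, key=lambda n: np.linalg.norm(object_positions[n]))\n"
    )

instruction = "pick up the mug nearest to the robot"
code = query_vlm(f"Write select_target(object_positions) for: {instruction}")

namespace = {"np": np}
exec(code, namespace)                     # compile the VLM-generated function
select_target = namespace["select_target"]

object_positions = {"mug": np.array([0.2, 0.1]), "cup": np.array([0.6, 0.4])}
target = select_target(object_positions)  # high-level language -> concrete target
print("conditioning the visuomotor policy on:", target)
```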