Justin Lidard (@justinlidard)'s Twitter Profile
Justin Lidard

@justinlidard

PhD student in robotics @Princeton with @Majumdar_Ani. Past intern at @ToyotaResearch.

ID: 1773110693073694720

Link: http://jlidard.github.io
Joined: 27-03-2024 22:11:59

4 Tweets

30 Followers

76 Following

Allen Z. Ren (@allenzren)'s Twitter Profile Photo

👇 Introducing DPPO, Diffusion Policy Policy Optimization. DPPO optimizes a pre-trained Diffusion Policy using policy gradients from RL, showing surprising improvements over a variety of baselines across benchmarks and in sim2real transfer. diffusion-ppo.github.io
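
The gist: the reverse-diffusion sampling chain is itself a stochastic policy, so its denoising steps can be scored with a policy-gradient objective. Below is a minimal, hypothetical PyTorch sketch of that idea using plain REINFORCE (DPPO itself builds on PPO-style clipped updates); `DenoiseNet`, the dimensions, and the reward are all stand-ins, not the authors' code.

```python
import torch
import torch.nn as nn

class DenoiseNet(nn.Module):
    """Toy noise-prediction network: (noisy action, obs, step) -> mean."""
    def __init__(self, obs_dim=8, act_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim))

    def forward(self, a_k, obs, k):
        return self.net(torch.cat([a_k, obs, k], dim=-1))

policy = DenoiseNet()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
K, sigma = 5, 0.1                       # denoising steps, fixed noise scale

obs = torch.randn(32, 8)                # dummy batch of observations
a = torch.randn(32, 2)                  # start the chain from Gaussian noise
logps = []
for k in reversed(range(K)):            # each denoising step is an "action"
    kk = torch.full((32, 1), float(k))
    dist = torch.distributions.Normal(policy(a.detach(), obs, kk), sigma)
    a = dist.sample()
    logps.append(dist.log_prob(a).sum(-1))

reward = -a.pow(2).sum(-1)              # stand-in for the environment return
# REINFORCE over the whole chain; DPPO instead uses PPO-style updates.
loss = -(torch.stack(logps).sum(0) * (reward - reward.mean())).mean()
opt.zero_grad(); loss.backward(); opt.step()
```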

Anirudha Majumdar (@majumdar_ani)'s Twitter Profile Photo

Accepted to #CoRL2024: our work on calibrated uncertainty quantification for perception systems, yielding end-to-end statistical assurances for navigation! Perceive with Confidence: arxiv.org/abs/2403.08185. Work led by Anushri Dixit during her postdoc at Princeton University!
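
The headline recipe, as I read it, is conformal calibration of perception error followed by obstacle inflation. A minimal sketch under assumed names and dummy numbers (not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
# Calibration: nonconformity score = perception error (meters) on held-out
# scenes where ground-truth obstacle geometry is known (dummy data here).
scores = np.abs(rng.normal(scale=0.05, size=500))
alpha = 0.05                                   # allowed miss rate
n = len(scores)
# Split-conformal quantile at level ceil((n+1)(1-alpha))/n.
q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))

# Deployment: inflate each perceived obstacle by q before planning, so the
# true obstacle is covered with probability >= 1 - alpha.
perceived_radius = 0.30
safe_radius = perceived_radius + q
```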

Anirudha Majumdar (@majumdar_ani)'s Twitter Profile Photo

Uncertainty quantification is a big missing piece of the puzzle in robotics. Very excited to be organizing a workshop at #RSS2025 with Lars Lindemann and Anastasios Nikolas Angelopoulos! We have a stellar cast of speakers and panelists from academia and industry to discuss the role and…

Allen Z. Ren (@allenzren)'s Twitter Profile Photo

Attending #ICLR2025 next week! I will be presenting Diffusion Policy Policy Optimization (DPPO) at the Friday morning poster session with Lars Ankile: diffusion-ppo.github.io. I also recently joined Physical Intelligence. Would love to chat about what we've been up to at Pi!

Haimin Hu (@haiminhu)'s Twitter Profile Photo

What a memorable way to wrap up my PhD: I had the privilege of presenting my work on safe human-centered robotics at the GRASP SFI seminar series at Penn Engineering. Huge thanks to the GRASP Laboratory for inviting me and to everyone who attended! youtube.com/watch?v=50ZId5…

Haimin Hu (@haiminhu)'s Twitter Profile Photo

Introducing the Human-Centered Safety Filter (HCSF), to be presented at #RSS2025! Can robot co-pilots keep us safe while preserving human agency? 💡 We propose an HCSF for complex shared-autonomy tasks like car racing. Project page: safe-racing.github.io Arxiv: lnkd.in/gtgDYNN5
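
For context, safety filters in general follow a minimal-intervention pattern: accept the human's command whenever it keeps the state in a safe set, and otherwise replace it with the nearest safe input. A toy 1-D sketch of that generic pattern (not the paper's learned HCSF):

```python
import numpy as np

def safety_filter(x, u_human, x_max=1.0, dt=0.1, u_bound=1.0):
    """Least-restrictive filter for a 1-D integrator x_next = x + dt * u.
    Passes the human command through when safe, else minimally overrides."""
    # Safe set: |x| <= x_max. Inputs that keep the next state inside it:
    u_lo = max((-x_max - x) / dt, -u_bound)
    u_hi = min((x_max - x) / dt, u_bound)
    return float(np.clip(u_human, u_lo, u_hi))   # smallest deviation

print(safety_filter(x=0.95, u_human=1.0))   # near boundary: clipped to 0.5
print(safety_filter(x=0.0,  u_human=0.3))   # safe: passes through as 0.3
```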

C's Robotics Paper Notes (@roboreading)'s Twitter Profile Photo

arxiv.org/pdf/2505.00779: Uncertainty-aware Latent Safety Filters for Avoiding OOD Failures. Uses the world model's epistemic uncertainty as a proxy for potential OOD hazards, calibrates it with conformal prediction (CP), then performs reachability analysis in the latent space with the calibrated uncertainty.

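Summarizing that pipeline in code form: score latent states by ensemble disagreement, calibrate a threshold on in-distribution data with conformal prediction, and hand states above the threshold to the reachability analysis as unsafe. A rough sketch with made-up models and shapes, not the paper's implementation:

```python
import numpy as np

def epistemic_uncertainty(z, ensemble):
    """Disagreement across an ensemble of latent dynamics models."""
    preds = np.stack([m(z) for m in ensemble])     # (M, N, d)
    return preds.std(axis=0).mean(axis=-1)         # (N,)

rng = np.random.default_rng(0)
# Toy ensemble: linear latent dynamics with independently drawn weights.
ensemble = [lambda z, W=rng.normal(size=(4, 4)): z @ W for _ in range(5)]

# Calibrate on in-distribution latents; tau is the conformal quantile.
z_calib = rng.normal(size=(500, 4))
scores = epistemic_uncertainty(z_calib, ensemble)
alpha, n = 0.05, len(scores)
tau = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))

# Run time: latents scoring above tau are flagged OOD and treated as
# failure states by the latent-space reachability analysis.
is_ood = epistemic_uncertainty(rng.normal(size=(10, 4)) * 3.0, ensemble) > tau
```
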
Lihan Zha (@lihanzha)'s Twitter Profile Photo

Want your imitation learning policy to generalize better, but unsure how to collect data to achieve this? 🤖🤔 Enter Factored Scaling Curves (FSC): a tool that quantifies how policy success scales with the number of demonstrations for each environmental factor, enabling principled data collection 📈. 🌐
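
The idea, as described, is to estimate a separate success-vs-demos curve per environmental factor and use it to decide where extra demonstrations help most. Below is a sketch that fits such a curve under an assumed saturating power-law form (the paper's exact parameterization may differ), with dummy numbers:

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_curve(n, a, b):
    """Assumed form: success saturates toward 1 as demo count n grows."""
    return 1.0 - a * n ** (-b)

# Measured success rates vs. demo count for one factor (dummy numbers).
n_demos = np.array([10, 25, 50, 100, 200])
success = np.array([0.42, 0.58, 0.70, 0.79, 0.86])

(a, b), _ = curve_fit(scaling_curve, n_demos, success, p0=[1.0, 0.5])
# Extrapolate: predicted success at 400 demos; collect more data for the
# factor whose fitted curve is still rising fastest.
print(scaling_curve(400.0, a, b))
```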

Kensuke Nakamura (@kensukenk)'s Twitter Profile Photo

World models help scale safe control to extremely complex systems, but they introduce their own set of challenges. Check out Junwon’s uncertainty-aware latent safety filters that help us deal with hallucinating world models!

Anirudha Majumdar (@majumdar_ani)'s Twitter Profile Photo

Do reasoning language models know when they don't know? Or does deeper reasoning lead to models becoming even more confidently wrong? 🤔 Check out our new paper to find some answers! arxiv.org/abs/2506.18183
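
"Confidently wrong" is typically measured with calibration metrics such as expected calibration error (ECE), which compares a model's stated confidence with its empirical accuracy inside confidence bins. A minimal sketch of that metric (not the paper's evaluation protocol):

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """ECE: bin-weighted gap between mean confidence and accuracy."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

rng = np.random.default_rng(0)
conf = rng.uniform(0.6, 1.0, size=1000)        # dummy stated confidences
correct = rng.uniform(size=1000) < conf - 0.2  # systematically overconfident
print(expected_calibration_error(conf, correct))
```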