Pranay Mathur (@pranay_mathur17) 's Twitter Profile
Pranay Mathur

@pranay_mathur17

MLE Perception@Cobot | Prev. at @MathWorks @GoogleOSS @arlteam | Alumni @GeorgiaTech @bitspilaniindia

ID: 1736517024237694976

https://matnay.github.io · Joined 17-12-2023 22:41:50

18 Tweets

30 Followers

81 Following

Zhenjun Zhao (@zhenjun_zhao) 's Twitter Profile Photo

Neural Visibility Field for Uncertainty-Driven Active Mapping

Shangjie Xue, Jesse Dill, Pranay Mathur, Frank Dellaert, Panagiotis Tsiotras, Danfei Xu

tl;dr: visibility -> uncertainty estimation; distribution of a color along a ray -> GMM -> entropy -> next best view

arxiv.org/pdf/2406.06948

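Read as a recipe, the tl;dr above suggests a simple scoring loop. Here is a minimal sketch of that loop, not the paper's code: the per-ray GMM parameters are assumed to come from the trained NeRF/visibility model, and since GMM entropy has no closed form, a standard upper bound stands in for it.

```python
import numpy as np

def gmm_entropy_upper_bound(weights, variances):
    """Standard upper bound on the entropy of a 1-D Gaussian mixture:
    H(X) <= H(Z) + sum_k w_k * H(X | Z = k), with Z the component index."""
    weights = np.asarray(weights, dtype=float)
    variances = np.asarray(variances, dtype=float)
    h_mixing = -np.sum(weights * np.log(weights + 1e-12))       # entropy of mixing weights
    h_components = 0.5 * np.log(2.0 * np.pi * np.e * variances)  # per-Gaussian entropy
    return h_mixing + np.sum(weights * h_components)

def score_view(ray_mixtures):
    """Uncertainty of one candidate view: summed color entropy over its rays.
    `ray_mixtures` is a list of (weights, variances) pairs, one pair per ray."""
    return sum(gmm_entropy_upper_bound(w, v) for w, v in ray_mixtures)

def next_best_view(candidate_views):
    """Pick the candidate view whose rays carry the most color uncertainty."""
    return int(np.argmax([score_view(view) for view in candidate_views]))
```
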
Shangjie Xue (@shangjiexue) 's Twitter Profile Photo

How can robots efficiently explore and map unknown environments? 🤖📷 Introducing Neural Visibility Field (NVF), a principled framework to quantify uncertainty in NeRF for next-best-view planning. #CVPR2024 1/6 🌐 arxiv.org/abs/2406.06948 👇 Thread

Danfei Xu (@danfei_xu) 's Twitter Profile Photo

Active perception with NeRF! It’s quite rare to see a work that is both principled and empirically effective. Neural Visibility Field (NVF) led by Shangjie Xue is a delightful exception. NVF unifies both visibility and appearance uncertainty in a Bayes net framework and achieved

Utkarsh Mishra (@utkarshm0410) 's Twitter Profile Photo

How can robots compositionally generalize over multi-object multi-robot tasks for long-horizon planning? At #CoRL2024, we introduce Generative Factor Chaining (GFC), a diffusion-based approach that composes spatial-temporal factors into long-horizon skill plans. (1/7)

Simar Kareer (@simar_kareer) 's Twitter Profile Photo

Introducing EgoMimic - just wear a pair of Project Aria @Meta smart glasses 👓 to scale up your imitation learning datasets! Check out what our robot can do. A thread below👇

Dhruv Patel (@dhruvpatel2012) 's Twitter Profile Photo

Presenting EgoMimic at #CoRL2024! 🎉 Effortless data collection with Project Aria @Meta glasses—just wear & go. Our low-cost manipulator leverages this scalable data to perform grocery handling, laundry, coffee-making & more. Thrilled to be a part of this effort! egomimic.github.io

Project Aria @Meta (@meta_aria) 's Twitter Profile Photo

The Georgia Tech School of Interactive Computing is presenting EgoMimic at #CoRL2024. 🎉 Using egocentric data from Aria glasses, the team trained their robot to seamlessly perform everyday tasks. EgoMimic is entirely open source, from the robot hardware to the dataset and learning algorithms. Check out

Danfei Xu (@danfei_xu) 's Twitter Profile Photo

We started this moonshot project a year ago. Now we are excited to share our progress on robot learning from egocentric human data 🕶️🤲 Key idea: Egocentric human data is robot data in disguise. By bridging the kinematic, visual, and distributional gap, we can directly leverage
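The "robot data in disguise" idea lends itself to a short illustration. Below is a minimal sketch under my own assumptions, not the EgoMimic training code: one policy is co-trained on mixed batches of egocentric human demos and robot demos, after both are normalized into a shared observation/action space; the 50/50 weighting and the `obs`/`actions` batch layout are hypothetical.

```python
import torch
import torch.nn.functional as F

def cotrain_step(policy, human_batch, robot_batch, optimizer, human_weight=0.5):
    """One gradient step on a mix of human and robot demonstrations. Both
    batches hold `obs` and `actions` already normalized into a shared
    observation/action space (e.g., wrist poses rather than joint angles),
    which is what bridges the kinematic and visual gap between embodiments."""
    loss_human = F.mse_loss(policy(human_batch["obs"]), human_batch["actions"])
    loss_robot = F.mse_loss(policy(robot_batch["obs"]), robot_batch["actions"])
    loss = human_weight * loss_human + (1.0 - human_weight) * loss_robot
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```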

Danfei Xu (@danfei_xu) 's Twitter Profile Photo

Is teleoperation + BC our ultimate path to productizing Robot Learning?

Well ... Talk is cheap, show me your data & policy!

We are thrilled to organize a teleoperation and imitation learning challenge at #ICRA2025, with a total prize pool of $200,000 (cash + robots)!

General

Danfei Xu (@danfei_xu) 's Twitter Profile Photo

Thrilled to share this story covering our collaboration with Project Aria @Meta and Reality Labs at Meta! Human data is robot data in disguise. Imitation learning is human modeling. We are at the beginning of something truly revolutionary, both for robotics and for human-level AI beyond language.

Ted Xiao (@xiao_ted) 's Twitter Profile Photo

Meta is a dark horse in the AI + Robotics race. The R&D from Reality Labs and FAIR on world understanding (Aria, SAM, basically the whole Oculus stack) is an early general robot, just without a body. Congrats to the EgoMimic team, and looking forward to future work from Meta!

The Humanoid Hub (@thehumanoidhub) 's Twitter Profile Photo

Georgia Tech's research uses Meta’s Project Aria glasses to train humanoid robots with egocentric data. PhD student Simar Kareer developed an algorithm that leverages human data to enhance robot learning, achieving a 400% performance boost with just 90 minutes of recordings.

Snehal Jauhri (@snehaljauhri) 's Twitter Profile Photo

Excited to announce EgoAct🥽🤖: the 1st Workshop on Egocentric Perception & Action for Robot Learning @ #RSS2025 in LA!

We’re bringing together researchers exploring how egocentric perception can drive next-gen robot learning!

Full info: egoact.github.io/rss2025

Robotics: Science and Systems

Georgia Tech Computing (@gtcomputing) 's Twitter Profile Photo

The robots 🤖 are coming...to Atlanta.

The College's faculty and students are contributing to the latest in robotics research at the field's largest and most cutting-edge conference, #ICRA2025, May 19-23.

Discover all of Georgia Tech's experts at IEEE ICRA, with

Danfei Xu (@danfei_xu) 's Twitter Profile Photo

Looking forward to officially presenting EgoMimic at ICRA25! The entire team will be there. Reach out and chat about cool new ideas on human data for robot learning!

Danfei Xu (@danfei_xu) 's Twitter Profile Photo

Honored to receive the NSF CAREER Award from the Foundational Research in Robotics (FRR) program!

Deep gratitude to my Georgia Tech School of Interactive Computing and Georgia Tech Computing colleagues and the robotics community for their unwavering support. Grateful to the U.S. National Science Foundation for continuing to fund the future of robotics.

Alexander Soare (@asoare159) 's Twitter Profile Photo

At Cobot, we've been trying Real-Time Action Chunking from Physical Intelligence, which keeps robot rollouts smooth even with slow inference. Our blog shares a tweak for even slicker control. Check it out: alexander-soare.github.io/robotics/2025/…
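For context, here is a minimal sketch of the real-time action chunking idea, under my own assumptions rather than Cobot's or Physical Intelligence's implementation: the next chunk is requested before the current one runs out, and a simple linear cross-fade stands in for RTC's actual inpainting-style consistency step. The `policy`, `get_obs`, and `apply_action` hooks are hypothetical.

```python
import numpy as np

def blend(tail, head):
    """Linearly cross-fade from the old chunk's tail into the new chunk's head."""
    alpha = np.linspace(0.0, 1.0, len(tail))[:, None]  # 0 -> 1 over the overlap
    return (1.0 - alpha) * tail + alpha * head

def rollout(policy, get_obs, apply_action, n_steps, chunk_len=16, overlap=4):
    """Execute action chunks back to back without pausing for inference."""
    chunk = np.asarray(policy(get_obs()))  # shape (chunk_len, action_dim)
    i = 0                                  # index into the current chunk
    next_chunk = None
    for _ in range(n_steps):
        if i == chunk_len - overlap:
            # Request the next chunk early; its first `overlap` actions cover
            # the same timesteps as this chunk's tail. In a real system this
            # call runs in a background thread while the robot keeps moving.
            next_chunk = np.asarray(policy(get_obs()))
            chunk[-overlap:] = blend(chunk[-overlap:], next_chunk[:overlap])
        apply_action(chunk[i])
        i += 1
        if i == chunk_len:                 # hand over, skipping executed steps
            chunk, i, next_chunk = next_chunk.copy(), overlap, None
```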