Anirudha Majumdar (@majumdar_ani)'s Twitter Profile
Anirudha Majumdar

@majumdar_ani

Associate Professor in Robotics @Princeton. 20% Research Scientist @GoogleDeepMind in Princeton.

ID: 1238890699111751680

Link: http://irom-lab.princeton.edu/
Joined: 14-03-2020 18:12:27

431 Tweets

5.5K Followers

543 Following

Anirudha Majumdar (@majumdar_ani)

Very excited to be co-organizing the #RSS2025 workshop on Out-of-Distribution Generalization in Robotics! We have a wonderful list of speakers and panelists -- looking forward to a stimulating set of talks and discussions.

Allen Z. Ren (@allenzren)

Data really matters for policies generalizing to diverse environment conditions, as seen in pi05 etc. In this work we find a strong correlation between the amount of data and test performance, enabling targeted data collection. We also find that an offline metric (policy embedding) gives a strong signal!

Pedro Milcent (@milcentpedro)

Great new paper by Lihan Zha, Apurva Badithela, Michael Zhang, Justin Lidard, Anirudha Majumdar, Allen Z. Ren, Dhruv Shah, and team. As we scale data collection for robotics, it must be done intelligently by choosing which data variations to focus resources on. This paper provides a

Neil deGrasse Tyson (@neiltyson)

The 80-year recipe stoking America’s Prosperity:

1) Free-market capital investments in brilliant ideas…
2) enabled by Engineering Innovations…
3) based on Science Research in Universities…
4) funded by Grants from the US Government.

It’s time more people understood this.
Anirudha Majumdar (@majumdar_ani)

How can we boost the visual generalization of VLAs without any fine-tuning? Asher Hancock will be presenting at #ICRA2025 on Wed in the Vision-Language-Action Models session (WeDT21): Run-Time Observation Interventions Make Vision-Language-Action Models More

Fei Xia (@xf1280)

Excited that our work on Gemini Robotics and Gemini spatial understanding has just been featured on the #GoogleIO stage! I believe that a frontier model possessing strong real-world understanding capabilities represents the ultimate path to embodied AGI, and we are making rapid

Princeton University (@princeton)

"We lead the world in so many ways, including in scientific innovation and economic dynamism. Our great universities are the envy of the world and a crucial national asset." — Federal Reserve chair and Princeton Politics graduate Jerome Powell '75 at #Princeton25 Baccalaureate service.

Anirudha Majumdar (@majumdar_ani)

Foundation models can allow autonomous driving agents to perform commonsense reasoning, but they are also slow to run online. Our new paper distills the reasoning process from a VLM into an end-to-end driving model for fast inference: verdi-driving.github.io

Eric Jang (@ericjang11)

Revoking visas of Chinese students studying in critical fields like AI and Robotics is incredibly short-sighted and harmful to America’s long-term prosperity. We want the best from every country to work for team America.

Anirudha Majumdar (@majumdar_ani)

Can we train robots to plan where to look to find an object described with language? Check out our new paper that combines prior knowledge from a VLM with a world modeling approach for planning: 🌐 WoMAP: World Models For Embodied Open-Vocabulary Object

Lihan Zha (@lihanzha)

Robots struggle to find objects the way humans can. Why? Understanding "behind the box" (semantics) isn't enough – they need to plan precise, efficient actions to get there. Key Insight: VLMs propose where to look ("Maybe behind the box?"). World models evaluate VLM proposals and

Lars Lindemann (@larslindemann2)

Attending RSS at USC next week? We are organizing a workshop on "Statistical Uncertainty Quantification in the Era of AI-Enabled Robots" and have a fantastic lineup of speakers from academia and industry: sites.google.com/view/rss2025-w…. Looking forward to seeing many of you 😎

Lihan Zha (@lihanzha)

Join us at two workshops #RSS2025 on 6/21!
📍 Resource Constrained Robotics (RTH109)
🗣️ Oral talk: 11:00–11:15

📍 Continual Robot Learning from Humans (OHE132)
🖼️ Spotlight poster: 10:30–11:00

Come by and chat—we’re excited to share our work!
Google DeepMind (@googledeepmind)

We’re bringing powerful AI directly onto robots with Gemini Robotics On-Device. 🤖 It’s our first vision-language-action model to help make robots faster, highly efficient, and adaptable to new tasks and environments - without needing a constant internet connection. 🧵

Lucas Wang@CoRL (@wlt5678)

Gemini Robotics zero-shot picks with a dexterous hand: no prior demos, not even videos. It recognized the object, failed to grasp it (slippery surface), retried with new angles, got help, nailed the pick, and adjusted post-pick. Mad respect to the DeepMind team. Now I really worry about human labor 😅