Venkatesh (@venkyp2000)'s Twitter Profile
Venkatesh

@venkyp2000

Robotics Grad Researcher @CILVRatNYU @nyuniversity; Alum @IITIOfficial

ID: 838733070404468736

Link: http://venkyp.com | Joined: 06-03-2017 12:48:41

29 Tweets

103 Followers

139 Following

Siddhant Haldar (@haldar_siddhant)'s Twitter Profile Photo

The most frustrating part of imitation learning is collecting huge amounts of teleop data. But why teleop robots when robots can learn by watching us? Introducing Point Policy, a novel framework that enables robots to learn from human videos without any teleop, sim2real, or RL.

Irmak Guzey (@irmakkguzey)'s Twitter Profile Photo

Despite great advances in learning dexterity, hardware remains a major bottleneck. Most dexterous hands are either bulky, weak or expensive. I’m thrilled to present the RUKA Hand — a powerful, accessible research tool for dexterous manipulation that overcomes these limitations!

Anya Zorin (@anyazorin)'s Twitter Profile Photo

Super excited to present our open-source robot hand RUKA! I had a lot of fun working on this with Irmak Guzey and all our amazing collaborators: Billy Yan, Aadhithya, Lisa Kondrich, Nikhil Bhattasali, and Lerrel Pinto. Check out our website at ruka-hand.github.io

Mahi Shafiullah 🏠🤖 (@notmahi)'s Twitter Profile Photo

Morning, #ICRA2025 IEEE ICRA! Bring something small 🍋🍑 and have our Robot Utility Model pick it up at our EXPO demo today from 1-5 PM, between halls A2/A3! Talk and poster are right before, 11:15-12:15 in room 411. Also, DM if you want to chat 🤖 for the messy, real world!

Akash Sharma (@akashshrm02)'s Twitter Profile Photo

Robots need touch in human-like hands to reach the goal of general manipulation. However, today's approaches either don't use tactile sensing at all or rely on a separate architecture for each tactile task. Can 1 model improve many tactile tasks? 🌟Introducing Sparsh-skin: tinyurl.com/y935wz5c 1/6

Lerrel Pinto (@lerrelpinto)'s Twitter Profile Photo

Imagine robots learning new skills—without any robot data. Today, we're excited to release EgoZero: our first steps in training robot policies that operate in unseen environments, solely from data collected through humans wearing Aria smart glasses. 🧵👇

Lerrel Pinto (@lerrelpinto)'s Twitter Profile Photo

Teaching robots to learn only from RGB human videos is hard! In Feel The Force (FTF), we teach robots to mimic the tactile feedback humans experience when handling objects. This allows for delicate, touch-sensitive tasks—like picking up a raw egg without breaking it. 🧵👇

Ademi Adeniji (@ademiadeniji)'s Twitter Profile Photo

Everyday human data is robotics’ answer to internet-scale tokens. But how can robots learn to feel—just from videos?📹 Introducing FeelTheForce (FTF): force-sensitive manipulation policies learned from natural human interactions🖐️🤖 👉 feel-the-force-ftf.github.io 1/n
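
As a loose illustration of what force-sensitive control can look like (not the actual FTF pipeline), the hypothetical Python sketch below closes a gripper until a commanded grip force is reached; the read_force and step_gripper interfaces, step sizes, and tolerances are placeholders assumed for this example.

# Minimal sketch (not the official FTF controller): a force-servoing loop that
# closes a gripper until the measured grip force matches a commanded target.
# The read_force and step_gripper callables are hypothetical hardware hooks.
import time

def grasp_with_target_force(read_force, step_gripper, target_force_n,
                            tolerance_n=0.1, max_steps=200):
    """Close the gripper in small increments until the sensed force (N)
    matches the force commanded for this grasp."""
    for _ in range(max_steps):
        force = read_force()                      # current tactile/force reading (N)
        error = target_force_n - force
        if abs(error) <= tolerance_n:
            return True                           # target force reached
        step_gripper(0.2 if error > 0 else -0.2)  # close/open by a small amount (mm)
        time.sleep(0.01)
    return False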

Raunaq Bhirangi (@raunaqmb)'s Twitter Profile Photo

Tactile sensing is gaining traction, but slowly. Why? Because integration remains difficult. But what if adding touch sensors to your robot was as easy as hitting “print”? Introducing eFlesh: a 3D-printable, customizable tactile sensor. Shape it. Size it. Print it. 🧶👇

Lerrel Pinto (@lerrelpinto)'s Twitter Profile Photo

We have developed a new tactile sensor, called e-Flesh, with a simple working principle: measure deformations in 3D printable microstructures. Now all you need to make tactile sensors is a 3D printer, magnets, and magnetometers! 🧵
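
As a rough illustration of that working principle (not the released e-Flesh code), the hypothetical Python snippet below maps a change in a 3-axis magnetometer reading to a force estimate; the linear calibration constant and function names are assumptions made for illustration only.

# Minimal sketch (not the e-Flesh pipeline): estimate contact force from the
# change in magnetic field seen by a magnetometer as the printed
# microstructure deforms and moves an embedded magnet.
import numpy as np

def estimate_force(field_xyz, baseline_xyz, gain=0.05):
    """Estimate contact force (N) from the field change (uT).

    field_xyz    : current magnetometer reading, shape (3,)
    baseline_xyz : reading with no contact, shape (3,)
    gain         : assumed linear calibration constant (N per uT)
    """
    delta = np.asarray(field_xyz) - np.asarray(baseline_xyz)
    # A simple linear model maps field change back to force; a real sensor
    # would fit this mapping from paired force/field measurements.
    return gain * np.linalg.norm(delta)

print(estimate_force([12.0, -3.5, 48.2], [10.0, -3.0, 45.0]))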

Minyoung Hwang (@robominyoung)'s Twitter Profile Photo

Interested in how generative AI can be used for human-robot interaction? We’re organizing the 2nd Workshop on Generative AI for Human-Robot Interaction (GenAI-HRI) at #RSS2025 in LA — bringing together the world's leading experts in the field. The workshop is happening on Wed,

Raunaq Bhirangi (@raunaqmb)'s Twitter Profile Photo

Generalization needs data. But data collection is hard for precise tasks like plugging USBs, swiping cards, inserting plugs, and keying locks. Introducing robust, precise VisuoTactile Local (ViTaL) policies: >90% success rates from just 30 demos and 45 min of real-world RL.🧶⬇️

Siddhant Haldar (@haldar_siddhant)'s Twitter Profile Photo

Current robot policies often face a tradeoff: they're either precise (but brittle) or generalizable (but imprecise). We present ViTaL, a framework that lets robots generalize precise, contact-rich manipulation skills across unseen environments with millimeter-level precision. 🧵

Zifan Zhao (@zifan_zhao_2718)'s Twitter Profile Photo

🚀 With minimal data and a straightforward training setup, our VisuoTactile Local (ViTaL) policy fuses egocentric vision + tactile feedback to achieve millimeter-level precision & zero-shot generalization! 🤖✨ Details ▶️ vitalprecise.github.io
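
As a rough sketch of what fusing egocentric vision with tactile feedback can look like in code (not the released ViTaL model), the hypothetical Python snippet below concatenates an image embedding with a tactile embedding before a small policy head; all dimensions, module names, and the action size are illustrative assumptions.

# Minimal sketch (not ViTaL's architecture): concatenate a precomputed image
# embedding with an encoded tactile reading, then predict an action.
import torch
import torch.nn as nn

class VisuoTactilePolicy(nn.Module):
    def __init__(self, img_dim=512, tactile_dim=64, action_dim=7):
        super().__init__()
        self.tactile_enc = nn.Sequential(nn.Linear(tactile_dim, 128), nn.ReLU())
        # Fused vision + touch features feed a small MLP that outputs actions.
        self.head = nn.Sequential(
            nn.Linear(img_dim + 128, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, img_feat, tactile):
        fused = torch.cat([img_feat, self.tactile_enc(tactile)], dim=-1)
        return self.head(fused)

policy = VisuoTactilePolicy()
action = policy(torch.randn(1, 512), torch.randn(1, 64))  # one fused forward pass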