Prithwish Dan (@prithwish_dan) 's Twitter Profile
Prithwish Dan

@prithwish_dan

sc: prithwish_dan

ID: 727991533845254145

Joined: 04-05-2016 22:41:21

223 Tweets

102 Followers

150 Following

Guilderland Schools (@guilderlandcsd) 's Twitter Profile Photo

UPDATED: Due to inclement weather, GCSD will be closed on Monday, Dec. 2. District offices open. Stay safe and warm! ❄️

GuilderlandAthletics (@godutchathletix) 's Twitter Profile Photo

Congratulations to graduating senior Sheridan Dillon who will be attending Fairfield University to play baseball! Sheridan is planning on majoring in Business! Good luck Sheridan!

GuilderlandAthletics (@godutchathletix) 's Twitter Profile Photo

Congratulations to graduating senior Sean O'Brien who will be attending University of Buffalo to run cross country and track! Sean is planning on majoring in Engineering! Good luck Sean!

Sanjiban Choudhury (@sanjibac) 's Twitter Profile Photo

How can we enable LLMs to actively clarify ambiguous task specifications by gathering information from humans? Check out APRICOT at #CoRL2024! APRICOT combines LLMs, which propose diverse questions, with Bayesian Active Learning, which selects the most informative one to ask.
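The APRICOT tweet above describes a two-part loop: an LLM proposes diverse clarifying questions, and Bayesian active learning picks the most informative one to ask. As a rough, hypothetical sketch of that selection step only, assuming a discrete belief over candidate task specifications and a user-supplied answer model, one could score each LLM-proposed question by its expected information gain; none of the names below come from the actual APRICOT code.

```python
# Hypothetical sketch of the question-selection step the tweet describes:
# an LLM proposes candidate clarifying questions, and Bayesian active learning
# picks the one with the highest expected information gain over a belief about
# which task specification the human intends. All names here (belief,
# answer_likelihood, candidate_questions) are illustrative, not APRICOT's API.
import math

def entropy(probs):
    """Shannon entropy of a discrete belief."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def expected_information_gain(belief, question, answer_likelihood, answers):
    """Expected entropy reduction over task hypotheses if we ask `question`.

    belief: dict mapping task hypothesis -> prior probability
    answer_likelihood(ans, hyp, question): P(answer | hypothesis, question)
    answers: the possible human answers (e.g. ["yes", "no"])
    """
    prior = entropy(belief.values())
    gain = 0.0
    for ans in answers:
        # Marginal probability of this answer under the current belief.
        p_ans = sum(answer_likelihood(ans, h, question) * p for h, p in belief.items())
        if p_ans == 0:
            continue
        # Posterior belief if the human were to give this answer.
        posterior = {h: answer_likelihood(ans, h, question) * p / p_ans
                     for h, p in belief.items()}
        gain += p_ans * (prior - entropy(posterior.values()))
    return gain

def select_question(belief, candidate_questions, answer_likelihood, answers):
    """Ask the LLM-proposed question that is expected to be most informative."""
    return max(candidate_questions,
               key=lambda q: expected_information_gain(belief, q, answer_likelihood, answers))
```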

Sanjiban Choudhury (@sanjibac) 's Twitter Profile Photo

🚀 Come check out the *four* papers at #CoRL2024 from the Cornell PoRTaL Group! Each explores a unique angle on how robots can learn effectively with humans: - Learning by asking (APRICOT) - Learning by watching (RHyME, Time Your Rewards) - Learning to collaborate (MOSAIC) Details: 🧵👇

Kushal (@kushalk_) 's Twitter Profile Photo

We would love to train our robots with human videos. But humans move very differently from robots! How do we bridge this divide? Check out our work at #ICRA2025 "One-Shot Imitation under Mismatched Execution" on specifying tasks to robots via a prompt human video 🧵

Kushal (@kushalk_) 's Twitter Profile Photo

We're incredibly proud to see our work featured by the Cornell Chronicle! This work was co-led with the amazing Prithwish Dan and advised by Sanjiban Choudhury. Read the full story here - news.cornell.edu/stories/2025/0…

Sanjiban Choudhury (@sanjibac) 's Twitter Profile Photo

A core challenge we face when training robots from human videos (e.g., MotionTrack) is that the human basically has to move like a robot for it to work... Kushal and Prithwish Dan have gone deep on this over the past year; this is the first of many exciting papers that will solve this!

Gokul Swamy (@g_k_swamy) 's Twitter Profile Photo

Say ahoy to SAILOR ⛵: a new paradigm of *learning to search* from demonstrations, enabling test-time reasoning about how to recover from mistakes w/o any additional human feedback! SAILOR ⛵ outperforms Diffusion Policies trained via behavioral cloning on 5-10x the data!
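The SAILOR tweet contrasts *learning to search* with plain behavioral cloning but doesn't spell out the mechanism. Purely as an illustration of the generic test-time-search idea (not SAILOR's actual method), the sketch below assumes three hypothetical learned components, `policy`, `dynamics`, and `score`: it samples candidate action sequences, imagines their outcomes in the learned dynamics model, and executes the first action of the best sequence; that imagined lookahead is what lets an agent reason about recovering from a mistake instead of blindly imitating.

```python
# Illustrative-only sketch of a generic "search at test time" loop, contrasted
# with plain behavioral cloning. This is NOT SAILOR's actual algorithm; `policy`,
# `dynamics`, and `score` are hypothetical learned components supplied by the caller.
import numpy as np

def bc_step(policy, obs):
    """Behavioral cloning baseline: act on the policy's single prediction."""
    return policy(obs)

def search_step(policy, dynamics, score, obs, horizon=5, num_candidates=32,
                action_noise=0.1, rng=None):
    """Sample candidate action sequences around the policy, imagine their outcomes
    with the learned dynamics model, and return the first action of the best one."""
    rng = rng or np.random.default_rng()
    best_first, best_total = None, -np.inf
    for _ in range(num_candidates):
        sim_obs, total, first = obs, 0.0, None
        for t in range(horizon):
            # Perturb the policy's action to get diverse candidate sequences.
            action = np.asarray(policy(sim_obs), dtype=float)
            action = action + rng.normal(scale=action_noise, size=action.shape)
            if t == 0:
                first = action
            sim_obs = dynamics(sim_obs, action)   # imagined next state
            total += score(sim_obs)               # imagined progress / return
        if total > best_total:
            best_first, best_total = first, total
    return best_first
```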