
Ben Eysenbach
@ben_eysenbach
Prof @ Princeton CS working on AI/ML/RL.
🦋 @ben-eysenbach.bsky.social
ID: 1369723453150949376
https://ben-eysenbach.github.io/
10-03-2021 18:55:03
126 Tweets
4.4K Followers
0 Following


I'm super excited to share a new offline RL benchmark for goal-reaching tasks, led by Seohong Park and Kevin Frans! Lots of different datasets, excellent code, and well-tuned baseline implementations! Check out the list of research opportunities at the end.

It's been great working with Kyle Hatch on a new method that significantly extends the long-horizon reasoning capabilities of diffusion policies using generative video models!


I'm excited to share new work with Jens Tuyls and Chongyi Zheng on understanding skill learning algorithms! Turns out that mutual information _is_ enough, as long as you use an appropriate architecture and drop the "anti-exploratory" term from the reward.
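The variational mutual-information skill reward can be sketched in a few lines. The tweet doesn't specify which term is the "anti-exploratory" one, so the `drop_prior_term` flag below (dropping the uniform-prior `log p(z)` baseline) is purely illustrative, not the paper's exact formulation:

```python
import numpy as np

def skill_reward(disc_logits, z, num_skills, drop_prior_term=True):
    """Variational MI skill reward, r = log q(z|s) - log p(z), sketched.

    disc_logits: discriminator logits over skills for the current state s.
    z: index of the skill the agent was conditioned on.
    Setting drop_prior_term=True keeps only log q(z|s); which term the
    paper actually drops is an assumption here.
    """
    logits = disc_logits - disc_logits.max()          # numerical stability
    log_q = logits - np.log(np.exp(logits).sum())     # log softmax = log q(z|s)
    r = log_q[z]
    if not drop_prior_term:
        r -= np.log(1.0 / num_skills)                 # subtract log p(z), uniform prior
    return r
```

With a maximally uncertain discriminator the full reward is zero, while the log q(z|s) term alone is log(1/K) — the baseline only shifts the reward by a constant when the prior is uniform.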


Excited to share new work led by Vivek Myers and @cathy_ji_writes that proves you can learn to reach distant goals by solely training on nearby goals. The key idea is a new form of invariance. This invariance implies generalization w.r.t. the horizon.
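A toy illustration of why nearby-goal training can suffice: on a 1-D chain, a policy trained only on goals within a few steps can reach arbitrarily distant goals by chaining through intermediate waypoints. This is a hypothetical sketch of the compositional idea, not the paper's method; `nearby_policy` and `reach_far` are invented names:

```python
def nearby_policy(s, g):
    # Greedy step toward the goal; pretend this was only ever trained
    # on nearby goals (|g - s| <= 3).
    assert abs(g - s) <= 3, "policy only trusted on nearby goals"
    return s + (1 if g > s else -1 if g < s else 0)

def reach_far(s, g, waypoints):
    # Reach a distant goal by composing nearby-goal reaching through waypoints.
    path = [s]
    for w in list(waypoints) + [g]:
        while path[-1] != w:
            path.append(nearby_policy(path[-1], w))
    return path
```

Chaining `reach_far(0, 9, [3, 6])` walks the full chain 0 → 9 even though each individual policy call only ever sees a goal at most 3 steps away.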



Check out Michał Bortkiewicz's poster this afternoon at #ICLR2025!


Do huge amounts of data give (offline) RL algorithms the capacity to perform long-horizon reasoning? A: No. Today's algorithms are bottlenecked by the task horizon, not dataset size. Seohong Park's new paper gives an algorithm that addresses the horizon bottleneck to boost performance.

Honored to be recognized with the Alfred Rheinstein Faculty Award! Thanks to my students+collaborators for making it possible! Congrats to Ellen Zhong as well!

What makes RL hard is the _time_ axis⏳, so let's pre-train RL policies to learn about _time_! Same intuition as successor representations 🧠, but made scalable with modern GenAI models 🚀. Excited to share new work led by Chongyi Zheng, together with Seohong Park and Sergey Levine!
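The "learn about time" intuition is commonly instantiated with a temporal contrastive objective: encode a state and a state reached later on the same trajectory, and train so matching pairs score higher than mismatched ones. Below is a minimal InfoNCE sketch of that idea in NumPy; it is a generic contrastive loss, not the specific architecture from the paper:

```python
import numpy as np

def infonce_loss(phi_s, psi_f):
    """InfoNCE over (state, future-state) pairs.

    phi_s: (B, d) encodings of states s_t.
    psi_f: (B, d) encodings of future states s_{t+k} from the same trajectories;
           row i of psi_f is the positive for row i of phi_s.
    """
    logits = phi_s @ psi_f.T                                # pairwise similarities
    logits = logits - logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                     # positives on the diagonal
```

When the encodings already separate matching pairs sharply (e.g. near-orthogonal codes), the loss approaches zero; for random encodings it sits near log B.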