Levi Lelis (@levilelis) 's Twitter Profile
Levi Lelis

@levilelis

Artificial Intelligence Researcher - Assistant Professor - University of Alberta - Canada CIFAR AI Chair (he/him, ele/dele).

ID: 60764724

Link: http://www.cs.ualberta.ca/~santanad · Joined: 28-07-2009 00:57:58

504 Tweets

640 Followers

505 Following

Matthew Macfarlane (@mattvmacfarlane) 's Twitter Profile Photo

Some great work from Amirhossein Rajabpour and Levi Lelis on neural policies vs programmatic policies for OOD generalization. I'm looking forward to discussing such topics further at the Workshop on Programmatic Representations for Agent Learning at the ICML Conference, which Levi is co-organising.

Levi Lelis (@levilelis) 's Twitter Profile Photo

Conferences should welcome refutations and critiques in their main track. Unfortunately, they often don't. We had a critique accepted once to a leading conference, but all three reviewers recommended rejection—thank you, AC! A special track for this type of work is a good start.

Jake Tuero (@jaketuero) 's Twitter Profile Photo

🛬 Vancouver for #icml, I’ll be presenting our work on Subgoal Guided Heuristic Search with Learned Subgoals on Tuesday from 4:30-7:00pm. Come stop by and say hello 👋

Levi Lelis (@levilelis) 's Twitter Profile Photo

Catch Jake at #ICML2025! He’s presenting “Subgoal-Guided Heuristic Search with Learned Subgoals” in Poster Session 2 West — tomorrow, 4:30–7:00 pm. arxiv.org/abs/2506.07255

Levi Lelis (@levilelis) 's Twitter Profile Photo

Sparsity can also be used to partially explain some of the successes of programmatic representations, such as FlashFill. DSLs and the way we search over the space of programs naturally give us sparse representations, which favor sample efficiency and OOD generalization.

Ndea (@ndea) 's Twitter Profile Photo

Are programmatic policies really better at generalizing OOD than neural policies, or are the benchmarks biased? This position paper revisits 4 prior studies and finds neural policies can match programmatic ones - if you adjust training (sparse observation, reward shaping, etc.)

Shao-Hua Sun (@shaohua0116) 's Twitter Profile Photo

Kicking off #RLC2025 with our Workshop on Programmatic Reinforcement Learning! This workshop explores how programmatic representations can improve interpretability, generalization, efficiency, and safety in RL.

Levi Lelis (@levilelis) 's Twitter Profile Photo

Rina’s work has inspired me since my early days as a PhD student. I’m so happy to see her receive this very well-deserved award. Congratulations, Rina!

Richard Sutton (@richardssutton) 's Twitter Profile Photo

My acceptance speech at the Turing award ceremony: Good evening ladies and gentlemen. The main idea of reinforcement learning is that a machine might discover what to do on its own, without being told, from its own experience, by trial and error. As far as I know, the first

Alona Fyshe (she/her) (@alonamarie) 's Twitter Profile Photo

I am hiring a postdoc at the University of Alberta, affiliated with Amii! We study language processing in the brain using LLMs and neuroimaging. Looking for someone with experience in, ideally, both neuroimaging and LLMs, or a willingness to learn. Email me Qs apps.ualberta.ca/careers/postin…

Matthew Macfarlane (@mattvmacfarlane) 's Twitter Profile Photo

Happy to share that Searching Latent Program Spaces has been accepted as a Spotlight at #NeurIPS2025 ✨ It's been a pleasure to work with Clem Bonnet on this! See you all in San Diego 🌴 👋, arxiv.org/pdf/2411.08706

Cohere Labs (@cohere_labs) 's Twitter Profile Photo

Join our Reinforcement Learning Group next week on Monday, September 29th for a session with Esraa Elelimy on "Deep Reinforcement Learning with Gradient Eligibility Traces." Thanks to Rahul for organizing this event ✨ Learn more: cohere.com/events/cohere-…
