Mycal Tucker's (@mycaltucker) Twitter Profile
Mycal Tucker

@mycaltucker

ID: 2391943519

Joined: 16-03-2014 02:23:45

129 Tweets

170 Followers

133 Following

Noga Zaslavsky's (@nogazaslavsky) Twitter Profile Photo

The deadline for our #NeurIPS2022 InfoCog workshop is approaching soon [updated 🗓️: Sep 22]. We expect to have some funding to support a few selected presenters of accepted papers, and a special issue of Open Mind associated with the workshop! More info 👇 sites.google.com/view/infocog-n…

NeurIPS Conference's (@neuripsconf) Twitter Profile Photo

#NeurIPS2022 authors of the accepted papers at both the main track and D&B track, please log in to the author console and answer the “in-person presentation” question asap in order for us to plan accordingly.

Mycal Tucker's (@mycaltucker) Twitter Profile Photo

Super grateful for this chance to continue exciting *interdisciplinary* research. Thanks to my advisor, Julie Shah, but also so many collaborators from other departments (Roger Levy and Noga Zaslavsky) and inspiring labmates and researchers.

Felix Wang's (@felixwyw) Twitter Profile Photo

How to guarantee successful imitation of multi-step tasks despite arbitrary perturbations? 1-2 demos + a logic formula specifying the task. See our #CoRL2022 oral talk today at 4:30p! Paper: yanweiw.github.io/tli (with Nadia Figueroa, PhD @shenli_robotics Ankit Shah Julie Shah)

MITRobotics' (@mitrobotics) Twitter Profile Photo

** New ICLR 2023 Spotlight Paper ** Excited to announce our work building inherently interpretable Deep RL agents that don't sacrifice performance and calibrate appropriate user trust. (details below...) openreview.net/forum?id=hWwY_… By Eoin Kenny, Mycal Tucker, and Julie Shah

Andi Peng's (@theandipenguin) Twitter Profile Photo

Humans use abstractions for data-efficient learning. We wish for neural networks to do the same. In our proposed human-in-the-loop framework, we automatically generate a spectrum of abstractions and allow users to deploy task-appropriate ones. To appear at #NeurIPS2023! [1/n]
