Logan Engstrom (@logan_engstrom)'s Twitter Profile
Logan Engstrom

@logan_engstrom

CS PhD student @ MIT

ID: 1864965229

Link: http://loganengstrom.com · Joined: 14-09-2013 20:44:36

316 Tweets

1.1K Followers

331 Following

Aleksander Madry (@aleks_madry)'s Twitter Profile Photo

How is an LLM actually using the info given to it in its context? Is it misinterpreting anything or making things up? Introducing ContextCite: a simple method for attributing LLM responses back to the context: gradientscience.org/contextcite w/ Ben Cohen-Wang, Harshay Shah, Kristian Georgiev
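A rough sketch of the general idea behind this kind of context attribution, under my own simplifying assumptions (the `support_score` stand-in below is a toy word-overlap proxy, not ContextCite's actual LLM-based scoring): ablate random subsets of the context sources, score how well the original response is still supported, and fit a linear surrogate whose per-source weights serve as attributions.

```python
# Minimal sketch of attribution-by-context-ablation (illustrative only).
import numpy as np

def support_score(kept_sources, response):
    # Stand-in for an LLM score, e.g. the log-probability the model assigns
    # to its original response given only the kept sources.
    # Here: a toy word-overlap proxy so the sketch is self-contained.
    context_words = " ".join(kept_sources).lower().split()
    return sum(word in context_words for word in response.lower().split())

def attribute(sources, response, n_ablations=64, keep_prob=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # Each row says which context sources are kept in that ablation.
    masks = rng.random((n_ablations, len(sources))) < keep_prob
    scores = np.array([
        support_score([s for s, keep in zip(sources, m) if keep], response)
        for m in masks
    ])
    # Linear surrogate: score ~ masks @ weights + bias; the weights are the
    # per-source attributions.
    A = np.hstack([masks.astype(float), np.ones((n_ablations, 1))])
    weights, *_ = np.linalg.lstsq(A, scores, rcond=None)
    return weights[:-1]

sources = ["The Eiffel Tower is in Paris.",
           "Paris is the capital of France.",
           "Bananas are yellow."]
print(attribute(sources, "The Eiffel Tower is located in Paris, France."))
```

With this toy scorer, the first two sources get noticeably larger weights than the irrelevant third one, which is the qualitative behavior an attribution method should show.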

Sarah Cen (@cen_sarah)'s Twitter Profile Photo

In work w/ Andrew Ilyas Jennifer Allen Hannah Li Aleksander Madry we give experimental evidence that users strategize on recommender systems! We find that users react to their (beliefs about) *algorithms* (not just content!) to shape future recs. Paper: arxiv.org/abs/2405.05596 1/8

Aleksander Madry (@aleks_madry)'s Twitter Profile Photo

At #ICML2024? Our tutorial "Data Attribution at Scale" will be tomorrow at 9:30 AM CEST in Hall A1! I will not be able to make it (I will arrive later that day), but my awesome students Andrew Ilyas Sam Park Logan Engstrom will carry the torch :)

Logan Engstrom (@logan_engstrom)'s Twitter Profile Photo

Stop by our poster on model-aware dataset selection at ICML! Location/time: 1:30pm Hall C 4-9 #1010 (Tuesday) Paper: arxiv.org/abs/2401.12926 with: Axel Feldmann Aleksander Madry

Andrew Ilyas (@andrew_ilyas)'s Twitter Profile Photo

Thanks to all who attended our tutorial "Data Attribution at Scale" at ICML (w/ Sam Park Logan Engstrom Kristian Georgiev Aleksander Madry)! We're really excited to see the response to this emerging topic. Slides, notes, ICML video: ml-data-tutorial.org. Public recording soon!

Andrew Ilyas (@andrew_ilyas)'s Twitter Profile Photo

The ATTRIB workshop is back @ NeurIPS 2024! We welcome papers connecting model behavior to data, algorithms, parameters, scale, or anything else. Submit by Sep 18! More info: attrib-workshop.cc. Co-organizers: Tolga Bolukbasi Logan Engstrom Sadhika Malladi Elisa Nguyen Sam Park

Logan Engstrom (@logan_engstrom)'s Twitter Profile Photo

Announcing a deadline extension for the ATTRIB workshop! Submissions are now due September 25th, with an option to submit by October 4th if at least one paper author volunteers to be an emergency reviewer. More info here: attrib-workshop.cc

Andrew Ilyas (@andrew_ilyas)'s Twitter Profile Photo

After some very fun years at MIT, I'm really excited to be joining CMU as an assistant professor in Jan 2026! A big (huge!) thanks to my advisors (Aleksander Madry Constantinos Daskalakis), collaborators, mentors & friends. In the meantime, I'll be a Stein Fellow at Stanford Statistics.

Gautam Kamath (@thegautamkamath)'s Twitter Profile Photo

Very cool work from Logan and the gang! One of these problems, indiscriminate data poisoning, has been one of my favorite mysteries in robustness -- and they did an order of magnitude better than we could previously! Looking forward to checking it out in more detail.

Andrew Ilyas (@andrew_ilyas)'s Twitter Profile Photo

Had a great time at the Simons Institute for the Theory of Computing talking about new & upcoming work on meta-optimization of ML training. tl;dr: we show how to compute gradients *through* the training process & use them to optimize training. Immediate big gains on data selection, poisoning, attribution & more!
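One concrete way to read "gradients *through* the training process": unroll a short training loop and differentiate a downstream (e.g. validation) loss with respect to per-example training weights. The JAX sketch below uses a toy linear model and synthetic data of my own choosing; it illustrates the mechanics, not the talk's actual implementation.

```python
# Toy "metagradient" sketch: differentiate validation loss through unrolled SGD
# with respect to per-example training weights.
import jax
import jax.numpy as jnp

def train_then_eval(data_weights, X_train, y_train, X_val, y_val, steps=50, lr=0.1):
    """Run weighted SGD on a linear model, return the validation loss."""
    w = jnp.zeros(X_train.shape[1])

    def loss(w):
        resid = X_train @ w - y_train
        return jnp.mean(data_weights * resid ** 2)  # per-example weights enter the loss

    for _ in range(steps):                      # unrolled training loop
        w = w - lr * jax.grad(loss)(w)          # plain gradient step

    return jnp.mean((X_val @ w - y_val) ** 2)   # downstream (validation) objective

key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (100, 5))
true_w = jnp.arange(5.0)
y = X @ true_w
X_train, y_train, X_val, y_val = X[:80], y[:80], X[80:], y[80:]

# Metagradient: how the validation loss changes if each training example is
# up- or down-weighted.
meta_grad = jax.grad(train_then_eval)(jnp.ones(80), X_train, y_train, X_val, y_val)
print(meta_grad[:5])
```

A negative entry in `meta_grad` suggests that up-weighting that training example would lower the validation loss; signals of this kind are what can drive data selection, poisoning, and attribution.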

Andrew Ilyas (@andrew_ilyas)'s Twitter Profile Photo

"How will my model behave if I change the training data?" Recent(-ish) work w/ Logan Engstrom: we nearly *perfectly* predict ML model behavior as a function of training data, saturating benchmarks for this problem (called "data attribution").

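A toy illustration of this data-attribution setup, with everything below (model, data, subset-sampling scheme) as stand-in assumptions rather than the paper's pipeline: train a simple model on many random training subsets, record a behavior of interest, and fit a linear predictor of that behavior from the subset-inclusion indicator.

```python
# Toy linear "datamodel" sketch: predict a trained model's output as a
# function of which training examples were included.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_subsets = 50, 200
X = rng.normal(size=(n_train, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n_train)
x_test = np.array([1.0, 1.0, 1.0])            # behavior of interest: prediction here

masks = rng.random((n_subsets, n_train)) < 0.5     # random inclusion masks
outputs = []
for m in masks:
    # "Train" a model on the subset (ordinary least squares) and record its output.
    w, *_ = np.linalg.lstsq(X[m], y[m], rcond=None)
    outputs.append(x_test @ w)
outputs = np.array(outputs)

# Fit the datamodel: predicted output ~ mask @ theta + bias.
A = np.hstack([masks.astype(float), np.ones((n_subsets, 1))])
theta, *_ = np.linalg.lstsq(A, outputs, rcond=None)

# Check how well the linear datamodel predicts the behavior on fresh subsets.
test_masks = rng.random((50, n_train)) < 0.5
true = np.array([x_test @ np.linalg.lstsq(X[m], y[m], rcond=None)[0] for m in test_masks])
pred = test_masks.astype(float) @ theta[:-1] + theta[-1]
print("correlation:", np.corrcoef(true, pred)[0, 1])
```

The printed correlation measures how well the linear datamodel predicts the trained model's behavior on unseen training subsets, which is essentially the kind of benchmark such data-attribution work reports.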