
Michal Moshkovitz
@ml_theorist
Building interpretable and explainable ML models. @ Bosch Center for AI. Previous: postdoc @UCSD, TAU
ID: 1303086214686281728
https://sites.google.com/view/michal-moshkovitz 07-09-2020 21:43:17
760 Tweets
986 Followers
466 Following

Tomorrow, Tuesday, at the Theory of Interpretable AI seminar: feature attribution methods through the lens of statistics 💻 Website: tverven.github.io/tiai-seminar/ 📅 Date: 4 Feb, 4pm CET / 10am ET Suraj Srinivas @ ICML Tim van Erven Jeremias Sulam




⏰⏰ Theory of Interpretable AI Seminar ⏰⏰ Tuesday in two weeks, we will hear from Atticus Geiger on "The Current State of Interpretability and Ideas for Scaling Up" Website: tverven.github.io/tiai-seminar/ Date: 4 March, 4pm CET / 10am ET Suraj Srinivas @ ICML Tim van Erven



Join us tomorrow for a talk by Atticus Geiger on the current state of (mechanistic) interpretability, and the path moving forward! Michal Moshkovitz Tim van Erven

Couldn't make it to Atticus's talk last week on the current state of interpretability? No worries! Fortunately for you, the talk has been rescheduled to this Wednesday 😄 Please do join us! 🌐: tverven.github.io/tiai-seminar/ Michal Moshkovitz Tim van Erven



⏰⏰ Theory of Interpretable AI Seminar ⏰⏰ In two weeks, April 8, Mirco Mutti will talk about "A Classification View on Meta Learning Bandits" Tim van Erven Suraj Srinivas @ ICML


Can we get a *short* and *interpretable* policy for multi-armed bandits that is guaranteed to perform well? Mirco Mutti will present our recent work (w/ shiemannor and Jeongyeol Kwon) on this cool new problem at the Theory of Interpretable AI seminar today! (zoom link below)

Today in **two hours** Mirco Mutti will talk about interpretable bandits Zoom link: uva-live.zoom.us/j/87120549999 Suraj Srinivas @ ICML Tim van Erven
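
To make "short and interpretable policy" concrete, here is a minimal sketch, assuming a toy two-armed setting: an explore-then-commit rule whose learned behaviour is a single human-readable decision. The `pull` function, the `n_explore` parameter, and the reward probabilities are made up for this illustration; this is not the policy class studied in the talk or paper.

```python
import random

def explore_then_commit(pull, horizon, n_explore=50):
    """Toy 'short' two-armed bandit policy: try each arm n_explore times,
    then commit to the arm with the higher observed mean reward.
    (Illustrative only; not the method from the talk.)"""
    rewards = {0: [], 1: []}
    # Exploration phase: alternate between the two arms.
    for t in range(2 * n_explore):
        arm = t % 2
        rewards[arm].append(pull(arm))
    # The learned policy is one human-readable rule: "always pull `best`".
    best = 0 if sum(rewards[0]) / n_explore >= sum(rewards[1]) / n_explore else 1
    total = sum(rewards[0]) + sum(rewards[1])
    # Commit phase: play the chosen arm for the rest of the horizon.
    for _ in range(horizon - 2 * n_explore):
        total += pull(best)
    return best, total

# Hypothetical environment: arm 1 pays off more often than arm 0.
pull = lambda arm: 1.0 if random.random() < (0.4 if arm == 0 else 0.6) else 0.0
best_arm, reward = explore_then_commit(pull, horizon=1000)
print("committed to arm", best_arm, "with total reward", reward)
```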


⏰⏰ Theory of Interpretable AI Seminar ⏰⏰ Interested in Feature Attribution Explanations? In two weeks, May 6, Gunnar König will talk about "Disentangling Interactions and Dependencies in Feature Attribution" Tim van Erven Suraj Srinivas @ ICML


Curious about feature attribution? SHAP & LIME treat features independently—but features interact! Come hear how to "Disentangle Interactions and Dependencies in Feature Attribution" Tuesday (tomorrow!) 4pm CET, 10am ET Suraj Srinivas @ ICML Tim van Erven
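
To see why treating features independently can mislead, here is a tiny self-contained sketch (not from the talk): a model whose output is a pure interaction, so each feature's marginal effect over a uniform background is exactly zero even though the two features jointly determine the output. The toy function `f` and the {-1, +1} background are assumptions for illustration.

```python
import itertools
import numpy as np

# Toy model whose output is pure interaction: f(x1, x2) = x1 * x2.
f = lambda x1, x2: x1 * x2

# Over a uniform background on {-1, +1}^2, each feature's *marginal*
# effect vanishes: knowing x1 alone tells you nothing about f.
background = list(itertools.product([-1, 1], repeat=2))
for x1 in (-1, 1):
    marginal = np.mean([f(x1, x2) for _, x2 in background])
    print(f"E[f | x1={x1:+d}] = {marginal}")  # 0.0 in both cases
# Yet x1 and x2 together determine f exactly. An additive, per-feature
# attribution has to split this interaction somehow, and different
# conventions split it differently -- which is what the talk is about.
```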

⏰⏰ Theory of Interpretable AI Seminar ⏰⏰ Chain-of-Thought: why does prompting LLMs to explain their reasoning step by step work so well? Join us on June 3, when Bohang Zhang @ICLR 2024 will dive into the mechanisms behind chain-of-thought prompting and what makes it so effective Tim van Erven Suraj Srinivas @ ICML


Why does Chain of Thought prompting actually work? Bohang Zhang @ICLR 2024 will be talking about it today. Join us! Suraj Srinivas @ ICML Tim van Erven
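
For context, here is a minimal sketch contrasting a direct prompt with a zero-shot chain-of-thought prompt for the same question. The question and the prompt wording are made up for illustration, no model is actually called, and the talk's theoretical account of why the CoT form helps is not reproduced here.

```python
QUESTION = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

# Direct prompting: ask for the answer immediately.
direct_prompt = f"Q: {QUESTION}\nA:"

# Zero-shot chain-of-thought prompting: the added instruction nudges the
# model to emit intermediate reasoning steps (e.g. "45 min = 0.75 h,
# 60 / 0.75 = 80 km/h") before its final answer.
cot_prompt = f"Q: {QUESTION}\nA: Let's think step by step."

print("--- direct ---\n" + direct_prompt)
print("\n--- chain of thought ---\n" + cot_prompt)
# The puzzle the talk addresses: why does the second form so often
# improve accuracy over the first?
```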

How can we use explainability methods when we can’t trust the explainer? Check out the ExpProof paper by Chhavi Yadav Kamalika Chaudhuri Dan Boneh Evan Monroe Laufer