Metod Jazbec (@metodjazbec) 's Twitter Profile
Metod Jazbec

@metodjazbec

Machine Learning PhD student at @AmlabUva

ID: 332206751

Website: http://metodj.github.io · Joined: 09-07-2011 11:55:28

41 Tweets

176 Followers

414 Following

Dharmesh Tailor (@dtailor17) 's Twitter Profile Photo

Come check out our #AISTATS2024 oral paper "Learning to Defer to a Population: A Meta-Learning Approach":

- Oral Session: Fri 3 May, 4-5pm
- Poster: Fri 3 May, 5-7pm. Multipurpose Room 1 #33

w/ Aditya Patra, Rajeev Verma, Putra Manggala, @eric_nalisnick

proceedings.mlr.press/v238/tailor24a…
🧵
Sharvaree Vadgama (@sharvvadgama) 's Twitter Profile Photo

📣 This year at #ICML2024 we are hosting ✨ Gram Workshop ✨ Geometry-grounded representation learning and generative modeling. We welcome submissions in multiple tracks: 📄 Proceedings, 🆕 Extended abstract, 📝 Blogpost/tutorial, as well as 🏆 TDA challenge.

Floor Eijkelboom (@feijkelboom) 's Twitter Profile Photo

Flow Matching goes Variational! 🐳 In recent work, we derive a formulation of flow matching as variational inference, obtaining regular FM as a special case. Joint work with dream team Grigory Bartosh, @chris_naesseth, Max Welling, and Jan-Willem van de Meent. 📜 arxiv.org/abs/2406.04843 🧵1/11
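For readers unfamiliar with the "regular FM" objective that the variational formulation recovers as a special case, it can be sketched in a few lines. This is a minimal toy illustration in plain Python, not code from the paper: the 1-D data/noise distributions and the linear velocity model `v(x, t) = theta * x` are my own hypothetical choices.

```python
import random

random.seed(0)

def fm_loss_sample(theta):
    """One-sample flow-matching loss for a hypothetical linear
    velocity model v(x, t) = theta * x (toy choice, 1-D)."""
    x1 = random.gauss(3.0, 0.5)   # data sample
    x0 = random.gauss(0.0, 1.0)   # noise sample
    t = random.random()           # time t ~ U(0, 1)
    xt = (1.0 - t) * x0 + t * x1  # linear interpolation path
    target_v = x1 - x0            # velocity of that path
    pred_v = theta * xt
    return (pred_v - target_v) ** 2

# Monte Carlo estimate of the flow-matching loss at theta = 0.5.
loss = sum(fm_loss_sample(0.5) for _ in range(1000)) / 1000
print(f"flow-matching loss: {loss:.3f}")
```

Training would minimize this expected squared error over theta; sampling then integrates the learned velocity field from noise to data.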

Mona Schirmer (@monaschir) 's Twitter Profile Photo

Drop by our poster at the #ICML2024 SPIGM Workshop on Friday!

With @eric_nalisnick and Dan Zhang we propose probabilistic state-space models to adapt a classifier to distribution shift without seeing any labels.

📍SPIGM Workshop, Lehar 3, 15:10-16:10
Xidulu (@xidulu) 's Twitter Profile Photo

LE ATTENTION is now compatible with FlexAttention! Our DAG-based language makes building block-structured attention matrices intuitive, while FlexAttention optimizes execution through sparsity manipulation! FlexAttention is amazing! Horace He
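The core idea of a block-structured attention matrix can be sketched generically. This is not the LE ATTENTION DAG language or FlexAttention's real API; it is a plain-Python illustration where a hypothetical mask predicate over segment ids restricts attention to within-block pairs, in the spirit of FlexAttention's mask functions.

```python
import math

# Segment ids define a block-diagonal attention structure:
# tokens may only attend within their own block.
segments = [0, 0, 1, 1, 1]  # 5 tokens, two blocks

def block_mask(i, j):
    """Hypothetical mask predicate: True where attention is allowed."""
    return segments[i] == segments[j]

# Toy scores; in practice these would be q·k products.
scores = [[float(i + j) for j in range(5)] for i in range(5)]

def masked_softmax_row(row, i):
    """Softmax over a row with disallowed entries set to -inf."""
    masked = [s if block_mask(i, j) else float("-inf") for j, s in enumerate(row)]
    m = max(masked)
    exps = [math.exp(s - m) for s in masked]
    z = sum(exps)
    return [e / z for e in exps]

attn = [masked_softmax_row(row, i) for i, row in enumerate(scores)]
# Cross-block weights come out exactly zero; each row still sums to 1.
```

The point of a kernel like FlexAttention is that, given such a mask predicate, entire all-zero blocks can be skipped at execution time rather than computed and discarded.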
Metod Jazbec (@metodjazbec) 's Twitter Profile Photo

Come say hi to me, @AlexTimans, and @eric_nalisnick today between 11am-2pm in East Exhibit Hall A-C #4505. We'll be presenting our Fast yet Safe paper on efficient inference and risk control 👇

UvA AMLab (@amlabuva) 's Twitter Profile Photo

Generative Uncertainty in Diffusion Models (spotlight)

by Metod Jazbec, Eliot Wong-Toi, Guoxuan Xia, Dan Zhang, Eric Nalisnick, Stephan Mandt

➡️ openreview.net/forum?id=K54Vk…
⚠️ Quantify Uncertainty and Hallucination in Foundation Models: The Next Frontier in Reliable AI
Metod Jazbec (@metodjazbec) 's Twitter Profile Photo

Great to see that our Generative Uncertainty won the best paper award at the ICLR QUESTION workshop (…certainty-foundation-models.github.io). If you're interested in what Bayesian/ensembling methods can bring to the world of diffusion models, check out the paper 👇

arxiv.org/abs/2502.20946