Moritz H (@moritz_haa)'s Twitter Profile
Moritz H

@moritz_haa

IMPRS-IS PhD student with @UlrikeLuxburg and @bedartha, Theory of Machine Learning, ML in Climate Science

ID: 1650087167899697153

Joined: 23-04-2023 10:40:23

6 Tweets

34 Followers

116 Following

Ulrike Luxburg (@ulrikeluxburg)'s Twitter Profile Photo

"Pitfalls of Climate Network Construction: A Statistical Perspective". With Moritz H and Bedartha Goswami, we show that many observations in climate networks might be spurious artifacts. And I am sure the same would hold for brain networks. doi.org/10.1175/JCLI-D…

Moritz H (@moritz_haa)'s Twitter Profile Photo

Many thanks to the organizers! I learned a lot, gained many new perspectives and had a lot of fun along the way! An invaluable experience.

David Holzmüller (@dholzmueller)'s Twitter Profile Photo

Can kernel methods and wide neural networks overfit benignly in fixed dimension? In our NeurIPS 2023 paper with Moritz H, Ulrike Luxburg, and Ingo Steinwart, we prove: yes, but only with some non-standard modifications. 1/ 🧵

Ulrike Luxburg (@ulrikeluxburg)'s Twitter Profile Photo

Benign overfitting even works in small, fixed dimension! Not with the ReLU activation function or smooth kernels, but with extra-spiky variants. #neurips2023 paper with Moritz H, David Holzmüller, Ingo Steinwart: arxiv.org/abs/2305.14077

Moritz H (@moritz_haa)'s Twitter Profile Photo

I will present our work on benign overfitting of kernels and neural networks at #NeurIPS2023 on Thursday, 5pm (#1724). Feel free to reach out if you want to chat about (deep) learning theory and the path toward principled deep learning, also beyond kernels. Find a thread below.