David Holzmüller (@dholzmueller)'s Twitter Profile
David Holzmüller

@dholzmueller

Postdoc in machine learning with @BachFrancis @GaelVaroquaux: neural networks, tabular data, atomistic ML, active learning, learning theory. Also on 🦋.

ID: 1212714725571612672

Link: https://dholzmueller.github.io/ · Joined: 02-01-2020 12:38:50

337 Tweets

499 Followers

306 Following

Léo Grinsztajn (@leogrint)'s Twitter Profile Photo

First independent evaluation (by Han-Jia Ye et al.) of RealMLP in a large-scale benchmark, and... we're first in rank among all methods 😎 Link at the end of the thread to the benchmark + RealMLP paper + how to try it in 5 mins

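For context on the "try it in 5 mins" part: RealMLP ships in the pytabkit package with a scikit-learn-style interface. A minimal sketch, assuming pytabkit's documented `RealMLP_TD_Classifier` import (TD = tuned defaults); treat the exact names as an assumption if the package has since changed:

```python
# Minimal sketch: trying RealMLP via pytabkit (pip install pytabkit).
# The import path follows pytabkit's README; treat it as an assumption.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from pytabkit import RealMLP_TD_Classifier  # TD = tuned defaults

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RealMLP_TD_Classifier()  # meta-learned defaults, no tuning needed
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```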
Fabian Schaipp (@fschaipp)'s Twitter Profile Photo

Learning rate schedules seem mysterious? Turns out that their behaviour can be described with a bound from *convex, nonsmooth* optimization. Short thread on our latest paper 🚇 arxiv.org/abs/2501.18965
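For concreteness: among the schedule families common in large-model training is warmup-stable-decay ("wsd"), whose behaviour, including the loss drop during cooldown, is the kind the paper's convex bound describes. A minimal sketch of such a schedule, with illustrative shape parameters (not the paper's settings):

```python
def wsd_lr(step, total_steps, peak_lr=1e-3, warmup_frac=0.05, decay_frac=0.2):
    """Warmup-stable-decay schedule: linear warmup, constant plateau,
    linear cooldown. The fractions are illustrative, not from the paper."""
    warmup_steps = int(warmup_frac * total_steps)
    decay_start = int((1.0 - decay_frac) * total_steps)
    if step < warmup_steps:                      # linear warmup
        return peak_lr * (step + 1) / warmup_steps
    if step < decay_start:                       # constant plateau
        return peak_lr
    # linear cooldown to zero
    return peak_lr * (total_steps - step) / (total_steps - decay_start)
```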

NEC Laboratories Europe (@neclabseu)'s Twitter Profile Photo

Our new benchmark – AL4PDE – uses active learning to solve partial differential equations more efficiently with #neuralnetworks. It reduces data needs and lowers errors by up to 71% compared to random sampling. Learn how: neclab.eu/research-group…. #NECLabs #AI #MachineLearning

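For readers unfamiliar with the approach: active learning replaces one-shot random data generation with a loop that queries the (expensive) PDE solver only where the current surrogate model is most uncertain. A generic pool-based sketch of that pattern; this is illustrative, not the AL4PDE API:

```python
import numpy as np

def active_learning_loop(simulate, train, uncertainty, pool,
                         n_rounds=10, batch_size=8):
    """Generic pool-based active learning; illustrative, not the AL4PDE API.

    simulate(p)        -> run the PDE solver for parameters p, return example
    train(data)        -> fit a surrogate model on labeled examples
    uncertainty(m, p)  -> score how unsure model m is at parameters p
    pool               -> array of candidate parameters/initial conditions
    """
    data = [simulate(p) for p in pool[:batch_size]]   # small seed set
    pool = pool[batch_size:]
    for _ in range(n_rounds):
        model = train(data)
        scores = np.array([uncertainty(model, p) for p in pool])
        picked = np.argsort(scores)[-batch_size:]     # most uncertain points
        data += [simulate(pool[i]) for i in picked]   # label only those
        pool = np.delete(pool, picked, axis=0)        # remove from pool
    return train(data)
```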
Leshem Choshen C U @ ICLR 🤖🤗 (@lchoshen)'s Twitter Profile Photo

What if we have been doing early stopping wrong all along?
When you break the validation loss into two terms, calibration and refinement, a very simple (and efficient) trick lets you stop training at a smarter point.
Eugène Berta, David Holzmüller, Michael Jordan, Francis Bach
Mario 🇺🇦 💙💛 🇮🇱 + Scientific Machine Learning (@scientificml)'s Twitter Profile Photo

Please join this timely workshop on Multi-scale Modeling for (Physical & Chemical) Sciences and Engineering problems. We propose LOGLO-FNO, an improvement to the popular FNO arch, specifically targeted towards modeling high frequencies in turbulence simulations. #ai4science #pde

Lennart Purucker (@lennartpurucker)'s Twitter Profile Photo

🚨What is SOTA on tabular data, really? We are excited to announce 𝗧𝗮𝗯𝗔𝗿𝗲𝗻𝗮, a living benchmark for machine learning on IID tabular data with:

📊 an online leaderboard (submit!)
📑 carefully curated datasets
📈 strong tree-based, deep learning, and foundation models

🧵