Robert Bamler (@robamler) 's Twitter Profile
Robert Bamler

@robamler

Professor of Data Science and Machine Learning at @uni_tue, member of @ml4science and Tübingen AI Center.

ID: 397152500

Link: https://robamler.github.io
Joined: 24-10-2011 09:36:42

34 Tweets

230 Followers

25 Following

Robert Bamler (@robamler) 's Twitter Profile Photo

Very cool that RHET AI is offering a free cinema screening tomorrow! "Ex Machina" will be shown at the Arsenal Kino Tübingen tomorrow (Friday) at 20:00. And whoever wants to can stay afterwards for a follow-up discussion. I'm excited! eventbrite.de/e/wie-viel-sci…

Robert Bamler (@robamler) 's Twitter Profile Photo

Looking for the one VAE to rule them all? The bad news: it doesn't exist. The good news: our recently accepted ICLR 2023 paper shows how to optimally allocate information to each latent layer depending on your application: arxiv.org/abs/2302.04855 Tim Xiao @ml4science #ICLR2023
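
(Editor's note: the "information" in question is the per-layer rate, i.e. the KL term, of the hierarchical ELBO. As background, here is a minimal NumPy sketch of that per-layer rate decomposition with per-layer weights, using toy Gaussian parameters only; this is standard notation, not the allocation scheme derived in the paper.)

import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p=0.0, logvar_p=0.0):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over latent dimensions (in nats).
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# Hypothetical posterior parameters for a 3-layer hierarchical VAE and one data point.
rng = np.random.default_rng(0)
posteriors = [(rng.normal(size=d), 0.1 * rng.normal(size=d)) for d in (2, 8, 32)]

# Per-layer rates against standard-normal priors.
rates = [gaussian_kl(mu, logvar) for mu, logvar in posteriors]

# Per-layer weights control how much information each layer is encouraged to carry;
# setting every weight to 1.0 recovers the usual unweighted ELBO rate term.
betas = [0.5, 1.0, 2.0]
weighted_rate = sum(b * r for b, r in zip(betas, rates))
print("per-layer rates (nats):", np.round(rates, 3), "| weighted total:", round(weighted_rate, 3))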

Tim Xiao (@timzxiao) 's Twitter Profile Photo

How do LLMs connect to modern computers, both in their zero-shot problem-solving abilities and in their histories? Our latest blog post provides a fresh perspective on understanding LLMs and the prompting paradigm. Check it out! timx.me/blog/2023/comp… Weiyang Liu Robert Bamler #ChatGPT

Robert Bamler (@robamler) 's Twitter Profile Photo

We just got the green light to hold another workshop on machine-learning-based data compression—this time at ICML. I'm very excited! Stay tuned for details and for the call for papers.

Neural Compression Workshop @ICML23 (@neural_compress) 's Twitter Profile Photo

The 2nd iteration of the "Neural Compression: From Information Theory to Applications" workshop will take place at the ICML Conference in Hawaii this year! Submissions due May 27th. For more details: neuralcompression.github.io/workshop23 Berivan Isik Yibo Yang Daniel Severo Dr. Karen Ullrich Robert Bamler @s_mandt

Robert Bamler (@robamler) 's Twitter Profile Photo

My student Johannes Zenn found a useful fact about differentiable sequential Monte Carlo samplers: you can ignore any gradients due to resampling because they vanish in expectation. Check out his accepted ICLR DEI paper and meet us at the poster on Friday. arxiv.org/abs/2304.14390
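
(Editor's note: the mathematical core of that claim is a zero-expected-score identity for the categorical resampling step, E_{a ~ Cat(w(θ))}[∇_θ log w_a(θ)] = 0. Below is a minimal NumPy check of that identity for softmax-parameterized weights; a toy illustration, not the paper's full SMC derivation.)

import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=5)                    # parameters that determine the particle weights
w = np.exp(theta) / np.exp(theta).sum()       # softmax resampling weights for 5 particles

# For a softmax parameterization, grad_theta log w[a] = one_hot(a) - w.
samples = rng.choice(len(w), size=200_000, p=w)   # a ~ Categorical(w)
scores = np.eye(len(w))[samples] - w              # one score vector per draw
print(np.round(scores.mean(axis=0), 3))           # ~ [0 0 0 0 0]: the expected score vanishes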

Johannes Zenn (@johanneszenn) 's Twitter Profile Photo

There is no need for gradients due to resampling in Differentiable Sequential Monte Carlo Samplers! Check out our recent work (arxiv.org/abs/2304.14390) with Robert Bamler and meet us at the poster on Friday!

Robert Bamler (@robamler) 's Twitter Profile Photo

If you're in Kigali for ICLR this week, let's meet and chat over some drinks tomorrow at Tim Xiao's poster on rate/distortion theory of hierarchical VAEs. It's poster #106 in the MH rooms from 4:30 to 6:30. iclr.cc/virtual/2023/p…

Robert Bamler (@robamler) 's Twitter Profile Photo

If you're at ICLR, join my student Johannes Zenn at the Tiny Paper poster session today from 1:15 to 3:15 pm in room MH4. You'll be surprised how many insights can fit in a 2-page paper! arxiv.org/abs/2304.14390

Robert Bamler (@robamler) 's Twitter Profile Photo

Training variational autoencoders on samples from a diffusion model essentially eliminates their known tendency to overfit the encoder without sacrificing model performance. Congrats to my PhD students Tim Xiao and Johannes Zenn on their latest preprint! arxiv.org/abs/2310.19653
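
(Editor's note: the recipe is easy to state: keep the standard VAE training loop, but draw the training batches from a pre-trained diffusion model rather than from the real data set. A hedged PyTorch sketch of that loop follows; the "diffusion model" is replaced by a toy Gaussian-mixture sampler so the snippet is self-contained, and none of this is the authors' code.)

import torch
import torch.nn as nn

# Stand-in for a pre-trained diffusion model's sampler: a fixed Gaussian mixture,
# so the sketch runs end to end. In the actual recipe, this would call the
# diffusion model instead of loading real training data.
torch.manual_seed(0)
CENTERS = 3.0 * torch.randn(4, 16)

def synthetic_batch(n):
    idx = torch.randint(0, 4, (n,))
    return CENTERS[idx] + 0.3 * torch.randn(n, 16)

class TinyVAE(nn.Module):
    def __init__(self, dim=16, latent=4):
        super().__init__()
        self.enc = nn.Linear(dim, 2 * latent)  # outputs posterior mean and log-variance
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return self.dec(z), mu, logvar

vae = TinyVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)

for step in range(1000):
    x = synthetic_batch(128)    # batches come from the generative model, not a real data set
    x_hat, mu, logvar = vae(x)
    distortion = ((x - x_hat) ** 2).sum(dim=-1).mean()
    rate = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=-1).mean()
    loss = distortion + rate
    opt.zero_grad(); loss.backward(); opt.step()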

Robert Bamler (@robamler) 's Twitter Profile Photo

Did you know that the training/test set split of the SVHN data set is biased, making SVHN unsuitable for evaluating generative models? Learn more from my students Tim Xiao and Johannes Zenn at the DistShift workshop at #NeurIPS2023 tomorrow (10.30 am, room R06-R09).

Tim Xiao (@timzxiao) 's Twitter Profile Photo

🤔What about using an LLM as a function approximator for f(x; θ) where the parameters θ are natural language? 🤔Can we learn θ just like in machine learning (ML) where θ are numerical values? ✨Check out Verbalized ML, where data and models both operate in natural language! 🤩
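
(Editor's note: put differently, the "model" is a frozen LLM plus a natural-language parameter string θ, and a "training step" asks a second, optimizer LLM to revise θ based on the prediction errors. A minimal sketch of that loop, assuming a hypothetical call_llm helper, which is not an API from the paper:)

# Hypothetical stand-in for whatever chat-completion client you use.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real LLM API call here")

def f(x, theta: str) -> str:
    # Forward pass: the "model" is a frozen LLM conditioned on the text parameters theta.
    return call_llm(f"Model parameters (instructions):\n{theta}\n\nInput: {x}\nOutput:")

def update(theta: str, batch) -> str:
    # "Training step": an optimizer LLM rewrites theta based on the mini-batch errors.
    errors = "\n".join(f"input={x}, prediction={f(x, theta)}, target={y}" for x, y in batch)
    return call_llm(
        "You are the optimizer. Current text parameters:\n" + theta +
        "\n\nErrors on a mini-batch:\n" + errors +
        "\n\nRewrite the parameters so the model predicts the targets. Return only the new parameters."
    )

theta = "Given a number, return its square."   # initial natural-language parameters
# for _ in range(num_steps):
#     theta = update(theta, minibatch)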

Robert Bamler (@robamler) 's Twitter Profile Photo

How could online learning apps adapt to learners and improve over time? Even if you're not a machine learning expert, Hanqi Zhou's blog post on our latest ICLR paper explains new approaches in simple terms (joint work with 🐘 Álvaro Tejero-Cantero and Charley Wu (@thecharleywu.bsky.social), supported by Theresa Authaler).

Tim Xiao (@timzxiao) 's Twitter Profile Photo

✨ New paper: Flipping Against All Odds We found that large language models (LLMs) can describe probabilities—but fail to sample from them faithfully. Yes, even flipping a fair coin is hard. 🪙 🧵 Here’s what we learned—and how we fixed it. 🔗arxiv.org/abs/2506.09998 1/
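
(Editor's note: the failure is easy to probe yourself: request many independent "fair coin flips" and compare the empirical frequency of Heads to 0.5. A small sketch of that check; ask_llm is a hypothetical placeholder, not an API from the paper.)

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion call.
    raise NotImplementedError("plug in a real LLM API call here")

def llm_coin_flip() -> int:
    # Ask the model to *sample* the flip itself -- the failure mode described above.
    answer = ask_llm("Flip a fair coin. Answer with exactly one word: Heads or Tails.")
    return int(answer.strip().lower().startswith("h"))

n = 1000
# flips = [llm_coin_flip() for _ in range(n)]
# print("empirical P(Heads):", sum(flips) / n)   # the thread's point: this tends to drift far from 0.5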

Robert Bamler (@robamler) 's Twitter Profile Photo

Great paper by my students Tim Xiao and Johannes Zenn and collaborators that applies ideas from Monte Carlo sampling to (black-box) LLM execution to turn LLMs into better calibrated stochastic samplers.
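
(Editor's note: one way to picture "Monte Carlo ideas applied to black-box LLM execution": let the LLM verbalize the distribution, which the thread above says it can do, and delegate the actual randomness to a classical sampler. The sketch below illustrates that general idea with a plain categorical sampler; it is not the specific algorithm from the paper, and ask_llm is a hypothetical placeholder.)

import random

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion call.
    raise NotImplementedError("plug in a real LLM API call here")

def elicit_probabilities(description: str, outcomes: list[str]) -> list[float]:
    # Step 1: let the LLM *describe* the distribution as numbers.
    reply = ask_llm(
        f"{description}\nGive the probability of each outcome in {outcomes} "
        "as comma-separated numbers that sum to 1."
    )
    probs = [float(p) for p in reply.split(",")]
    total = sum(probs)
    return [p / total for p in probs]        # renormalize defensively

def sample(description: str, outcomes: list[str]) -> str:
    # Step 2: do the actual sampling with a trusted pseudo-random number generator.
    probs = elicit_probabilities(description, outcomes)
    return random.choices(outcomes, weights=probs, k=1)[0]

# sample("A fair coin is flipped.", ["Heads", "Tails"])   # calibrated to the elicited probabilities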