David Klindt (@klindt_david) 's Twitter Profile
David Klindt

@klindt_david

NeuroAI | assistant professor @CSHL | views my own

ID: 732494566545203201

Link: https://www.cshl.edu/research/faculty-staff/david-klindt/ · Joined: 17-05-2016 08:54:48

584 Tweets

1.1K Followers

1.1K Following

David Klindt (@klindt_david) 's Twitter Profile Photo

ICML 2/2 🤩 Charlie O'Neill et al show with theory/experiments: sparse coding > SAEs arxiv.org/abs/2411.13117 Patrik Reizinger et al argue for empirical identifiability theory arxiv.org/abs/2504.13101 Congrats on great science & big thanks to AC+reviewers for their time and effort 🙏

Charlie O'Neill (@charles0neill) 's Twitter Profile Photo

1/🧵 Thrilled to announce our ICML‑accepted work (done in my undergrad days with @david_klindt and Alim Gumran) that questions the linear‑only dogma in sparse autoencoders👇
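The distinction this thread draws can be sketched in a few lines of numpy: a standard SAE encoder is a single linear map plus ReLU, whereas sparse coding solves an iterative l1-regularized inverse problem (here via ISTA). This is a toy illustration, not the paper's implementation; the dictionary, sparsity level, and constants are all made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: activations X are sparse non-negative combinations of a dictionary D.
d, k, n = 16, 32, 200                      # activation dim, dictionary size, samples
D = rng.normal(size=(d, k))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
codes = rng.exponential(size=(n, k)) * (rng.random((n, k)) < 0.1)
X = codes @ D.T                            # observed activations

# (1) SAE-style encoder: one affine map + ReLU, a single forward pass
#     (tied weights W_enc = D.T, a common SAE choice).
sae_codes = np.maximum(X @ D, 0.0)

# (2) Sparse coding: iterative soft-thresholding (ISTA) minimizes
#     ||Z @ D.T - X||^2 + lam * ||Z||_1 instead of using one linear pass.
def ista(X, D, lam=0.1, n_iter=100):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    Z = np.zeros((X.shape[0], D.shape[1]))
    for _ in range(n_iter):
        grad = (Z @ D.T - X) @ D
        Z = Z - grad / L
        Z = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)  # soft-threshold
    return Z

sc_codes = ista(X, D)

# Compare relative reconstruction error of the two inference schemes.
err_sae = np.linalg.norm(sae_codes @ D.T - X) / np.linalg.norm(X)
err_sc = np.linalg.norm(sc_codes @ D.T - X) / np.linalg.norm(X)
print(f"one-pass SAE-style error: {err_sae:.3f}")
print(f"ISTA sparse coding error: {err_sc:.3f}")
```

On this synthetic data the iterative solver recovers the codes far better than the single linear pass, which is the gap the thread's "linear-only dogma" critique points at.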

David Klindt (@klindt_david) 's Twitter Profile Photo

Science is a long arduous process of critical thinking, hypothesis testing and iteration that, maybe sometimes, in the end leads to the extraction of a tiny drop of new knowledge from nature. That being said, good luck to everyone scrambling for NeurIPS, you've got 10min 😋

UniReps (@unireps) 's Twitter Profile Photo

🎉 We’re thrilled to announce the first ELLIS × Unireps Speaker Series (Reading Group associated with ELLIS PhD & Postdoc Program) session is happening soon! 🔜 🗓️ When: 29th May 2025 – CET 📍 Where: ethz.zoom.us/j/66426188160 🎙️ Who: Andrew Lampinen (Keynote) &

David Klindt (@klindt_david) 's Twitter Profile Photo

Why do both the #ChatGPT and #Gemini UIs struggle so much with rendering equations? They output beautiful, compilable LaTeX, but their UIs still manage to screw up every other equation in the app

Micha Heilbron (@m_heilb) 's Twitter Profile Photo

New preprint, on prediction in vision – in mice! We've found that visual cortex, even when just viewing natural scenes, predicts *higher-level* visual features. This aligns with recent developments in ML, but challenges some assumptions about early sensory cortex

Victor Veitch 🔸 (@victorveitch) 's Twitter Profile Photo

That's not to say things are helpless. We've gone through the wave of SAE hype and skepticism before in the context of causal representation learning. Just like with the LLM case, the hype was touched off with a set of very compelling empirical results, followed by an enormous

Florentin Guth (@florentinguth) 's Twitter Profile Photo

What is the probability of an image? What do the highest and lowest probability images look like? Do natural images lie on a low-dimensional manifold? In a new preprint with Zahra Kadkhodaie Eero Simoncelli, we develop a novel energy-based model in order to answer these questions: 🧵

Nina Miolane 🦋 @ninamiolane.bsky.social (@ninamiolane) 's Twitter Profile Photo

The era of artificial scientific intelligence is here. As algorithms generate discoveries at scale, what role remains for human scientists?🤔 Thanks PLOS Biology for publishing my perspective. Real AI @ UCSB, UCSB ECE, UC Santa Barbara's College of Engineering, UC Santa Barbara! journals.plos.org/plosbiology/ar…

Mark Ibrahim (@marksibrahim) 's Twitter Profile Photo

A good language model should say “I don’t know” by reasoning about the limits of its knowledge. Our new work AbstentionBench carefully measures this overlooked skill in leading models in an open-codebase others can build on! We find frontier reasoning degrades models’ ability to

Randall Balestriero (@randall_balestr) 's Twitter Profile Photo

Who got time to wait for delayed generalization (grokking)? We introduce GrokAlign, a provable solution to speed up the alignment between your model and your training data resulting in faster convergence + visual probing of your DN! Ofc it uses splines :) arxiv.org/abs/2506.12284

Simone Scardapane (@s_scardapane) 's Twitter Profile Photo

*From superposition to sparse codes: interpretable representations in NNs* by David Klindt Nina Miolane 🦋 @ninamiolane.bsky.social Patrik Reizinger Charlie O'Neill Nice overview on the linearity of NN representations and the use of sparse coding to recover interpretable activations. arxiv.org/abs/2503.01824

Neel Nanda (@neelnanda5) 's Twitter Profile Photo

VLMs are wild. You can glue a vision model to an LLM?! What's up? The key is that both models learn universal representations. Just training a linear projection, is enough! But image embeddings align only with LATE layer language activations as shown by LLM SAEs, causing issues
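The "linear glue" point above can be illustrated with a hypothetical numpy sketch: if two frozen models embed the same latent concepts linearly, then a single learned linear projection maps one embedding space into the other almost perfectly. The dimensions and synthetic data here are assumptions made up for the illustration, not anything from an actual VLM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assume image features and text features are linear transforms
# of the same shared latent concepts (the "universal representations" idea).
n, latent, d_img, d_txt = 500, 8, 32, 48
Z = rng.normal(size=(n, latent))             # shared concepts
img = Z @ rng.normal(size=(latent, d_img))   # stand-in vision-model embeddings
txt = Z @ rng.normal(size=(latent, d_txt))   # stand-in LLM activations

# Train ONLY a linear projection img -> txt (closed-form least squares),
# mirroring how a single projection layer can bridge the two frozen models.
W, *_ = np.linalg.lstsq(img, txt, rcond=None)
pred = img @ W

# If both models encode the same concepts linearly, alignment is near-perfect.
r2 = 1 - np.sum((pred - txt) ** 2) / np.sum((txt - txt.mean(0)) ** 2)
print(f"R^2 of linear projection: {r2:.3f}")
```

Under the linear-representation assumption the fit is essentially exact; in real VLMs the interesting question the tweet raises is *which* LLM layers the image embeddings align with.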

Sébastien Lachapelle (@seblachap) 's Twitter Profile Photo

My thesis is now online! umontreal.scholaris.ca/items/f8670d1c… This is more than just a list of publications. I invested a lot of time and passion writing this thesis in hope that it will make for an interesting read. Here's a summary of what you'll find in it.
