alex negrón (@algomage) 's Twitter Profile
alex negrón

@algomage

—— mathematical science @Princeton PACM —— probability ∪ deep learning ∪ complexity

ID: 1575875701764665346

Joined: 30-09-2022 15:51:51

268 Tweets

69 Followers

698 Following

Jason Lee (@jasondeanlee) 's Twitter Profile Photo

New work arxiv.org/abs/2506.05500 on learning multi-index models with Alex Damian and Joan Bruna. Multi-index models are of the form y = g(Ux), where U is an r×d matrix mapping from d dimensions to r dimensions, with d >> r, and g is an arbitrary function. Examples of multi-index models are any neural net
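The form y = g(Ux) described above is easy to simulate; a minimal sketch in NumPy, where the particular U, link function g, and dimensions are illustrative placeholders rather than anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, n = 100, 2, 500  # ambient dim, index dim (d >> r), number of samples

# Hypothetical projection U: an r x d matrix with orthonormal rows
U = np.linalg.qr(rng.standard_normal((d, r)))[0].T  # shape (r, d)

def g(z):
    # An arbitrary link function acting on the r-dimensional projection z = Ux
    return np.tanh(z[:, 0]) * z[:, 1] ** 2

X = rng.standard_normal((n, d))  # inputs in R^d
y = g(X @ U.T)                   # labels depend on X only through Ux
```

The point of the model class is visible in the last line: y is a function of the r-dimensional projection X @ U.T alone, so learning reduces to recovering the low-dimensional subspace spanned by the rows of U.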

Trevor McCourt (@trevormccrt1) 's Twitter Profile Photo

Looking to start a probabilistic machine learning residency program at Extropic! Would very much appreciate connections to math/stats/physics PhD students/postdocs who would enjoy working on a contrarian approach to large-scale AI.

Physical Review Letters (@physrevlett) 's Twitter Profile Photo

A spring–block phenomenological model with asymmetric friction elucidates the role of nonlinearity and randomness in the theory of feature learning for deep neural networks go.aps.org/3ZTIRwL

SIAM Activity Group on Dynamical Systems (@dynamicssiam) 's Twitter Profile Photo

Book chapter: "A Dynamical Systems Perspective on the Analysis of Neural Networks" (by Dennis Chemnitz, Maximilian Engel, Christian Kuehn, Sara-Viola Kuntz): arxiv.org/abs/2507.05164

florence (@morallawwithin) 's Twitter Profile Photo

Being a grad student is peak human existence. The end goal of all political and technological progress should be allowing everyone to be a grad student forever

arXiv math.PR Probability (@mathprb) 's Twitter Profile Photo

Radosław Adamczak: Concentration of measure for non-linear random matrices with applications to neural networks and non-commutative polynomials arxiv.org/abs/2507.07625 arxiv.org/pdf/2507.07625 arxiv.org/html/2507.07625

Cengiz Pehlevan (@cpehlevan) 's Twitter Profile Photo

Great to see this one finally out in PNAS! Asymptotic theory of in-context learning by linear attention pnas.org/doi/10.1073/pn… Many thanks to my amazing co-authors Yue Lu, Mary Letey, Jacob Zavatone-Veth and Anindita Maiti
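Linear attention drops the softmax of standard attention, so scores are raw inner products and the n×n attention matrix never has to be materialized; that associativity is what makes asymptotic analyses like the one above tractable. A minimal sketch (dimensions and data are illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

n, dk, dv = 6, 4, 3  # sequence length, key dim, value dim
Q = rng.standard_normal((n, dk))
K = rng.standard_normal((n, dk))
V = rng.standard_normal((n, dv))

# Linear (softmax-free) attention: by associativity, Q @ (K^T V)
# costs O(n * dk * dv) instead of the O(n^2) quadratic form.
out_linear = Q @ (K.T @ V)

# The equivalent quadratic form, for comparison
out_quadratic = (Q @ K.T) @ V
```

Both expressions give the same output; the left-hand grouping is what lets linear attention scale linearly in sequence length.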

Akshat (@star_stufff) 's Twitter Profile Photo

An excellent intro to advanced linear algebra by Teo Bancha. Goes right up to Lie groups, random matrices and curved space-times. arxiv.org/abs/2506.18666

Daniel Litt (@littmath) 's Twitter Profile Photo

kind of crazy that literally every time I think to myself “ok, I probably don’t need to learn this right now” it ends up being fundamentally important to my research a couple years later

SIAM Activity Group on Dynamical Systems (@dynamicssiam) 's Twitter Profile Photo

"From Kinetic Theory to AI: a Rediscovery of High-Dimensional Divergences and Their Properties" (by Gennaro Auricchio, Giovanni Brigati, Paolo Giudici, Giuseppe Toscani): arxiv.org/abs/2507.11387

Yi Ma (@yimatweets) 's Twitter Profile Photo

After studying the mathematics and computation of sparsity for nearly 20 years, I have just realized that it is much more important than I ever appreciated. It truly serves as *the* model problem for understanding deep networks and even intelligence to a large extent, from a

arXiv math.PR Probability (@mathprb) 's Twitter Profile Photo

Klartag, Lehec: Thin-shell bounds via parallel coupling arxiv.org/abs/2507.15495 arxiv.org/pdf/2507.15495 arxiv.org/html/2507.15495

Andy Keller (@t_andy_keller) 's Twitter Profile Photo

Why do video models handle motion so poorly? It might be lack of motion equivariance. Very excited to introduce: Flow Equivariant RNNs (FERNNs), the first sequence models to respect symmetries over time. Paper: arxiv.org/abs/2507.14793 Blog: kempnerinstitute.harvard.edu/research/deepe… 1/🧵

Phillip Isola (@phillip_isola) 's Twitter Profile Photo

Really interesting paper. Fits the theme that we should make our modeling problems harder, not easier, so that they are forced to learn more and generalize better.

SIAM Activity Group on Dynamical Systems (@dynamicssiam) 's Twitter Profile Photo

Tutorial: "Building Intuition for Dynamical Mean-Field Theory: A Simple Model and the Cavity Method" (by Emmy Blumenthal): arxiv.org/abs/2507.16654