Mathias Niepert (@mniepert) 's Twitter Profile
Mathias Niepert

@mniepert

Professor @ University of Stuttgart, Scientific Advisor @ NEC Labs, GraphML, geometric deep learning, ML for Science. Formerly @IUBloomington and @uwcse

ID: 1872684109

Link: http://www.matlog.net · Joined: 16-09-2013 18:48:59

1.1K Tweets

2.2K Followers

459 Following

Duy H. M. Nguyen (@duyhmnguyen1) 's Twitter Profile Photo

🤔 What if a simple initialization trick in Transformers wasn’t just an engineering hack, but a theoretically optimal solution? 🚀 Thrilled to share our latest research: "𝐎𝐧 𝐙𝐞𝐫𝐨-𝐈𝐧𝐢𝐭𝐢𝐚𝐥𝐢𝐳𝐞𝐝 𝐀𝐭𝐭𝐞𝐧𝐭𝐢𝐨𝐧: 𝐎𝐩𝐭𝐢𝐦𝐚𝐥 𝐏𝐫𝐨𝐦𝐩𝐭 𝐚𝐧𝐝 𝐆𝐚𝐭𝐢𝐧𝐠

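For context on the trick the tweet alludes to: zero-initialized, gated attention (as popularized by adapter-style fine-tuning) routes attention to newly added prompt tokens through a learnable gate that starts at zero, so the pretrained model's behaviour is unchanged at initialization. Below is a minimal PyTorch sketch of that general idea, not the paper's exact formulation; the single-head layout, tanh gate, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZeroInitPromptAttention(nn.Module):
    """Single-head attention where learnable prompt tokens are attended to
    through a gate initialized at zero, so pretrained behaviour is preserved
    at the start of fine-tuning. Illustrative sketch only."""

    def __init__(self, dim: int, num_prompts: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        # The gate starts at exactly zero: prompt tokens contribute nothing
        # until the optimizer moves it away from zero.
        self.gate = nn.Parameter(torch.zeros(1))
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q = self.q(x)
        k_x, v_x = self.k(x), self.v(x)
        p = self.prompts.unsqueeze(0).expand(x.size(0), -1, -1)
        k_p, v_p = self.k(p), self.v(p)

        attn_x = F.softmax(q @ k_x.transpose(-2, -1) * self.scale, dim=-1)
        attn_p = F.softmax(q @ k_p.transpose(-2, -1) * self.scale, dim=-1)

        # Gated combination: tanh keeps the gate bounded, and zero init means
        # the output equals vanilla self-attention at step 0.
        return attn_x @ v_x + torch.tanh(self.gate) * (attn_p @ v_p)
```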
Christopher Morris (@chrsmrrs) 's Twitter Profile Photo

If you are in the Bay Area, consider attending our workshop, "Graph Learning Meets TCS," at the Simons Institute (simons.berkeley.edu/workshops/grap…).

Viktor Zaverkin🇺🇦 (@viktorzaverkin) 's Twitter Profile Photo

🚨 New preprint: How well do universal ML potentials perform in biomolecular simulations under realistic conditions? There's growing excitement around ML potentials trained on large datasets. But do they deliver in simulations of biomolecular systems? It’s not so clear. 🧵 1/

Viktor Zaverkin🇺🇦 (@viktorzaverkin) 's Twitter Profile Photo

BUT: These improvements do not consistently translate into more accurate physical observables in simulations. Densities, radial distribution functions, and conformational ensembles show inconsistent trends with model size and long-range electrostatics. 7/

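For readers who want to reproduce one of the observables mentioned above, here is a minimal NumPy sketch of how a radial distribution function g(r) is commonly estimated from one frame of a cubic, periodic simulation box; the frame layout and normalization follow the standard textbook recipe and are not tied to the preprint's pipeline.

```python
import numpy as np

def radial_distribution(positions: np.ndarray, box: float,
                        r_max: float, n_bins: int = 100):
    """Estimate g(r) for one frame of N particles in a cubic periodic box.

    positions: (N, 3) Cartesian coordinates
    box:       edge length of the cubic box (same units as positions)
    """
    n = len(positions)
    # Pairwise displacements with the minimum-image convention.
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box * np.round(diff / box)
    dist = np.linalg.norm(diff, axis=-1)[np.triu_indices(n, k=1)]

    hist, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[1:] + edges[:-1])
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    # Normalize by the pair count expected for an ideal gas at the same density.
    ideal_pairs = (n * (n - 1) / 2) * shell_vol / box ** 3
    return r, hist / ideal_pairs
```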
Corin Wagen (@corinwagen) 's Twitter Profile Photo

NNPs are becoming very accurate for gas-phase small molecules. But can current approaches scale to the condensed phase, or will scale inevitably create new challenges? Today on the blog, we review how scientists are incorporating long-range forces into NNPs.

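As background for the long-range discussion: one common recipe is to let the short-range NNP predict per-atom partial charges and add an explicit electrostatic term on top of the learned energy (with Ewald/PME handling periodicity in the condensed phase). The sketch below shows that decomposition schematically for a non-periodic system; `short_range_model` and `charge_model` are hypothetical callables, and the unit constant is one common MD convention.

```python
import numpy as np

COULOMB_CONST = 332.0637  # kcal·Å/(mol·e²), a common MD unit choice

def total_energy(positions, short_range_model, charge_model):
    """Schematic NNP energy = learned short-range term + explicit Coulomb tail.

    positions:         (N, 3) Cartesian coordinates in Å
    short_range_model: callable returning a scalar short-range energy
    charge_model:      callable returning (N,) predicted partial charges
    """
    e_short = short_range_model(positions)
    q = charge_model(positions)

    # Pairwise Coulomb energy over unique atom pairs (no periodicity here;
    # condensed-phase codes would use Ewald/PME instead of a bare 1/r sum).
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    i, j = np.triu_indices(len(positions), k=1)
    e_coulomb = COULOMB_CONST * np.sum(q[i] * q[j] / r[i, j])

    return e_short + e_coulomb
```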
Mircea Petrache (@mirceasci) 's Twitter Profile Photo

Hey, it happened again! Two tenure-track positions at UC Chile, in the maths/stats department. We also have good and growing links with the department of computational engineering, so people wanting to bridge maths and CS are welcome. mathjobs.org/jobs/list/27012

Fabian Zills (@pythonfz) 's Twitter Profile Photo

Which foundation MLIP is best suited for your application? The MLIPX framework can help you answer this question. Check out the paper iopscience.iop.org/article/10.108… and github.com/basf/mlipx #MLIPX

Petar Veličković (@petarv_93) 's Twitter Profile Photo

The EEML is coming to Podgorica 🇲🇪 on 8 November! Mark your calendars 🚀 Beyond excited to share that we're organising the Montenegrin ML Workshop (MMLW'25), part of EEML Workshop Series, together with Montenegrin AI Association ❤️ (Free) registration required -- please see below!

ICLR 2025 (@iclr_conf) 's Twitter Profile Photo

We’ve received A LOT OF submissions this year 🤯🤯 and are excited to see so much interest! To ensure high-quality review, we are looking for more dedicated reviewers. If you'd like to help, please sign up here docs.google.com/forms/d/e/1FAI…

Yuyang Wang (@yuyangw95) 's Twitter Profile Photo

New preprint & open-source! 🚨 “SimpleFold: Folding Proteins is Simpler than You Think” (arxiv.org/abs/2509.18480). We ask: Do protein folding models really need expensive and domain-specific modules like pair representation? We build SimpleFold, a 3B scalable folding model solely

Bruno Trentini (@brtrentini) 's Twitter Profile Photo

Call for Papers: Machine Learning for Simulations in Biology & Chemistry (Simbiochem) Workshop @ EurIPS 2025 📍 Copenhagen, Denmark (Dec 6th or 7th) 📅 Submission Deadline: 10th October, 2025 👉 Submit your paper: simbiochem.com Machine learning has transformed

Samir dar (@samir_darouich) 's Twitter Profile Photo

I just read the Equilibrium Matching (EqM) paper; it’s excellent and insightful work! Interestingly, we recently published a related method called Adaptive Equilibrium Flow Matching (AEFM). Leaving out “adaptive” reveals strong conceptual parallels between the two approaches.

Mathias Niepert (@mniepert) 's Twitter Profile Photo

Really interesting comparison between recent equilibrium flow matching and equilibrium matching papers arxiv.org/abs/2507.16521

Rishabh Anand 🧬 (@rishabh16_) 's Twitter Profile Photo

"Equivariance matters even more at larger scales" ~ arxiv.org/abs/2510.09768 All the more reason we need scalable architectures with symmetry awareness. I know this is an obvious ask but I'm still confident that scaling and inductive bias need not be at odds. This paper

"Equivariance matters even more at larger scales" ~ arxiv.org/abs/2510.09768 

All the more reason we need scalable architectures with symmetry awareness. I know this is an obvious ask but I'm still confident that scaling and inductive bias need not be at odds. 

This paper
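As a concrete reminder of what the quoted claim is about: a rotation-equivariant point-cloud model f satisfies f(x Rᵀ) = f(x) Rᵀ for any rotation R. A quick numerical check of that property (with a hypothetical `model` mapping (N, 3) coordinates to (N, 3) vectors) could look like this sketch.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def equivariance_error(model, positions: np.ndarray, trials: int = 10) -> float:
    """Largest deviation from f(x R^T) == f(x) R^T over random rotations.

    model:     maps an (N, 3) array of positions to an (N, 3) array of vectors
    positions: (N, 3) test coordinates
    """
    worst = 0.0
    for _ in range(trials):
        rot = Rotation.random().as_matrix()          # random R in SO(3)
        rotated_out = model(positions @ rot.T)       # f applied to rotated input
        out_rotated = model(positions) @ rot.T       # rotate the original output
        worst = max(worst, np.abs(rotated_out - out_rotated).max())
    return worst
```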
Erik Bekkers (@erikjbekkers) 's Twitter Profile Photo

As promised after our great discussion, Chaitanya Joshi! Your inspiring post led to our formal rejoinder: the Platonic Transformer. What if the "Equivariance vs. Scale" debate is a false premise? Our paper shows you can have both. 📄 Preprint: arxiv.org/abs/2510.03511 1/9

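Independent of the paper's specific construction, one generic way to give an otherwise standard point-cloud network exact equivariance to a finite rotation group, such as the 24 proper rotations of the cube associated with the Platonic solids in the name, is to average the network over that group. The sketch below illustrates plain group averaging and is not the Platonic Transformer's architecture; `model` is again a hypothetical (N, 3) → (N, 3) map.

```python
import itertools
import numpy as np

def octahedral_rotations():
    """The 24 proper rotations of the cube as 3x3 signed permutation matrices."""
    mats = []
    for perm in itertools.permutations(range(3)):
        for signs in itertools.product((1.0, -1.0), repeat=3):
            m = np.zeros((3, 3))
            for row, (col, s) in enumerate(zip(perm, signs)):
                m[row, col] = s
            if np.isclose(np.linalg.det(m), 1.0):   # keep rotations, drop reflections
                mats.append(m)
    return mats                                     # exactly 24 matrices

def symmetrized(model, positions: np.ndarray) -> np.ndarray:
    """Average model outputs over the group: exactly equivariant to those 24 rotations."""
    group = octahedral_rotations()
    outputs = [model(positions @ g.T) @ g for g in group]
    return np.mean(outputs, axis=0)
```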