Johannes Brandstetter (@jo_brandstetter)'s Twitter Profile
Johannes Brandstetter

@jo_brandstetter

Founder and Chief Scientist @ Emmi AI, Ass. Prof / Group Lead @jkulinz. Former @MSFTResearch, @UvA_Amsterdam, @CERN, @tu_wien

ID: 1042167698908700672

Link: https://brandstetter-johannes.github.io | Joined: 18-09-2018 21:45:16

846 Tweets

4.4K Followers

654 Following

Peyman Milanfar (@docmilanfar)

Michael Jordan gave a short, excellent, and provocative talk recently in Paris - here's a few key ideas

- It's all just machine learning (ML) - the AI moniker is hype 

- The late Dave Rumelhart should've received a Nobel prize for his early ideas on making backprop work

1/n
Carlos Vonessen (@carlos_vonessen)

🌶️spicy paper alert🌶️

We present TABASCO, a new model for small molecule generation that achieves state-of-the-art PoseBusters validity while also being ~10x faster

This is all achieved despite
✖️no equivariance
✖️no self-conditioning
✖️no bond modelling

what we found🧵(1/n)
Jerry Liu (@jerrywliu)

1/10 ML can solve PDEs – but precision 🔬 is still a challenge. Towards high-precision methods for scientific problems, we introduce BWLer 🎳, a new architecture for physics-informed learning achieving (near-)machine-precision (up to 10⁻¹² RMSE) on benchmark PDEs. 🧵 How it works:

Albert Gu (@_albertgu)

I converted one of my favorite talks I've given over the past year into a blog post.

"On the Tradeoffs of SSMs and Transformers"
(or: tokens are bullshit)

In a few days, we'll release what I believe is the next major advance for architectures.
Floor Eijkelboom (@feijkelboom)

Flow Matching (FM) is one of the hottest ideas in generative AI - and it’s everywhere at #ICML2025. But what is it? And why is it so elegant? 🤔 This thread is an animated, intuitive intro into (Variational) Flow Matching - no dense math required. Let's dive in! 🧵👇
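
For readers who want a quick taste before opening the thread, below is a minimal sketch of the plain (non-variational) conditional Flow Matching training objective under the standard linear interpolation path. The names `velocity_net` and `W`, the toy data, and all numbers are illustrative placeholders, not taken from the thread.

```python
# Minimal sketch of the conditional flow matching (CFM) objective,
# assuming the linear path x_t = (1 - t) * x0 + t * x1.
# `velocity_net` is a hypothetical stand-in for any regressor v_theta(x_t, t);
# here it is a tiny linear model purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
dim = 2

x1 = rng.normal(loc=3.0, size=(256, dim))   # stand-in "data" batch
x0 = rng.normal(size=(256, dim))            # noise batch, x0 ~ N(0, I)

# Toy linear velocity model: v(x, t) = [x, t] @ W  (placeholder weights)
W = rng.normal(scale=0.1, size=(dim + 1, dim))

def velocity_net(x_t, t):
    inp = np.concatenate([x_t, t], axis=1)
    return inp @ W

# CFM loss:  E_{t, x0, x1} || v_theta(x_t, t) - (x1 - x0) ||^2
t = rng.uniform(size=(256, 1))
x_t = (1.0 - t) * x0 + t * x1   # point on the interpolation path
target = x1 - x0                # velocity of the linear path
loss = np.mean(np.sum((velocity_net(x_t, t) - target) ** 2, axis=1))
print(f"CFM loss on this batch: {loss:.3f}")
```

In a real setup the linear model would be a neural network trained on this regression loss; sampling then integrates the learned velocity field from noise to data.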

Michael A. Arouet (@michaelaarouet)

Just in case someone was wondering why many Chinese products are becoming better and cheaper than European products, and it’s not only cars. Europe prefers to spend money on welfare and green projects instead. Good luck.
Alberto Maria Pepe (@albertomariape)

Bit of a late update, but I finally had my PhD viva!
It was an honour and a privilege to have my thesis read and examined by Petar Veličković and Johannes Brandstetter, two giants in Geometric Deep Learning.

It will soon be online after corrections :)