Michael Churchill (@churchillmic)'s Twitter Profile
Michael Churchill

@churchillmic

Fusion energy computational research engineer/physicist, head of digital engineering @PPPLab #fusion #AI

ID: 1299163639

Joined: 25-03-2013 12:04:12

5.5K Tweets

636 Followers

1.1K Following

Kirill Neklyudov (@k_neklyudov)'s Twitter Profile Photo

🧵(1/5) Have you ever wanted to combine different pre-trained diffusion models but don't have time or data to retrain a new, bigger model? 🚀 Introducing SuperDiff 🦹‍♀️ – a principled method for efficiently combining multiple pre-trained diffusion models solely during inference!
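As a rough illustration of the idea of combining pre-trained diffusion models purely at inference time, here is a toy NumPy sketch. The two "models" below are closed-form Gaussian scores and the combination rule is a plain weighted sum; SuperDiff's actual combination operators are more principled, so treat this only as a sketch of inference-time mixing, not the paper's method.

```python
import numpy as np

def score_a(x, mu=-2.0, sigma=1.0):
    # Score of N(mu, sigma^2): d/dx log p(x) = (mu - x) / sigma^2
    return (mu - x) / sigma**2

def score_b(x, mu=2.0, sigma=1.0):
    return (mu - x) / sigma**2

def combined_score(x, w=0.5):
    # No retraining: just mix the two pre-trained scores at every step.
    return w * score_a(x) + (1.0 - w) * score_b(x)

def langevin_sample(score, x0=0.0, steps=500, eta=0.01, seed=0):
    # Unadjusted Langevin dynamics driven by the combined score.
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(steps):
        x += eta * score(x) + np.sqrt(2.0 * eta) * rng.standard_normal()
    return x

sample = langevin_sample(combined_score)
```

Note that neither model is retrained: the combination happens entirely inside the sampling loop.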

Thea Energy (@theaenergy)'s Twitter Profile Photo

🧲 Our 3x3 superconducting magnet array recently recreated multiple unique iso-surfaces derived from Eos, our first integrated system, within 1% error of simulated predictions via fixed physical hardware & dynamic software controls. #ICYMI, more here: ow.ly/byoE50VyiPc

Slater Stich (@slaterstich)'s Twitter Profile Photo

Very excited to share our interview with Yang Song. This is Part 2 of our history of diffusion series — score matching, the SDE/ODE interpretation, consistency models, and more. Enjoy!

Xun Huang (@xunhuang1995)'s Twitter Profile Photo

The issue indeed lies in GroupNorm: it attempts to remove image statistics, but those stats are crucial for reconstruction. Networks find loopholes—injecting local spikes to manipulate post-normalization stats of the non-spike regions.
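A minimal NumPy sketch of what GroupNorm does to image statistics, and of the "spike loophole" described above. The shapes, values, and the size of the injected spike are all illustrative.

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    # x: (N, C, H, W). Each channel group is forced to zero mean / unit
    # variance, i.e. its first- and second-order image statistics are removed.
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(1, 8, 4, 4))
y = group_norm(x, num_groups=2)   # the mean/std of x are gone in y

# One huge local activation ("spike") dominates its group's statistics and
# rescales everything else in that group after normalization.
x_spiked = x.copy()
x_spiked[0, 0, 0, 0] = 1e3
y_spiked = group_norm(x_spiked, num_groups=2)
```

After the spike is injected, the post-normalization values of the non-spike region in that group shrink dramatically, which is exactly the loophole a network can exploit to smuggle statistics past the normalization.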

Akshay 🚀 (@akshay_pachaar)'s Twitter Profile Photo

Agentic applications require both A2A and MCP.
- MCP provides agents with access to tools.
- A2A allows agents to connect with other agents and collaborate in teams.
Today, I'll clearly explain what A2A is and how it can work with MCP.
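A hypothetical toy sketch of the division of labour described above. These classes are illustrative stand-ins only: the real MCP and A2A specifications are JSON-RPC protocols, not Python interfaces, and the names here are made up.

```python
class MCPToolServer:
    """MCP role: expose tools that a single agent can call."""
    def __init__(self):
        self.tools = {"add": lambda a, b: a + b}

    def call_tool(self, name, *args):
        return self.tools[name](*args)


class Agent:
    """A2A role: agents delegate tasks to each other and return results."""
    def __init__(self, name, mcp_server):
        self.name = name
        self.mcp = mcp_server

    def handle_task(self, task):
        # Fulfil a task delegated by another agent, using an MCP tool.
        op, args = task
        return self.mcp.call_tool(op, *args)


worker = Agent("worker", MCPToolServer())
# Another agent "sends" a task to the worker (A2A); the worker uses a tool (MCP).
result = worker.handle_task(("add", (2, 3)))
```

The point is the layering: tool access (MCP) sits inside each agent, while task delegation (A2A) happens between agents.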

Arash Vahdat (@arashvahdat)'s Twitter Profile Photo

📢 EquiVDM: Equivariant Video Diffusion Models with Temporally Consistent Noise. What's the role of equivariance in video diffusion models? And how can warped noise help with it? Is sampling from equivariant video models any easier? Project: research.nvidia.com/labs/genair/eq… w/ Chao Liu.

Felix Koehler @ ICLR'25 ✈️ (@felix_m_koehler)'s Twitter Profile Photo

Travelling to Singapore next week for #ICLR2025 presenting this paper (Sat 3 pm nr. 538): arxiv.org/abs/2502.19611 DM me (Whova, Email or X) if you want to chat about (autoregressive) neural emulators/operators for PDE, autodiff, differentiable physics, numerical solvers etc. 😊

Tatsunori Hashimoto (@tatsu_hashimoto)'s Twitter Profile Photo

You can tell we got maybe a bit too excited when our problem set 1 ended up being a mini-book at 50 pages of content, and covers what many courses would cover across 2 or 3 assignments (github.com/stanford-cs336…)

Stone Tao (@stone_tao)'s Twitter Profile Photo

Strangely, in robotics there's value in doing both BC and RL: RL for generating superhuman data, and BC (or VLAs these days) to copy that superhuman data. It's why I still do research on learning from demos, RL, and sim. It's the most underexplored area at the moment, with the most promise imo.

Rudy Gilman (@rgilman33)'s Twitter Profile Photo

The attention layers in the VAEs for FLUX, Stable Diffusion 3.5, and SDXL don't do anything. You can ablate them with almost no effect. At first I thought they might be involved in some clever circuitry—maybe moving global information—but no they're just flailing around doing
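The ablation methodology itself is easy to sketch: replace a block with the identity and measure how much the output changes. The toy model below is not the actual FLUX/SD VAE; all layers, shapes, and the tiny residual scale are made up to mimic a layer that barely does anything.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_block(x, w):
    return np.tanh(x @ w)

def attention_block(x, scale=1e-3):
    # Stand-in self-attention: a softmax-weighted mix over positions, added
    # with a tiny residual weight, mimicking a near-inert layer.
    scores = x @ x.T / np.sqrt(x.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return x + scale * (weights @ x)

w1 = rng.normal(size=(16, 16))
w2 = rng.normal(size=(16, 16))
x = rng.normal(size=(32, 16))

full = mlp_block(attention_block(mlp_block(x, w1)), w2)
ablated = mlp_block(mlp_block(x, w1), w2)   # attention -> identity

# If this relative change is tiny, the ablated layer was doing ~nothing.
rel_change = np.linalg.norm(full - ablated) / np.linalg.norm(full)
```

The same comparison on a layer that carries real signal would show a large relative change; a near-zero value is the "you can ablate them with almost no effect" observation in miniature.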

Chelsea Finn (@chelseabfinn)'s Twitter Profile Photo

Data curation is crucial for post-training recipes. But how do we curate? Curation is usually manual and tedious, and it's hard to tell whether a curation strategy will be reliable! We introduce an automatic way to curate, informed by the robot's policy learning.

Minghao Guo (@guomh14)'s Twitter Profile Photo

Excited to announce that TetSphere Splatting (tssplat) has been accepted as an ICLR 2025 oral. We introduce a new geometry representation that leverages an underused yet powerful primitive—volumetric tetrahedral meshes—for high‑quality 3D shape modelling. (1/7)

Tim Duignan (@timothyduignan)'s Twitter Profile Photo

Very nice universal NNP evaluation paper here: openreview.net/forum?id=me0fl… Orb stable for 99.96% of systems simulated. Incredible to me how easy it is to simulate almost any material you can think of starting from quantum mechanics now.

Min Choi (@minchoi)'s Twitter Profile Photo

Work by Lvmin Zhang & Maneesh Agrawala
Project Page: lllyasviel.github.io/frame_pack_git…
Code Repo: github.com/lllyasviel/Fra…

Taylor Sparks (@taylordsparks)'s Twitter Profile Photo

There are few scientists that I've admired more from afar than Tian Xie. I've followed his work since MIT CGCNN days and we have worked in parallel on generative AI for new crystal structures over the last five years. So it was a real pleasure to chat with him on the

Ricky T. Q. Chen (@rickytqchen)'s Twitter Profile Photo

Reward-driven algorithms for training dynamical generative models significantly lag behind their data-driven counterparts in terms of scalability. We aim to rectify this.
- Adjoint Matching (poster, Carles Domingo-Enrich): Sat 3pm
- Adjoint Sampling (oral, Aaron Havens): Mon 10am, FPI

Thodoris Kouzelis (@thkouz)'s Twitter Profile Photo

1/n Introducing ReDi (Representation Diffusion): a new generative approach that leverages a diffusion model to jointly capture – Low-level image details (via VAE latents) – High-level semantic features (via DINOv2)🧵

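A hypothetical sketch of the joint state a ReDi-style model would diffuse over: concatenate low-level VAE latents with high-level DINOv2 features and apply one shared DDPM forward process. All shapes, names, and the noise schedule below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
vae_latent = rng.normal(size=(4, 32, 32)).reshape(-1)   # low-level detail
dino_features = rng.normal(size=(256,))                 # high-level semantics

z0 = np.concatenate([vae_latent, dino_features])        # joint diffusion state

def forward_noise(z0, t, alpha_bar, rng):
    # Standard DDPM forward process q(z_t | z_0) on the concatenated state.
    eps = rng.standard_normal(z0.shape)
    return np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps

alpha_bar = np.linspace(1.0, 1e-4, 1000)   # toy noise schedule
z_t = forward_noise(z0, t=500, alpha_bar=alpha_bar, rng=rng)
```

Because both representations live in one state vector, a single denoiser learns their joint distribution rather than modelling pixels and semantics separately.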