Joey Bose (@bose_joey)'s Twitter Profile
Joey Bose

@bose_joey

Post-doc @UniofOxford w/ @mmbronstein. Into Geometry ∩ Generative Models and AI4Science. @Mila_Quebec Affiliate member. PhD from @Mila_Quebec / McGill.

ID: 950132983083671553

Link: https://joeybose.github.io/ · Joined: 07-01-2018 22:32:10

1.1K Tweets

2.2K Followers

270 Following

Certified papers at TMLR (@tmlrcert)'s Twitter Profile Photo

New #ExpertCertification: Investigating Generalization Behaviours of Generative Flow Networks, by Lazar Atanackovic, Emmanuel Bengio: openreview.net/forum?id=9L0B5… #gflownets #generative #flow

Ricky T. Q. Chen (@rickytqchen)'s Twitter Profile Photo

We've open-sourced Adjoint Sampling! It's part of a bundled release showcasing FAIR's research and open-source commitment to AI for science. github.com/facebookresear… x.com/AIatMeta/statu…

Mandana Samiei (@mandanasamiei)'s Twitter Profile Photo

A great collab with former labmates Anthony GX-Chen & Dongyan! Interesting cognitive limitation in LMs: strong disjunctive bias leads to poor performance on conjunctive causal inference tasks. Mirrors adult human biases—possibly a byproduct of training data priors.

Erik Bekkers (@erikjbekkers)'s Twitter Profile Photo

Great discussion, Chaitanya K. Joshi! We also explored this with extensive experiments in our recent paper: arxiv.org/abs/2501.01999. We find, among other things, that equivariant models in a sense scale even better than non-equivariant ones, going more or less completely against the vibes from your post 😅 (1/5)

Joey Bose (@bose_joey)'s Twitter Profile Photo

Really excited about this new paper. As someone who spent a ton of time training regular flows with MLE and got burned, FORT training actually works, making flows cool again 🌊

Alex Tong (@alexandertong7)'s Twitter Profile Photo

So excited to share our new paper, FORT! 🎉 We're showing a simple regression approach to train discrete normalizing flows (think CNFs!), which beats conventional maximum likelihood training in performance. Check it out! 👇
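
As a hedged illustration of the general recipe the tweet describes (regression instead of maximum likelihood), here is a minimal PyTorch sketch; the architecture, pairing, and loss are assumptions for the demo, not FORT's actual objective:

```python
# Minimal sketch: train an invertible coupling layer by plain regression onto
# paired targets, avoiding the log|det Jacobian| term that MLE training needs.
# All choices here (architecture, pairing, loss) are illustrative assumptions.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One invertible block: rescale/shift half the input conditioned on the other half."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim))  # predicts log-scale and shift

    def forward(self, z):
        z1, z2 = z.chunk(2, dim=-1)
        log_s, t = self.net(z1).chunk(2, dim=-1)
        return torch.cat([z1, z2 * log_s.exp() + t], dim=-1)

dim = 4
flow = AffineCoupling(dim)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)

for step in range(1000):
    # In practice the (noise, data) pairing would come from a meaningful
    # coupling (e.g. a teacher model or OT); an independent toy pairing is
    # used here only so the snippet runs end to end.
    z = torch.randn(256, dim)
    x = 0.5 * torch.randn(256, dim) + 1.0
    loss = ((flow(z) - x) ** 2).mean()  # regression loss, no Jacobian terms
    opt.zero_grad(); loss.backward(); opt.step()
```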

Alex Tong (@alexandertong7)'s Twitter Profile Photo

I'm particularly excited about the opportunities this opens up for new, fast architectures that are trained with regression but still have fast and accurate likelihood computation, which is heavily used in e.g. our work on Boltzmann Generators: arxiv.org/abs/2502.18462
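
For context, the likelihood computation referred to here is the standard change-of-variables identity for an invertible map f_θ from data to a base density p_Z (the textbook formula, not anything specific to the papers above):

```latex
\log p_X(x) \;=\; \log p_Z\big(f_\theta(x)\big)
\;+\; \log \left| \det \frac{\partial f_\theta(x)}{\partial x} \right|
```

Coupling-style discrete-time flows keep this Jacobian triangular, so the log-determinant reduces to a cheap sum of diagonal terms at evaluation time.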

Pranam Chatterjee (@pranamanam)'s Twitter Profile Photo

Easily one of the coolest advances of the year! 🔥 A one-step normalizing flow training method with no expensive Jacobian computations? Imagine how efficient sampling of complex bio distributions will be now! 💨 Congrats Alex Tong Joey Bose Michael Bronstein + team! 🙌

Christian A. Naesseth @ ICLR, AABI 🇸🇬 (@canaesseth)'s Twitter Profile Photo

I like the resurgence of "old-school" NFs that is happening. Here, to distill CNFs for free-energy estimation.
RL: x.com/GhugareRaj/sta…
Diffusion models (NFDM): x.com/GrigoryBartosh…
Latent SDEs (SDE Matching): x.com/canaesseth/sta…
#SDE #Diffusion #GenAI #ML #FlowMatching

Joey Bose (@bose_joey)'s Twitter Profile Photo

🚨New paper on RetroSynthesis Planning with Discrete Diffusion/Flow Matching. We find that training these models with Synthons + Inference Reward Steering is super powerful. Check out the thread 👇 Led by the amazing Robin Yadav and an awesome group of collaborators at UBC.

Renjie Liao (@lrjconan)'s Twitter Profile Photo

🚀Our new work (RetroSynFlow) on retrosynthesis:
1) SOTA discrete flow matching that leverages synthons: products -> synthons -> reactants
2) effective test-time reward steering (see the sketch below)
Led by my amazing students Robin Yadav, Qi Yan, with stellar collaborators Guy Wolf & Joey Bose
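
For intuition, one common form of test-time reward steering is simple best-of-N selection; this sketch is a hedged stand-in (`model.sample` and `reward` are assumed interfaces), not RetroSynFlow's actual steering rule:

```python
# Hypothetical best-of-N reward steering at inference time. The real method
# may steer the sampling process itself rather than post-hoc selecting.
def reward_steered_sample(model, product, reward, n_candidates=32):
    """Draw several reactant proposals for one product, keep the best-scoring one."""
    candidates = [model.sample(product) for _ in range(n_candidates)]
    return max(candidates, key=lambda c: reward(product, c))
```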

Majdi Hassan (@majdi_has)'s Twitter Profile Photo

(1/n)🚨You can train a model solving DFT for any geometry almost without training data!🚨 Introducing Self-Refining Training for Amortized Density Functional Theory — a variational framework for learning a DFT solver that predicts the ground-state solutions for different geometries.
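
As a loose caricature of what "variational" and "almost without training data" suggest (all names below are hypothetical, and the paper's actual DFT objective and self-refinement loop are certainly more involved): the network maps a geometry to a trial solution and is trained by minimizing an energy functional directly, so no ground-state labels are needed.

```python
# Caricature of amortized variational training. `energy` stands in for a
# differentiable DFT energy functional; by the variational principle, lower
# energy means closer to the ground state, so the loss needs no labels.
import torch
import torch.nn as nn

def energy(params, geometry):
    # Toy stand-in for E[psi; geometry]; minimized at params = sin(geometry).
    return ((params - geometry.sin()) ** 2).sum(-1).mean()

solver = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 8))
opt = torch.optim.Adam(solver.parameters(), lr=1e-3)

for step in range(1000):
    geometry = torch.randn(128, 8)   # sample input geometries
    params = solver(geometry)        # amortized prediction of a trial solution
    loss = energy(params, geometry)  # the physics itself is the training signal
    opt.zero_grad(); loss.backward(); opt.step()
```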

Jiatao Gu (@thoma_gu)'s Twitter Profile Photo

In this latest work "STARFlow: Scaling Latent Normalizing Flows for High-resolution Image Synthesis", we show that normalizing flows (with the change of variable formula) can be scaled to synthesize high-resolution & text-conditioned images at diffusion-level quality. (1/n)

Giannis Daras (@giannis_daras)'s Twitter Profile Photo

Announcing Ambient Diffusion Omni — a framework that uses synthetic, low-quality, and out-of-distribution data to improve diffusion models.

State-of-the-art ImageNet performance. Strong text-to-image results in just 2 days on 8 GPUs.

Filtering ❌
Clever data use ✅
Marta Skreta (@martoskreto)'s Twitter Profile Photo

🧵(1/6) Delighted to share our ICML 2025 spotlight paper: Feynman-Kac Correctors (FKCs) in Diffusion. Picture this: it's inference time and we want to generate new samples from our diffusion model. But we don't want to just copy the training data – we may want to sample…

Molei Tao (@moleitaomath)'s Twitter Profile Photo

Generative modeling of data with multiple modalities (e.g. continuous, discrete, manifold, constrained)? People often tokenize everything into one modality -> use an AR transformer. Want an encoder-free, native-multimodal diffusion model? Our #icml2025 paper arxiv.org/abs/2506.07903 is a general approach.