
Joey Bose
@bose_joey
Post-doc @UniofOxford w/@mmbronstein. Into Geometry ∩ Generative Models and AI4Science. @Mila_Quebec Affiliate member. Phd from @Mila_Quebec / McGill.
ID: 950132983083671553
https://joeybose.github.io/ 07-01-2018 22:32:10
1.1K Tweets
2.2K Followers
270 Following



A great collab with former labmates Anthony GX-Chen & Dongyan! Interesting cognitive limitation in LMs: strong disjunctive bias leads to poor performance on conjunctive causal inference tasks. Mirrors adult human biases—possibly a byproduct of training data priors.
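
The conjunctive vs. disjunctive setup is easy to make concrete. Below is a minimal, hypothetical blicket-detector-style probe (my own illustration, not the paper's code; the object names, rule encoding, and prompt template are all made up): in the disjunctive world any single blicket turns the machine on, while in the conjunctive world two blickets must be present together.

```python
# Hypothetical sketch of a conjunctive vs. disjunctive causal inference probe.
# An LM with a disjunctive bias will tend to answer the conjunctive condition
# as if a single object sufficed.

from itertools import combinations

OBJECTS = ["red cube", "blue sphere", "green pyramid"]

def machine_activates(present, blickets, rule):
    """Ground-truth activation under a disjunctive or conjunctive rule."""
    hits = sum(obj in blickets for obj in present)
    return hits >= 1 if rule == "disjunctive" else hits >= 2

def make_prompt(rule):
    """Render all observed trials as text, then ask which objects are blickets."""
    blickets = OBJECTS[:2]  # ground truth: the first two objects are blickets
    lines = []
    for r in range(1, len(OBJECTS) + 1):
        for present in combinations(OBJECTS, r):
            on = machine_activates(present, blickets, rule)
            lines.append(f"Placing [{', '.join(present)}] -> machine {'ON' if on else 'OFF'}.")
    return "\n".join(lines) + "\nWhich objects are blickets? Answer with a list."

if __name__ == "__main__":
    # Under the conjunctive rule, the evidence is only consistent with
    # blickets that must co-occur to activate the machine.
    print(make_prompt("conjunctive"))
```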

What a bombshell of a paper 💣 Not unexpected from the prodigious Damien Ferbach 🎉

🎙️ Early career researchers: Athanasios Litsios, Bo Xia, Xinyi Zhang, Yakir Reshef, Yichen Si, Joey Bose, Sandeep Kambhampati, Pınar Demetçi, Michelle M. Li (李敏蕊). Learn more about the symposium in our recap. (via Broad Institute)

Great discussion, Chaitanya K. Joshi! We also explored this with extensive experiments in our recent paper: arxiv.org/abs/2501.01999. Among other things, we find that equivariant models in a sense scale even better than non-equivariant ones, which goes more or less completely against the vibes from your post 😅 1/5
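
To pin down what "equivariant" means operationally in that comparison, here is a toy check (my own numpy illustration, entirely separate from the paper's models): a map f on point clouds is rotation-equivariant if rotating the input rotates the output, f(x Rᵀ) = f(x) Rᵀ.

```python
# Toy equivariance check (illustration only, not the paper's experiments).

import numpy as np

def toy_equivariant_model(points):
    """A trivially equivariant map: scale each centered point by a
    rotation-invariant per-point weight (distance to the centroid)."""
    centered = points - points.mean(axis=0, keepdims=True)
    weights = np.linalg.norm(centered, axis=1, keepdims=True)  # invariant
    return weights * centered  # rotates with the input

def random_rotation(rng):
    """Sample a random 3x3 orthogonal matrix via QR decomposition."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.diag(r))  # fix column signs for uniformity

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 3))
R = random_rotation(rng)

err = np.abs(toy_equivariant_model(x @ R.T) - toy_equivariant_model(x) @ R.T).max()
print(f"equivariance error: {err:.2e}")  # ~1e-15 for an exactly equivariant map
```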






🚨 New paper on Retrosynthesis Planning with Discrete Diffusion/Flow Matching. We find that training these models with Synthons + Inference Reward Steering is super powerful. Check out the thread 👇 Led by the amazing Robin Yadav and an awesome group of collaborators at UBC.
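
For intuition on the inference-time reward steering part, here is a hedged, generic sketch of reward-steered sampling (not the paper's method or code; the proposal, reward, and temperature below are toy stand-ins): at each step, draw several candidate next states from the model's proposal and resample them in proportion to exp(reward / temperature).

```python
# Generic sketch of reward-steered sampling for a discrete sampler.

import numpy as np

def reward(state):
    """Stand-in reward; a real system might score validity or
    synthesizability of the partially built molecule."""
    return -np.abs(state.sum())  # toy: prefer balanced states

def steered_step(propose, state, rng, n_candidates=8, temp=0.1):
    """One reward-steered transition: sample candidates, softmax-resample."""
    candidates = [propose(state, rng) for _ in range(n_candidates)]
    logits = np.array([reward(c) for c in candidates]) / temp
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

def propose(state, rng):
    """Toy proposal: flip one random token in a {-1, +1} sequence."""
    nxt = state.copy()
    nxt[rng.integers(len(nxt))] *= -1
    return nxt

rng = np.random.default_rng(0)
state = rng.choice([-1, 1], size=32)
for _ in range(100):
    state = steered_step(propose, state, rng)
print("final reward:", reward(state))
```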




Normalizing flows can be strong vision generative models. Check out the latest STARFlow work led by Jiatao Gu @ CVPR2025
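
As a refresher on why flows are appealing generators, a self-contained affine-coupling sketch (a generic coupling layer for illustration, not STARFlow's architecture): the map is exactly invertible with a cheap Jacobian log-determinant, so both exact sampling and exact likelihoods are available.

```python
# Generic affine coupling layer (illustration, not STARFlow).

import numpy as np

def coupling_forward(x, w, b):
    """Affine coupling: transform the second half conditioned on the first."""
    x1, x2 = np.split(x, 2, axis=-1)
    h = np.tanh(x1 @ w + b)          # any network of x1 works here
    log_s, t = np.split(h, 2, axis=-1)
    y2 = x2 * np.exp(log_s) + t
    log_det = log_s.sum(axis=-1)     # exact Jacobian log-determinant
    return np.concatenate([x1, y2], axis=-1), log_det

def coupling_inverse(y, w, b):
    """Exact inverse: recompute scale/shift from the untouched half."""
    y1, y2 = np.split(y, 2, axis=-1)
    h = np.tanh(y1 @ w + b)
    log_s, t = np.split(h, 2, axis=-1)
    return np.concatenate([y1, (y2 - t) * np.exp(-log_s)], axis=-1)

rng = np.random.default_rng(0)
d = 4
w, b = rng.normal(size=(d // 2, d)), rng.normal(size=d)
x = rng.normal(size=(3, d))
y, log_det = coupling_forward(x, w, b)
print("reconstruction error:", np.abs(coupling_inverse(y, w, b) - x).max())
```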


🧵(1/6) Delighted to share our ICML 2025 spotlight paper: the Feynman-Kac Correctors (FKCs) in Diffusion. Picture this: it’s inference time and we want to generate new samples from our diffusion model. But we don’t want to just copy the training data – we may want to sample
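
For a feel of the Feynman-Kac mechanics, a toy particle sketch (my own 1-D rendition under made-up dynamics, not the paper's algorithm): run a population of particles through a stand-in reverse process, accumulate importance weights for a tilted target, and resample when the effective sample size drops.

```python
# Toy Feynman-Kac-style weighted particle sampler (illustration only).

import numpy as np

def r(x):
    return -(x - 2.0) ** 2  # toy tilt: prefer samples near x = 2

rng = np.random.default_rng(0)
n, steps, beta = 512, 50, 1.0
particles = rng.normal(size=n) * 3.0   # stand-in for the initial noise
log_w = np.zeros(n)

for t in range(steps):
    # Stand-in "denoising" step: contract toward the base model's mean (0)
    # and inject a little noise, as a reverse SDE step would.
    particles = 0.97 * particles + 0.1 * rng.normal(size=n)
    # Accumulate a per-step share of the tilt as the Feynman-Kac potential.
    log_w += (beta / steps) * r(particles)
    # Resample when the effective sample size gets small.
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    if 1.0 / (w ** 2).sum() < n / 2:
        particles = particles[rng.choice(n, size=n, p=w)]
        log_w[:] = 0.0

w = np.exp(log_w - log_w.max())
print("weighted mean:", float(np.average(particles, weights=w)))
```

The resampling-by-effective-sample-size step is a standard sequential Monte Carlo device; the weights are what let the population track a tilted target rather than the base model's own distribution.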
