
Anirban Ray
@anirbanray_
PhD Student working on bioimaging inverse problems in @jug_lab at @humantechnopole + @tudresden_de | Prev: computer vision @Hitachi R&D, Tokyo. Bsky: anirbanray
ID: 1034947411154595840
https://humantechnopole.it/en/people/anirban-ray/ 29-08-2018 23:34:26
1.1K Tweets
858 Followers
1.1K Following


Our new preprint on protein sequence design using an all-atom SELFIES representation and discrete diffusion models is out! 🎉 Thanks, Biology+AI Daily, for the great summary 👇


New paper on the generalization of Flow Matching: arxiv.org/abs/2506.03719 🤯 Why does flow matching generalize? Did you know that the flow matching target you're trying to learn **can only generate training points**? With Quentin Bertrand, Anne Gagneux & Rémi Emonet 👇👇👇
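A toy numerical sketch of that claim (my own illustration, not code from the paper): for linear interpolation paths with a standard Gaussian noise prior, the exact marginal flow-matching velocity has a closed form over a finite training set, and integrating that *exact* target ODE lands every sample on a training point. The dataset, step count, and all names below are made up for illustration.

```python
import numpy as np

# Hypothetical 1D training set (illustrative only)
data = np.array([-2.0, 0.5, 3.0])

def exact_velocity(x, t):
    # Marginal flow-matching velocity for linear paths x_t = (1-t)*x0 + t*x1
    # with x0 ~ N(0, 1): a posterior-weighted average of the conditional
    # velocities (x1_i - x) / (1 - t), weights from p_t(x | x1_i).
    var = (1.0 - t) ** 2
    logw = -(x[:, None] - t * data[None, :]) ** 2 / (2 * var)
    w = np.exp(logw - logw.max(axis=1, keepdims=True))  # stabilized softmax
    w /= w.sum(axis=1, keepdims=True)
    x1_hat = w @ data                                   # E[x1 | x_t = x]
    return (x1_hat - x) / (1.0 - t)

rng = np.random.default_rng(0)
x = rng.standard_normal(200)        # samples from the noise prior
dt = 1e-3
for step in range(999):             # Euler-integrate t from 0 to 0.999
    t = step * dt
    x = x + dt * exact_velocity(x, t)

# Every trajectory ends (numerically) on one of the three training points
nearest = np.min(np.abs(x[:, None] - data[None, :]), axis=1)
print(nearest.max())
```

The point of the demo: the memorization is a property of the exact target itself, so any generalization must come from the learned model *not* matching that target perfectly.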

How do task dynamics impact learning in networks with internal dynamics? Excited to share our ICML Oral paper on learning dynamics in linear RNNs! With Clémentine Dominé 🍊, Murray Shanahan & Pedro Mediano: openreview.net/forum?id=KGOcr…

🚫 No dyes. No bleaching. 🔬 Just AI + label-free microscopy = vivid virtually stained images. New in Nature Machine Intelligence: a deep learning model that enables robust virtual staining across microscopes, cell types & conditions. #CZBiohubSF Shalin Mehta explains:

Cells are dynamic, messy, and context dependent. Scaling models across diverse states requires the flexibility to capture heterogeneity. Introducing State, a transformer that predicts perturbation effects by training over sets of cells. Team effort led by the unstoppable Abhinav Adduri.




This is an attention variant worth looking at! It seems to be inspired by prior work of Daniel Murfet that wasn't explored much in practice; I wonder what other gems are just waiting to be scaled up.





Excited to announce that my work during my Google AI internship is now on arXiv! Grateful to my incredible hosts and collaborators: Mauricio Delbracio, Peyman Milanfar, Kangfu Mei, Mojtaba Sahraee-Ardakan, and Ulugbek S. Kamilov. Please check out the paper here: arxiv.org/abs/2507.05604



François Fleuret One way around this might be a system that penalizes methodological novelty: your reward is the number of open problems you solved minus the number of new methods you had to introduce to solve them. I think that could be fun to try as a workshop competition.

1/3) I am biased, but I think this is going to be big! CoVAE: Consistency Training of Variational Autoencoders. We unify consistency models with VAEs to obtain a powerful and elegant generative autoencoder! The brainchild of the brilliant Gianluigi Silvestri (who is looking for jobs!)


From GPT to MoE: I reviewed & compared the main LLMs of 2025 in terms of their architectural design, from DeepSeek-V3 to Kimi K2. Multi-head Latent Attention, sliding window attention, new Post- & Pre-Norm placements, NoPE, shared-expert MoEs, and more... magazine.sebastianraschka.com/p/the-big-llm-…
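Of the ingredients listed above, sliding window attention is the easiest to sketch in a few lines. A minimal NumPy toy (my own illustration, not any specific model's implementation; the window size, shapes, and names are made up): each token attends only to itself and the previous `window - 1` tokens, which caps both the attention span and the KV cache that must be kept around.

```python
import numpy as np

def sliding_window_causal_mask(seq_len, window):
    # Token i may attend to tokens j with  i - window < j <= i,
    # i.e. itself plus the previous (window - 1) positions.
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

def attention(q, k, v, window):
    # Toy single-head scaled dot-product attention with the sliding mask.
    seq_len, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    mask = sliding_window_causal_mask(seq_len, window)
    scores = np.where(mask, scores, -np.inf)   # block out-of-window pairs
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)         # row-wise softmax
    return w @ v

rng = np.random.default_rng(0)
q = rng.standard_normal((6, 4))
k = rng.standard_normal((6, 4))
v = rng.standard_normal((6, 4))
out = attention(q, k, v, window=3)
print(sliding_window_causal_mask(6, 3).astype(int))
```

Full causal attention is the special case `window = seq_len`; shrinking `window` trades global context for memory and compute that scale with the window rather than the full sequence length.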

🚨 Fundamental computer science result! 🤩🤩🤩 Particle methods, a class of algorithms, can compute anything that is computable, even when severely resource-restricted ‼️‼️ Formally proven result. Center for Systems Biology Dresden, IMPRS-CellDevoSys, MPI-CBG Dresden. Now out: fi.episciences.org/16055