Anirban Ray (@anirbanray_) 's Twitter Profile

PhD Student working on bioimaging inverse problems in @jug_lab at @humantechnopole + @tudresden_de | Prev: computer vision @Hitachi R&D, Tokyo. Bsky: anirbanray

Link: https://humantechnopole.it/en/people/anirban-ray/ · Joined: 29-08-2018 23:34:26

1.1K Tweets · 858 Followers · 1.1K Following

John J. Vastola (@johnjvastola) 's Twitter Profile Photo

Diffusion models generalize *really* well: if you give them a million pictures of cats, they'll learn to generate reasonable-looking cats no one's ever seen before. But the weird thing is that no one knows why they work! In a theory paper accepted to #ICLR2025, I dug into this.

Amelia Villegas (@amelvim_) 's Twitter Profile Photo

Our new preprint on protein sequence design using all-atom SELFIES representation and discrete diffusion models is out!🎉Thanks Biology+AI Daily for the great summary👇

HHMI | Janelia (@hhmijanelia) 's Twitter Profile Photo

🖥️A new method developed by Alex Hillsley, Jan Funke & collaborators counts the individual molecules contained in a single spot of light detected by a fluorescence microscope, a quantity important for understanding the underlying biology of a living system janelia.org/news/new-metho…

Mathurin Massias (@mathusmassias) 's Twitter Profile Photo

New paper on the generalization of Flow Matching arxiv.org/abs/2506.03719 🤯 Why does flow matching generalize? Did you know that the flow matching target you're trying to learn **can only generate training points**? with Quentin Bertrand, Anne Gagneux & Rémi Emonet 👇👇👇
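The claim that the flow-matching target can only generate training points can be checked numerically. The sketch below is my own toy illustration (not code from the paper), assuming the common linear interpolation path with a standard Gaussian source: for a finite training set the exact marginal velocity field has a closed form, and integrating it from noise lands on a training point, so any generalization must come from the learned network's smoothing, not from the target itself.

```python
import numpy as np

rng = np.random.default_rng(1)
train = rng.normal(size=(8, 2))  # a tiny stand-in "training set"

def exact_fm_velocity(x, t, train):
    """Exact marginal flow-matching velocity for a finite dataset,
    assuming the linear path x_t = (1 - t) x0 + t x1 with x0 ~ N(0, I):
    p_t(x | x1) = N(t x1, (1 - t)^2 I),  u_t(x | x1) = (x1 - x) / (1 - t)."""
    s = 1.0 - t
    sq = ((x - t * train) ** 2).sum(axis=1)
    w = np.exp(-(sq - sq.min()) / (2.0 * s * s))
    w /= w.sum()  # posterior weights over the training points
    return (w[:, None] * (train - x)).sum(axis=0) / s

# Euler-integrate the *exact* target field from a fresh noise sample.
x = rng.normal(size=2)
ts = np.linspace(0.0, 0.999, 2000)
for t0, t1 in zip(ts[:-1], ts[1:]):
    x = x + (t1 - t0) * exact_fm_velocity(x, t0, train)

# The trajectory lands (numerically) on one of the training points:
d2 = ((train - x) ** 2).sum(axis=1).min()
print(f"squared distance to nearest training point: {d2:.1e}")
```

The posterior weights concentrate on a single training point as t → 1, so the exact field memorizes by construction.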

Alexandra Proca (@a_proca) 's Twitter Profile Photo

How do task dynamics impact learning in networks with internal dynamics? Excited to share our ICML Oral paper on learning dynamics in linear RNNs! with Clémentine Dominé 🍊, Murray Shanahan, Pedro Mediano openreview.net/forum?id=KGOcr…

Chan Zuckerberg Biohub Network (@czbiohub) 's Twitter Profile Photo

🚫 No dyes. No bleaching. 🔬 Just AI + label-free microscopy = vivid virtually stained images. New in Nature Machine Intelligence: a deep learning model that enables robust virtual staining across microscopes, cell types & conditions. #CZBiohubSF Shalin Mehta explains:

Yusuf Roohani (@yusufroohani) 's Twitter Profile Photo

Cells are dynamic, messy and context dependent. Scaling models across diverse states needs flexibility to capture heterogeneity

Introducing State, a transformer that predicts perturbation effects by training over sets of cells

Team effort led by the unstoppable Abhinav Adduri

Yaron Lipman (@lipmanya) 's Twitter Profile Photo

**Transition Matching** is a new iterative generative paradigm using Flow Matching or AR models to transition between intermediate generation states, leading to improved generation quality and speed!

Tony S.F. (@tonysilveti) 's Twitter Profile Photo

This is an attention variant worth looking at! Seems to be inspired by prior work of Daniel Murfet that wasn't explored very much in practice; I wonder what other gems are just waiting to be scaled up.

Sander Dieleman (@sedielem) 's Twitter Profile Photo

Roll call: #ICML2025 diffusion circle 📢 Who's coming? Please tag people that might be interested!

Date/time TBD, probably Thursday afternoon.

(Beware though👇 joining a diffusion circle is at your own risk!🫣)

Giannis Daras (@giannis_daras) 's Twitter Profile Photo

Announcing Ambient Protein Diffusion, a state-of-the-art 17M-parameter generative model for protein structures.

Diversity improves by 91% and designability by 26% over the previous 200M-parameter SOTA model for long proteins.

The trick? Treat low-pLDDT AlphaFold predictions as low-quality data.

Sander Dieleman (@sedielem) 's Twitter Profile Photo

Excellent blog post by Albert Gu about Transformers, SSMs and the role of tokenisation. Well worth a read. goombalab.github.io/blog/2025/trad…

Yuyang Hu (@yuyanghu_666) 's Twitter Profile Photo

Excited to announce my work during Google AI internship is now on arXiv! Grateful for my incredible hosts and collaborators: Mauricio Delbracio, Peyman Milanfar, Kangfu Mei, Mojtaba Sahraee-Ardakan and Ulugbek S. Kamilov. Please check out the paper here: arxiv.org/abs/2507.05604

Floor Eijkelboom (@feijkelboom) 's Twitter Profile Photo

Flow Matching (FM) is one of the hottest ideas in generative AI - and it’s everywhere at #ICML2025. But what is it? And why is it so elegant? 🤔 This thread is an animated, intuitive intro to (Variational) Flow Matching - no dense math required. Let's dive in! 🧵👇
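For readers who prefer code to animations, here is a minimal sketch of how one flow-matching training example is constructed. This is my own toy illustration, assuming the common linear (rectified-flow style) interpolation path; the function name and dataset are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=0.5, size=(256, 2))  # toy 2-D "dataset"

def fm_training_pair(x1, rng):
    """One conditional flow-matching training example, assuming the
    linear path x_t = (1 - t) x0 + t x1 (rectified-flow style)."""
    x0 = rng.normal(size=x1.shape)   # sample from the noise source
    t = rng.uniform()                # random time in [0, 1]
    xt = (1 - t) * x0 + t * x1       # point on the straight-line path
    v_target = x1 - x0               # constant target velocity along it
    return xt, t, v_target

xt, t, v = fm_training_pair(data[0], rng)
# A network v_theta(xt, t) would be regressed onto v with an L2 loss.
# Sanity check: following v from time t to 1 recovers the data point.
print(np.allclose(xt + (1 - t) * v, data[0]))  # → True
```

The elegance the thread highlights is visible even here: the regression target is a simple difference of two samples, with no score functions or SDEs required.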

Keyon Vafa (@keyonv) 's Twitter Profile Photo

Can an AI model predict perfectly and still have a terrible world model? What would that even mean? Our new ICML paper formalizes these questions. One result tells the story: a transformer trained on 10M solar systems nails planetary orbits. But it botches gravitational laws 🧵

Phillip Isola (@phillip_isola) 's Twitter Profile Photo

François Fleuret: One way around this might be to have a system that penalizes methodological novelty: your reward is the open problems you solved minus the new methods you had to introduce to do so. I think that could be fun to try as a workshop competition or something.

Luca Ambrogioni (@lucaamb) 's Twitter Profile Photo

1/3) I am biased, but I think this is going to be big!

CoVAE: Consistency Training of Variational Autoencoders

We unify consistency models with VAEs to obtain a powerful and elegant generative autoencoder!

The brainchild of the brilliant Gianluigi Silvestri (who is looking for jobs!)

Sebastian Raschka (@rasbt) 's Twitter Profile Photo

From GPT to MoE: I reviewed & compared the main LLMs of 2025 in terms of their architectural design, from DeepSeek-V3 to Kimi K2. Multi-head Latent Attention, sliding-window attention, new Post- & Pre-Norm placements, NoPE, shared-expert MoEs, and more... magazine.sebastianraschka.com/p/the-big-llm-…

MOSAIC Group (@mosaicgroup1) 's Twitter Profile Photo

🚨Fundamental computer science result! 🤩🤩🤩 Particle Methods, a class of algorithms, can compute anything that is computable, even when severely resource-restricted ‼️‼️ Formally proven result. Center for Systems Biology Dresden, IMPRS-CellDevoSys, MPI-CBG Dresden. Now out: fi.episciences.org/16055