Nate Gruver (@gruver_nate)'s Twitter Profile
Nate Gruver

@gruver_nate

Machine learning PhD student at NYU, BS & MS @StanfordAILab, Industry @AIatMeta @Waymo

ID: 1119059106231226370

Website: https://ngruver.github.io/ | Joined: 19-04-2019 02:04:16

128 Tweets

638 Followers

276 Following

Taco Cohen (@tacocohen)'s Twitter Profile Photo

🚨 Attention aspiring PhD students: Meta / FAIR is looking for candidates for a joint academic/industry PhD! 🚨

Among others, the CodeGen team is looking for candidates to work on world models for code, discrete search & continuous optimization methods for long-term planning,
Tian Xie (@xie_tian)'s Twitter Profile Photo

Interested in working with a highly collaborative, interdisciplinary team to push the state of the art of generative AI for materials design? Join us as an intern by applying through this link! We are the team behind the MatterGen and MatterSim models from Microsoft Research AI

FAIR Chemistry (@opencatalyst)'s Twitter Profile Photo

Introducing Meta’s Open Materials 2024 (OMat24) Dataset and Models! All under permissive open licenses for commercial and non-commercial use!

Paper: arxiv.org/abs/2410.12771
Dataset: huggingface.co/datasets/fairc…
Models: huggingface.co/fairchem/OMAT24

🧵1/x
Micah Goldblum (@micahgoldblum)'s Twitter Profile Photo

📢I’ll be admitting multiple PhD students this winter to Columbia University 🏙️ in the most exciting city in the world! If you are interested in dissecting modern deep learning systems to probe how they work, advancing AI safety, or automating data science, apply to my group.
Nathan C. Frey (@nc_frey)'s Twitter Profile Photo

LLMs are highly constrained biological sequence optimizers. In new work led by Angelica Chen & Samuel Stanton, we show how to drive an active learning loop for protein design with an LLM.

1/
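The loop described above — a generator proposing sequences, a cheap surrogate ranking them, and lab measurements feeding back into the data — can be sketched generically. All three hooks below (`propose`, `fit_surrogate`, `score_in_lab`) are hypothetical stand-ins, not the paper's actual components:

```python
def active_learning_loop(propose, score_in_lab, fit_surrogate, seed_seqs,
                         rounds=3, batch=4):
    """Generic active-learning sketch for sequence design.

    propose(data)       -> candidate sequences (e.g. sampled from an LLM
                           conditioned on the best sequences found so far)
    fit_surrogate(data) -> a cheap in-silico scoring function
    score_in_lab(seq)   -> a measured fitness value ("wet lab" oracle)
    """
    data = {s: score_in_lab(s) for s in seed_seqs}
    for _ in range(rounds):
        surrogate = fit_surrogate(data)              # refit on all labels so far
        candidates = propose(data)                   # generator proposes new sequences
        ranked = sorted(candidates, key=surrogate, reverse=True)
        for s in ranked[:batch]:                     # acquire the top candidates
            data[s] = score_in_lab(s)                # measure, add to the dataset
    return max(data, key=data.get)                   # best sequence found
```

With an oracle surrogate and a proposer that mutates the current best sequence, the loop climbs toward high-fitness sequences round by round.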
Anuroop Sriram (@anuroopsriram)'s Twitter Profile Photo

I’m excited to share our latest work on generative models for materials called FlowLLM. FlowLLM combines Large Language Models and Riemannian Flow Matching in a simple, yet surprisingly effective way for generating materials. arxiv.org/abs/2410.23405

With Benjamin Kurt Miller, Ricky T. Q. Chen, and Brandon Wood

Pavel Izmailov (@pavel_izmailov)'s Twitter Profile Photo

I am recruiting Ph.D. students for my new lab at New York University! Please apply if you want to work with me on reasoning, reinforcement learning, understanding generalization, and AI for science.

Details on my website: izmailovpavel.github.io. Please spread the word!
Nate Gruver (@gruver_nate)'s Twitter Profile Photo

I’ll be at NeurIPS this week! If you want to chat about generative modeling for scientific data, LLM uncertainty quantification, or anything else I’ve worked on, please reach out!

Alan Amin (@alannawzadamin)'s Twitter Profile Photo

New model trained on a new dataset of nearly a million evolving antibody families at the AIDrugX workshop Sunday at 4:20 pm (#76) #Neurips! Collab between Andrew Gordon Wilson and BigHat Biosciences. Stay tuned for a full thread on how we used the model to optimize antibodies in the lab in the coming days!
Yair Schiff (@schiffyair)'s Twitter Profile Photo

Diffusion models produce high quality outputs and have powerful guidance mechanisms. Recently, discrete diffusion has shown strong language modeling. ❓Can we adapt guidance mechanisms for discrete diffusion models to enable more controllable discrete generation? 1/13 🧵
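One common way to graft guidance onto a discrete (masked) diffusion sampler is to combine the denoiser's distribution with a guidance classifier's log-probabilities in log space when unmasking a position. A minimal numpy sketch of that single step — the interface is an illustrative assumption, not taken from the paper:

```python
import numpy as np

def guided_unmask_step(logits, guidance_scores, temperature=1.0,
                       strength=1.0, rng=None):
    """One guided sampling step for a single masked position.

    logits:          (vocab,) denoiser logits p(x | context) for this position.
    guidance_scores: (vocab,) log p(y | x=v) from a guidance classifier,
                     evaluated per candidate token v (a hypothetical setup).
    Returns the sampled token id.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Combine denoiser and classifier in log space:
    # log p(x|ctx)/T + strength * log p(y|x)
    combined = logits / temperature + strength * guidance_scores
    combined -= combined.max()            # numerical stability before exp
    probs = np.exp(combined)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

Setting `strength=0` recovers unguided sampling; larger values bias generation toward tokens the classifier prefers.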

Nate Gruver (@gruver_nate)'s Twitter Profile Photo

This project has something for everyone to love! Multi-sequence language models, twisted SMC, wet lab experiments. Go check it out.

Sam Rodriques (@sgrodriques)'s Twitter Profile Photo

Still more amazing professors signing on as potential coadvisors for our FutureHouse Fellowship. Kind of insane:
- Debora Marks (Harvard)
- Brian Hie (Stanford/Arc)
- Jonathan Gootenberg and Omar Abudayyeh
- Anima Anandkumar

Dylan Sam (@dylanjsam)'s Twitter Profile Photo

To trust LLMs in deployment (e.g., agentic frameworks or for generating synthetic data), we should predict how well they will perform. Our paper shows that we can do this by simply asking black-box models multiple follow-up questions! w/ Marc Finzi and Zico Kolter

1/ 🧵
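A minimal sketch of the black-box idea: ask the model follow-up questions about its own answer and use the agreement rate as a predicted-performance score. The prompt format and the `model` callable here are illustrative assumptions, not the paper's protocol:

```python
def followup_confidence(model, question, followups):
    """Predict an LLM's reliability on `question` using only black-box access.

    model(prompt) -> text  (any hypothetical text-in/text-out callable)
    Returns (answer, score) where score is the fraction of follow-up
    probes the model answers affirmatively about its own answer.
    """
    answer = model(question)
    agree = 0
    for f in followups:
        probe = f"{question}\nProposed answer: {answer}\n{f} Reply yes or no."
        if model(probe).strip().lower().startswith("yes"):
            agree += 1
    return answer, agree / len(followups)
```

The score can then be thresholded or calibrated against held-out accuracy before trusting the model's outputs downstream.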
Raghav Singhal (@_rk_singhal)'s Twitter Profile Photo

Got a diffusion model?

What if there were a way to:
- Get SOTA text-to-image prompt fidelity, with no extra training!
- Steer continuous and discrete (e.g. text) diffusions
- Beat larger models using less compute
- Outperform fine-tuning
- And keep your stats friends happy !?
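The training-free steering idea can be illustrated with a single SMC-style resampling step: run several diffusion particles in parallel and periodically resample them in proportion to an exponentiated reward, so high-reward trajectories get duplicated and low-reward ones dropped — no gradients or fine-tuning involved. This is a generic sketch, not the paper's exact algorithm:

```python
import numpy as np

def resample_particles(particles, rewards, temperature=1.0, rng=None):
    """One SMC-style resampling step for inference-time steering.

    particles: list of partial diffusion samples (any objects).
    rewards:   per-particle scores from a reward/guidance model.
    Returns a new particle population of the same size, drawn in
    proportion to softmax(rewards / temperature).
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(rewards, dtype=float) / temperature
    w -= w.max()                   # numerical stability
    w = np.exp(w)
    w /= w.sum()                   # normalized importance weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [particles[i] for i in idx]
```

Interleaving this step with ordinary denoising steps concentrates compute on trajectories the reward model likes, for both continuous and discrete samplers.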
Machine learning for protein engineering seminar (@ml4proteins)'s Twitter Profile Photo

Next Tues (3/18) at 4PM ET we will have Alan Amin present "Bayesian Optimization of Antibodies Informed by a Generative Model of Evolving Sequences" Paper: arxiv.org/abs/2412.07763 Sign up on our website for zoom links!

Nate Gruver (@gruver_nate)'s Twitter Profile Photo

This paper is for all the discrete diffusion enthusiasts out there! We shed some light on why simple masking diffusion usually outperforms other noise processes.

Jiaxin Shi (@thjashin)'s Twitter Profile Photo

Autoregressive models are too restrictive by forcing a fixed generation order, while masked diffusion is wasteful as it fits all possible orders. Can our model dynamically decide the next position to generate based on context? Learn more in our ICML paper arxiv.org/abs/2503.05979
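One simple instance of dynamic-order generation is confidence-ordered decoding: at each step, fill whichever empty position the model is currently most confident about, rather than committing to left-to-right or a fixed masking schedule. The `predict` interface below is an illustrative assumption, not the paper's model:

```python
import numpy as np

def any_order_decode(predict, length):
    """Greedy dynamic-order decoding sketch.

    predict(tokens) -> (length, vocab) array of probabilities for every
    position given the partly-filled sequence; -1 marks empty slots.
    Repeatedly fills the most confident empty position with its argmax token.
    """
    tokens = np.full(length, -1)
    while (tokens == -1).any():
        probs = predict(tokens)                  # (length, vocab)
        conf = probs.max(axis=1)                 # best-token prob per slot
        conf[tokens != -1] = -np.inf             # only consider empty slots
        pos = int(conf.argmax())                 # most confident position
        tokens[pos] = int(probs[pos].argmax())   # fill greedily
    return tokens
```

Because `predict` is re-run after every commitment, the order adapts to context: positions that become easy once their neighbors are filled get decoded later.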
Chaitanya K. Joshi @ICLR2025 🇸🇬 (@chaitjo)'s Twitter Profile Photo

Really interesting article by Leash Bio -

I think we (I?) should hold our horses thinking drug discovery will be solved by structure prediction as a foundation model.

Turns out a simple architecture + really scaling binding data = v. v. strong binding affinity predictor.