Saro (@pas_saro)'s Twitter Profile
Saro

@pas_saro

AI4Science @MIT • Former Maths @Cambridge_Uni, @AIatMeta, @GRESEARCHjobs

ID: 1266083760317087746

Joined: 28-05-2020 19:08:10

19 Tweets

409 Followers

150 Following

Gabriele Corso (@gabricorso)'s Twitter Profile Photo

Thrilled to announce Boltz-1, the first open-source and commercially available model to achieve AlphaFold3-level accuracy on biomolecular structure prediction! An exciting collaboration with Jeremy Wohlwend, Saro and an amazing team at MIT and Genesis Therapeutics. A thread!

Brian Naughton (@btnaughton)'s Twitter Profile Photo

OMG is this a complete AF3-like model that's actually open, not weird pretend open like most of the others?? (I believe Ligo is open but no model weights yet?) "training and inference code, model weights, and datasets" are MIT licensed

Brett Adcock (@adcock_brett)'s Twitter Profile Photo

MIT and Genesis Therapeutics released Boltz-1, an open-source model matching AlphaFold3's accuracy in predicting biological structures. Available for researchers worldwide under the MIT license x.com/gabricorso/sta…

Jeremy Wohlwend (@jeremywohlwend)'s Twitter Profile Photo

We’re excited to release another update to the Boltz repo: v0.3.0. This release includes several important features, including our confidence model and low memory mode. Give it a try! github.com/jwohlwend/boltz

Gabriele Corso (@gabricorso)'s Twitter Profile Photo

New major update to Boltz-1 integrating the confidence model and memory-efficient inference! Happy Thanksgiving! 🤗🦃 github.com/jwohlwend/boltz

Gabriele Corso (@gabricorso)'s Twitter Profile Photo

Excited to unveil Boltz-2, our new model capable not only of predicting structures but also binding affinities! Boltz-2 is the first AI model to approach the performance of FEP simulations while being more than 1000x faster! All open-sourced under MIT license! A thread… 🤗🚀

Gabriele Corso (@gabricorso)'s Twitter Profile Photo

Scalable computational binding affinity prediction is a crucial and long-standing scientific challenge. Physics-based methods like FEP are accurate but slow and expensive. Docking is fast but noisy. Deep learning models haven’t matched the reliability of FEP—until now.

Nima Alidoust (@nalidoust)'s Twitter Profile Photo

Hailing from quantum chemistry and physics-based modeling, this feels like a seminal moment, and one that I would not have predicted 10 years ago. Kudos! And kudos for being on the side of openness.

C&EN (Chemical & Engineering News) (@cenmag)'s Twitter Profile Photo

A team led by Regina Barzilay, a computer science professor at Massachusetts Institute of Technology (MIT), has launched Boltz-2, an algorithm that unites protein folding and prediction of small-molecule binding affinity in one package. cen.acs.org/pharmaceutical…

Gabriele Corso (@gabricorso)'s Twitter Profile Photo

For those already using Boltz-2 affinity prediction, we realized that there was a bit of confusion around the different outputs from the models and in what contexts each should be used. We've added more details in the docs. A summary below.
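For anyone consuming these affinity outputs programmatically, here is a minimal Python sketch of the distinction the docs draw. The field names `affinity_pred_value` (described in the repo docs as log10(IC50) in μM, for ranking potency) and `affinity_probability_binary` (a binder/non-binder probability, for hit discovery) are assumptions about the output format here; check them against the current Boltz documentation.

```python
import json

# Hypothetical example of a Boltz-2 affinity output record.
# Field names follow the repo docs but are assumptions, not guarantees.
sample = json.loads("""{
  "affinity_pred_value": -1.2,
  "affinity_probability_binary": 0.85
}""")

# affinity_pred_value is treated here as log10(IC50) in uM:
# lower (more negative) means tighter predicted binding.
ic50_um = 10 ** sample["affinity_pred_value"]

# affinity_probability_binary estimates whether the ligand binds at all;
# use it to triage hits, and the value above to rank potency among hits.
is_likely_binder = sample["affinity_probability_binary"] > 0.5

print(f"predicted IC50 ~ {ic50_um:.3f} uM, likely binder: {is_likely_binder}")
```

The practical split: filter candidates with the binary probability first, then rank the survivors by the continuous affinity value.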

Anthony Costa (@anthonycosta)'s Twitter Profile Photo

Absolutely thrilled to announce the availability of cuEquivariance v0.5 and our contributions to Boltz-2! cuEquivariance v0.5 is a huge release -- now including accelerated triangle attention and multiplication kernels, fundamental to performance of next-gen geometry-aware NNs.

Jeremy Wohlwend (@jeremywohlwend)'s Twitter Profile Photo

The team at @NVIDIA has done such amazing work accelerating Boltz-2 through novel CUDA kernels and deploying Boltz-2 as NVIDIA’s NIM! The kernels are live on the Boltz repo, and you can run the model with 2x training & inference speedup and large memory savings!🧵#cuEquivariance

NVIDIA Healthcare (@nvidiahealth)'s Twitter Profile Photo

Did you know the NVIDIA #cuEquivariance library can now accelerate Triangle Attention and Triangle Multiplication operations? Say goodbye to AI model bottlenecks — get up to 5x speedups in training and inference to build and train bigger models. We’re excited that the next-gen

Recursion (@recursionpharma)'s Twitter Profile Photo

Open science, activated. Since the release of Boltz-2 last Friday – the new open-source protein structure and protein binding affinity model from Massachusetts Institute of Technology (MIT) and Recursion – we’ve been introducing the model to the broader community and the reception has been terrific. 🔹At #GTCParis