Andrew Sellergren (@asellerg) 's Twitter Profile
Andrew Sellergren

@asellerg

Triathlete | Coach | Software Engineer @Google

ID: 53417188

Link: http://sellergren.net · Joined: 03-07-2009 15:43:21

125 Tweets

194 Followers

141 Following

Google AI (@googleai) 's Twitter Profile Photo

Announcing CT Foundation, a new medical imaging embedding tool that accepts a computed tomography (CT) volume as input and returns a small, information-rich numerical embedding that can be used to rapidly train models. Learn more and try it out yourself → goo.gle/4dYkClf
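The workflow the tweet describes — precompute one embedding per CT volume, then rapidly train a small downstream model on top — can be sketched with synthetic stand-in vectors. Everything below is an illustration, not the real CT Foundation API: the 256-dim embedding width, the labels, and the data are all assumptions.

```python
# Sketch of the "embedding -> small downstream model" workflow described
# above. The real endpoint returns one embedding per CT volume; here we use
# synthetic vectors as stand-ins. The embedding width (256) and the labels
# are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
EMBED_DIM = 256  # assumed embedding width, not taken from the announcement

# Stand-ins for per-scan embeddings, with a binary label
# (e.g. finding present / absent).
X = rng.normal(size=(400, EMBED_DIM))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic signal

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Because the embedding is information-rich, a simple linear probe is often
# all the "rapidly trained" downstream model needs to be.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The point of the design is that the expensive part (encoding the CT volume) happens once per scan; the task-specific model on top is cheap to train and retrain.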

Shek Azizi (@azizishekoofeh) 's Twitter Profile Photo

Built on our work from ✨Med-Gemini✨, we're thrilled to unveil ⚡️⚡️CT Foundation⚡️⚡️, a novel AI endpoint that simplifies CT scan analysis!

Colaboratory (@googlecolab) 's Twitter Profile Photo

Our friends at Google Research used Colab to interface with their new models! Check out CT Foundation, a new medical foundation tool that helps you train models on CT scans in Colab. Read and sign up here for access: research.google/blog/taking-me…

David Barrett (@dgtbarrett) 's Twitter Profile Photo

Excited to share our work on ‘Collaboration between clinicians and vision–language models in radiology report generation’, published today in Nature Medicine (1/N). nature.com/articles/s4159…

Shek Azizi (@azizishekoofeh) 's Twitter Profile Photo

This week we announced ✨𝐇𝐞𝐚𝐥𝐭𝐡 𝐀𝐈 𝐃𝐞𝐯𝐞𝐥𝐨𝐩𝐞𝐫 𝐅𝐨𝐮𝐧𝐝𝐚𝐭𝐢𝐨𝐧𝐬 (𝐇𝐀𝐈-𝐃𝐄𝐅)✨, a collection of open-weight models to accelerate the development of health AI for real clinical use cases.

Google Health (@googlehealth) 's Twitter Profile Photo

Exciting news from #GoogleIO! We’re introducing MedGemma, our most capable open model for multimodal medical text and image comprehension built on Gemma 3.

merve (@mervenoyann) 's Twitter Profile Photo

Google released MedGemma on I/O'25 👏
> 4B and 27B instruction fine-tuned vision LMs and a 4B pre-trained vision LM for medicine
> available with transformers from the get-go 🤗
they also released a cool demo for scan reading ⤵️

Google Research (@googleresearch) 's Twitter Profile Photo

Introducing new models for research & development of health applications: MedGemma 27B Multimodal, for complex multimodal & longitudinal EHR interpretation, and MedSigLIP, a lightweight image & text encoder for classification, search, & related tasks. → goo.gle/4kvt6Uk
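A lightweight dual image/text encoder of the kind described here is typically used for zero-shot classification or search by embedding the image and each candidate label into the same space, then ranking labels by similarity. A minimal sketch with stand-in vectors — the labels, the 64-dim width, and the embeddings are assumptions, not outputs of the real MedSigLIP model:

```python
# Sketch of zero-shot classification with a dual image/text encoder:
# embed the image and each candidate label text, then rank labels by
# cosine similarity. All vectors here are synthetic stand-ins.
import numpy as np

def cosine_similarity(a, b):
    # Standard cosine similarity between two 1-D vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical candidate labels for a chest x-ray classification task.
labels = ["normal chest x-ray", "pleural effusion", "cardiomegaly"]

rng = np.random.default_rng(1)
text_embeds = {label: rng.normal(size=64) for label in labels}

# Pretend the image encoder produced a vector close to one label's
# text embedding, as a trained contrastive encoder pair would.
image_embed = text_embeds["pleural effusion"] + 0.1 * rng.normal(size=64)

scores = {label: cosine_similarity(image_embed, vec)
          for label, vec in text_embeds.items()}
best = max(scores, key=scores.get)
print(best)  # label whose text embedding is nearest the image embedding
```

The same ranking-by-similarity step, run over a corpus of image embeddings instead of label embeddings, is what powers the search use case mentioned in the tweet.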

Google Health (@googlehealth) 's Twitter Profile Photo

The MedGemma collection consists of our most capable open models for health AI development. Today we're announcing two new additions to the collection: MedGemma 27B multimodal and MedSigLIP.

Omar Sanseviero (@osanseviero) 's Twitter Profile Photo

🏥Introducing MedGemma, part 2, including:

🔥A 27B multimodal MedGemma
👀MedSigLIP, a lightweight image/text encoder for medical image retrieval/classification
📜A technical report with details

Blog: research.google/blog/medgemma-…
Paper: arxiv.org/abs/2507.05201
Philipp Schmid (@_philschmid) 's Twitter Profile Photo

The Google DeepMind Gemma family of open models continues to grow! Today we launched T5Gemma, a new take on encoder-decoder models, and a new multimodal version of MedGemma with a specialized SigLIP variant for healthcare. 🚀

T5Gemma:
- Adapts Gemma 2 into flexible
Jeff Dean (@jeffdean) 's Twitter Profile Photo

Check out our state-of-the-art open weights MedGemma multimodal model for making sense of longitudinal EHR data as well as medical text and medical imaging data in various modalities (radiology, dermatology, pathology, ophthalmology, etc.)

See the blog post linked below! ⬇️
Ethan Mollick (@emollick) 's Twitter Profile Photo

MedGemma is a really interesting model - very small, multimodal, open, and does quite well in out-of-distribution medical tasks compared to much larger models. 

Would love to see more work thinking about how to improve & deploy this sort of LLM to support medical professionals