Gus (🤖🧠+🐍+🥑🗣️) (@gusthema) 's Twitter Profile
Gus (🤖🧠+🐍+🥑🗣️)

@gusthema

AI Developer Advocate @google

- Gemma 💎
- Machine Learning 🤖🧠
- Google AI ⚙️🧠
- DevRel 🥑🗣️

find me also at: linktr.ee/gusthema

ID: 331788882

Joined: 08-07-2011 18:28:10

13K Tweets

22K Followers

1K Following

Tris Warkentin (@triswarkentin) 's Twitter Profile Photo

Gemma 2 just got even better! 🚀 New Japanese-tuned 2B model AND a $150K Kaggle competition to build Gemma models for every language. Great to have Sundar Pichai here to share the excitement!

Read more: goo.gle/Gemma4Japan 

#GemmaDeveloperDay
Jeff Dean (@jeffdean) 's Twitter Profile Photo

Congrats to my colleagues Demis Hassabis & John Jumper for winning the Nobel Prize in chemistry for their awesome AlphaFold work. AlphaFold has had a dramatic impact across many fields including medicine, drug development & more. And yes, this is my second Nobel-congratulating

Google for Developers (@googledevs) 's Twitter Profile Photo

Give a warm welcome to the @HuggingFace Ovis 1.6 Gemma 2 9B vision language model, one of the most recent additions to the #Gemmaverse. Check out this small but mighty vision model in this Gradio space for yourself → goo.gle/3YktqNB

Gus (🤖🧠+🐍+🥑🗣️) (@gusthema) 's Twitter Profile Photo

This Thursday I'm presenting at the AI Engineer London Meetup event. I'll talk about Gemma models (of course). Come chat and have fun! lu.ma/scxafc58?tk=ym…

GDG London (@gdg_london) 's Twitter Profile Photo

🔊✨ Speaker Announcement ✨🔊 We're excited to welcome Gus Martins (Gus (🤖🧠+🐍+🥑🗣️)), Google AI DevRel Engineer, to #DevFestLondon2024! Gus will be giving a workshop on #Gemma, introducing the world of open models and AI 🧠🪄 Learn how to harness Gemma's capabilities for text

Gus (🤖🧠+🐍+🥑🗣️) (@gusthema) 's Twitter Profile Photo

Very good post!!! Gemma 2 2B showing great results (as expected) always makes me happy! Check out Andre's blog post and code; maybe add more models to the benchmark?

Gus (🤖🧠+🐍+🥑🗣️) (@gusthema) 's Twitter Profile Photo

I'm very excited about this event! I'll present a Gemma workshop to give you a good start and help you with ideas! Come join us in person if you can (you won't regret it!), but if you can't, you can follow the workshop online and participate in the hackathon from wherever you are!

Prince Canuma (@prince_canuma) 's Twitter Profile Photo

mlx-vlm v0.1.4 is here 🎉

New models:
- Google DeepMind Paligemma 2

Up next 🚧:
- Refactoring  

Get started:
> pip install -U mlx-vlm

Please leave us a star and send a PR :)
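
Since the tweet above only shows the install command, here is a minimal usage sketch following the load/generate pattern from mlx-vlm's README. The mlx-community repo id is an assumed example of an MLX-converted PaliGemma 2 checkpoint, and helper signatures have shifted a bit between mlx-vlm releases, so treat this as a starting point rather than the project's canonical snippet.

```python
# Minimal sketch of using mlx-vlm after `pip install -U mlx-vlm`.
# NOTE: the model repo id below is an assumed example; argument details may
# differ between mlx-vlm versions, so check the project README if this errors.
from mlx_vlm import load, generate

model, processor = load("mlx-community/paligemma2-3b-mix-448-8bit")  # assumed repo id

# PaliGemma uses short task prompts ("caption en", "ocr", "detect ...") rather
# than a chat template, so a raw prompt string is passed directly.
output = generate(
    model,
    processor,
    prompt="caption en",
    image="plane.jpg",   # path or URL of the image to describe
    verbose=False,
)
print(output)
```
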
SkalskiP (@skalskip92) 's Twitter Profile Photo

we are so back! finally, PaliGemma2 for LaTeX OCR

- used google/paligemma2-10b-pt-224 checkpoint
- trained on A100 with 40GB VRAM
- 4h of training on 20k training examples
- QLoRA with 4bit quantization

colab with complete fine-tuning code: colab.research.google.com/github/roboflo…
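
For readers who want the shape of that setup without opening the Colab, the sketch below shows a typical 4-bit QLoRA recipe with transformers, bitsandbytes, and peft on the google/paligemma2-10b-pt-224 checkpoint named above. It is not SkalskiP's notebook; the LoRA targets and hyperparameters are illustrative assumptions, and the dataset/training loop is left out.

```python
# Sketch of a QLoRA setup for PaliGemma 2 (not the linked Colab): load the base
# model in 4-bit NF4 so it fits on a single 40 GB A100, then attach LoRA adapters.
import torch
from transformers import (
    AutoProcessor,
    BitsAndBytesConfig,
    PaliGemmaForConditionalGeneration,
)
from peft import LoraConfig, get_peft_model

model_id = "google/paligemma2-10b-pt-224"  # checkpoint named in the tweet (license-gated)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Small trainable adapters on the attention projections; the 4-bit base stays frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, a Trainer/SFT loop over (image, LaTeX string) pairs does the fine-tuning.
```
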
Gus (🤖🧠+🐍+🥑🗣️) (@gusthema) 's Twitter Profile Photo

This is so cool! 💎 + 🎤 = 🤩 I liked how this is focused on the on-device story, and I can see many interesting use cases when you can talk directly to the model.

Matt Silverlock (@elithrar) 's Twitter Profile Photo

Gemma 2.0 27B is solid for code. It's definitely not as fast as Gemini 2.0 Flash, and not as good as 2.0 Pro, but definitely better than Llama 3.3 for my (heavily subjective) test suite and outputs.
Sundar Pichai (@sundarpichai) 's Twitter Profile Photo

Gemma 3 is here! Our new open models are incredibly efficient - the largest 27B model runs on just one H100 GPU. You'd need at least 10x the compute to get similar performance from other models ⬇️
Florent Daudens (@fdaudens) 's Twitter Profile Photo

🤯 Gemma 3's image analysis blew me away! Tested 2 ways to extract airplane registration numbers from photos with 12B model:
1️⃣ Gradio app w/API link (underrated feature IMO) + ZeroGPU infra on Hugging Face in Google Colab. Fast & free.
2️⃣ LM Studio server + local processing
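
The "Gradio app w/API link" route means any public Space can also be driven programmatically. Below is a hedged gradio_client sketch of option 1; the Space id, endpoint name, and argument layout are placeholders, since they depend on the specific Gemma 3 Space (its "Use via API" page shows the real signature).

```python
# Sketch of option 1: calling a Gemma 3 demo Space from Python (e.g. in Colab).
# The Space id, api_name, and argument order are placeholders -- check the
# Space's "Use via API" page for the actual endpoint signature.
from gradio_client import Client, handle_file

client = Client("hf-user/gemma-3-12b-it-demo")    # placeholder Space id

result = client.predict(
    handle_file("airplane.jpg"),                  # image to upload
    "What is this aircraft's registration number?",
    api_name="/predict",                          # placeholder endpoint name
)
print(result)
```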

Google AI Developers (@googleaidevs) 's Twitter Profile Photo

See what's new in Gemma 3, the latest generation of open models from Google with sizes ranging from 1B to 27B parameters. These versatile models run quickly and efficiently, and scale to developer needs to provide project flexibility.
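
For a quick local try of the smallest member of the family, here is a hedged transformers sketch. It assumes the google/gemma-3-1b-it checkpoint id as published on Hugging Face (license acceptance required) and a transformers release recent enough to include Gemma 3 support.

```python
# Sketch: text generation with the 1B instruction-tuned Gemma 3 model via the
# transformers pipeline. Assumes Gemma 3 support in your installed transformers
# version and that you have accepted the model license on Hugging Face.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what makes Gemma 3 different from Gemma 2."}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])  # last turn is the model's reply
```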

Sundar Pichai (@sundarpichai) 's Twitter Profile Photo

Just announced new versions of Gemma 3 – the most capable model to run on just one H100 GPU – can now run on just one *desktop* GPU!

Our Quantization-Aware Training (QAT) method drastically brings down memory use while maintaining high quality. Excited to make Gemma 3 even more
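
The QAT checkpoints ship (among other formats) as int4 GGUF files, which is what makes the single-desktop-GPU claim practical. Below is a hedged llama-cpp-python sketch; the Hugging Face repo id and filename pattern follow the published QAT GGUF naming and may need adjusting, and the weights are license-gated.

```python
# Sketch: running a Gemma 3 QAT int4 checkpoint locally with llama-cpp-python.
# The repo id / filename below follow the published QAT GGUF naming and are
# best-effort assumptions; the weights require accepting the Gemma license.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="google/gemma-3-27b-it-qat-q4_0-gguf",  # assumed repo id
    filename="*q4_0.gguf",                          # glob for the int4 file
    n_gpu_layers=-1,   # offload every layer to the local GPU
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what does QAT buy you?"}]
)
print(out["choices"][0]["message"]["content"])
```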