Tris Warkentin (@triswarkentin)'s Twitter Profile
Tris Warkentin

@triswarkentin

Tech builder, Google DeepMind Product Management Director, and all-around happy guy. Launched Gemma, Bard, Imagen, and many other neat AI things.

ID: 111053528

Joined: 03-02-2010 17:12:46

110 Tweets

938 Followers

128 Following

ollama (@ollama)'s Twitter Profile Photo

🤯 Gemma 3 is available on Ollama! Multimodal is here for Gemma. 

1B (text-only): 
ollama run gemma3:1b

4B: 
ollama run gemma3:4b 

12B: 
ollama run gemma3:12b 

27B: 
ollama run gemma3:27b
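Once a model is pulled, it can also be queried programmatically through Ollama's local HTTP API. A minimal sketch of building a request body for the `/api/generate` endpoint (the endpoint path and default port 11434 are Ollama's documented defaults; the prompt text is just an example):

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for a non-streaming /api/generate call."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

# Example payload for the 4B model pulled above.
body = build_generate_request("gemma3:4b", "Describe this image in one sentence.")
# With the server running, send it via urllib.request.urlopen(Request(OLLAMA_URL, data=body)).
print(json.loads(body)["model"])
```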
Logan Kilpatrick (@officiallogank)'s Twitter Profile Photo

Gemma 3 (our open weight LLM) is here and for the first time available on both Google AI Studio and the Gemini API! It is also:

- Natively multimodal
- Long context (128K tokens)
- Can run on a single H100
👩‍💻 Paige Bailey (@dynamicwebpaige)'s Twitter Profile Photo

🚀 Love that Google is investing not just in proprietary models – but also in small, highly-efficient open models that can be run locally, or on a single accelerator. Everyone deserves to be able to access these capabilities, and to customize models for their business:

NVIDIA AI Developer (@nvidiaaidev)'s Twitter Profile Photo

Thank you to Google DeepMind for letting us be a part of their Gemma Dev Day. 🙌

ICYMI: Gemma 3 is a family of lightweight models with multimodal and multilingual capabilities that are ready to run on all your NVIDIA hardware. Our very own Asier Arranz demoed how to run Gemma on
Omar Sanseviero (@osanseviero)'s Twitter Profile Photo

Introducing the Gemma package, a minimalistic library to use and fine-tune Gemma 🔥

Including docs on:
- Fine-tuning
- Sharding
- LoRA
- PEFT
- Multimodality
- Tokenization

!pip install gemma
gemma-llm.readthedocs.io/en/latest/
Kyle Corbitt (@corbtt)'s Twitter Profile Photo

If you're fine-tuning LLMs, Gemma 3 is the new 👑 and it's not close. Gemma 3 trounces Qwen/Llama models at every size!
 - Gemma 3 4B beats 7B/8B competition
 - Gemma 3 27B matches 70B competition

Vision benchmarks coming soon!
Omar Sanseviero (@osanseviero)'s Twitter Profile Photo

Gemma 3 just got ANOTHER point in LMSYS 🤯

A 27B open-access model with a score of 1341

developers.googleblog.com/en/introducing…
Akshay 🚀 (@akshay_pachaar)'s Twitter Profile Photo

Python library to fine-tune Gemma 3! 🔥

Gemma is a minimal library for using and fine-tuning Gemma. Includes docs on fine-tuning, sharding, LoRA, PEFT, multimodality and tokenization in LLMs.

100% open-source.
Ben Burtenshaw (@ben_burtenshaw)'s Twitter Profile Photo

NEW MODEL: GemmaCoder3-12b is a code reasoning model that improves performance on the LiveCodeBench benchmark by 11 points over the base model. This makes for a useful code model because:

- At 8-bit it runs nicely on 32 GB of RAM
- Gemma 3's 128K context length is great for large
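The "8-bit on 32 GB of RAM" claim can be sanity-checked with back-of-the-envelope arithmetic. The overhead figure below is a rough assumption for KV cache, activations, and runtime, not a measured number:

```python
params = 12e9                      # GemmaCoder3-12b parameter count (approximate)
bytes_per_param = 1                # 8-bit quantization: one byte per weight
weights_gb = params * bytes_per_param / 1e9
overhead_gb = 8                    # rough allowance for KV cache, activations, runtime
total_gb = weights_gb + overhead_gb
print(f"~{weights_gb:.0f} GB weights + ~{overhead_gb} GB overhead = ~{total_gb:.0f} GB")
assert total_gb < 32               # comfortably inside a 32 GB machine
```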

Tris Warkentin (@triswarkentin)'s Twitter Profile Photo

Probably the most fun project I've ever been a part of 🐬 -- this one's been cooking for a while! Very excited to see how it works in actual in-ocean dolphin testing =)

Demis Hassabis (@demishassabis)'s Twitter Profile Photo

Soon we will be able to communicate with many intelligent animal species - can't wait to better understand what my dog🐶is saying! Congrats to the DolphinGemma team building on our Gemma models - the most powerful single-GPU/TPU open-source models out there!

Glenn Cameron Jr (@glenncameronjr)'s Twitter Profile Photo

The leading model runnable on a single cloud H100 GPU now fits on a single home GPU! 🔥 We've optimized Gemma 3 27B with QAT so you can run our best-in-class open model on your desktop RTX 3090 or similar. See how easy it is to try via ollama! 👇

Sundar Pichai (@sundarpichai)'s Twitter Profile Photo

Just announced new versions of Gemma 3 – the most capable model to run on just one H100 GPU – can now run on just one *desktop* GPU!

Our Quantization-Aware Training (QAT) method drastically brings down memory use while maintaining high quality. Excited to make Gemma 3 even more
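Quantization-Aware Training works by simulating low-precision rounding in the forward pass (a "fake quantize" step) while gradients still update full-precision weights, so the model learns weights that survive quantization. A generic sketch of the fake-quantize operation, not Google's actual QAT recipe:

```python
def fake_quantize(w, num_bits=8):
    """Round weights to a symmetric int grid, then map them back to float.

    Training against these rounded values teaches the model to tolerate
    the precision loss it will see at int8 inference time.
    """
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for int8
    scale = max(abs(x) for x in w) / qmax   # per-tensor symmetric scale
    quantized = [round(x / scale) for x in w]
    return [q * scale for q in quantized]

weights = [0.51, -1.27, 0.004, 1.27]
approx = fake_quantize(weights)
# Dequantized weights track the originals to within one quantization step.
step = max(abs(x) for x in weights) / 127
assert all(abs(a - b) <= step for a, b in zip(weights, approx))
```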
Philipp Schmid (@_philschmid)'s Twitter Profile Photo

New lmarena.ai leaderboard update includes new Gemma 3 sizes: 👀

- Gemma-3-27B (1341) ~ Qwen3-235B-A22B (1342) 
- Gemma-3-12B (1321) ~ DeepSeek-V3-685B-37B (1318) 
- Gemma-3-4B (1272) ~ Llama-4-Maverick-17B-128E (1270)

The Google DeepMind Gemma team cooked. 🧑‍🍳 Time to work on
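Arena scores are Elo-style ratings, so a small gap can be read as a near-even head-to-head win probability via the standard Elo expected-score formula (assuming the usual 400-point logistic scale):

```python
def win_probability(rating_a: float, rating_b: float) -> float:
    """Expected score of A against B under the standard Elo logistic model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# A 1-point gap (Gemma-3-27B at 1341 vs Qwen3-235B-A22B at 1342) is a coin flip.
p = win_probability(1341, 1342)
print(f"{p:.3f}")  # ≈ 0.499
```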
Steren (@steren)'s Twitter Profile Photo

🚀Today at #GoogleIO, we're making AI deployments easier by introducing 3 new ways to deploy your apps to Cloud Run:
1⃣ Deploy AI Studio apps to Cloud Run in a click
2⃣ Deploy Gemma 3 to Cloud Run in a click from AI Studio
3⃣ Deploy code to Cloud Run from any MCP client with a
Tris Warkentin (@triswarkentin)'s Twitter Profile Photo

This is my favorite demo of Gemma 3n -- multimodal live video understanding and intelligence, locally on your phone 🤯! A year ago at I/O this was only possible with the peak of foundation models (the Astra demo) -- the progress of small models is incredible