Tom Zahavy (@tzahavy) 's Twitter Profile
Tom Zahavy

@tzahavy

Building agents that discover knowledge and get better at doing so over time. Staff research scientist @GoogleDeepMind

ID: 1070655917920829440

Link: https://tomzahavy.wixsite.com/zahavy
Joined: 06-12-2018 12:27:17

435 Tweets

2.2K Followers

397 Following

Google DeepMind (@googledeepmind) 's Twitter Profile Photo

Last year, we introduced Project Astra: a research prototype exploring capabilities for a universal AI assistant. 🤝 We’ve been making it even better with improved voice output, memory and computer control - so it can be more personalized and proactive. Take a look ↓ #GoogleIO

Jean Tarbouriech (@jean_tarbou) 's Twitter Profile Photo

1000+ words per second! ⚡ We just unleashed Gemini Diffusion at #GoogleIO! 🚀 Awesome being part of the team that took this from a small research project all the way to I/O Google DeepMind 🪐

Marily Nika (@marilynika) 's Twitter Profile Photo

🚨 Top AI announcements on Google I/O (& how to access them) ✨ 𝗔𝗴𝗲𝗻𝘁 𝗠𝗼𝗱𝗲 - combines features like live web browsing, in-depth research and smart integrations with your Google apps, empowering it to manage complex, multi-step tasks from start to finish with minimal

Nenad Tomasev (@weballergy) 's Twitter Profile Photo

Super excited for an incredible set of updates at this year’s Google I/O showcasing not only substantial improvements in the capabilities of the base Gemini models, but, perhaps even more importantly, the progress in the development of agentic capabilities.

Tom Zahavy (@tzahavy) 's Twitter Profile Photo

So many highlights at Google I/O 🧑‍🍳🧑‍🍳🧑‍🍳 🧐AI Search youtu.be/sxUBThVQLjU?si… blog.google/products/searc… 🧙Gemini Diffusion blog.google/technology/goo… 🧑‍💻Jules, your asynchronous coding agent blog.google/technology/goo…

Brendan O'Donoghue (@bodonoghue85) 's Twitter Profile Photo

Excited to share what my team has been working on lately - Gemini diffusion! We bring diffusion to language modeling, yielding more power and blazing speeds! 🚀🚀🚀 Gemini diffusion is especially strong at coding. In this example the model generates at 2000 tokens/sec,

Sebastian Flennerhag (@flennerhag) 's Twitter Profile Photo

Excited to share what we've been cooking - Gemini Diffusion!⚡️ Super proud of the team - cracking text diffusion was never a given but now the door is open for new capabilities and unparalleled speed. You can experience vibe-coding in real time here: deepmind.google/models/gemini-…
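The speed claims are easier to picture with the decoding pattern in mind. Below is a toy Python sketch of the general idea behind text diffusion (all names, including toy_denoise_step, are illustrative; this is not Gemini Diffusion's actual algorithm): instead of emitting one token per step autoregressively, a diffusion decoder starts from a fully masked sequence and refines every position in parallel over a small, fixed number of denoising passes.

    import random

    # Toy illustration only: parallel iterative refinement, the core idea behind
    # text diffusion decoding. Real models predict tokens with a neural denoiser;
    # here a random choice stands in for that prediction.
    VOCAB = ["the", "cat", "sat", "on", "the", "mat"]
    MASK = "<mask>"

    def toy_denoise_step(tokens, step, num_steps):
        """Fill in a growing fraction of the masked positions at each pass."""
        out = list(tokens)
        for i, tok in enumerate(out):
            if tok == MASK and random.random() < (step + 1) / num_steps:
                out[i] = random.choice(VOCAB)  # stand-in for a model prediction
        return out

    def diffusion_decode(length=8, num_steps=4):
        tokens = [MASK] * length        # start from pure "noise": all masks
        for step in range(num_steps):   # a few parallel passes over the whole sequence
            tokens = toy_denoise_step(tokens, step, num_steps)
        return tokens                   # every position is decided after num_steps passes

    print(" ".join(diffusion_decode()))

Because each pass touches the whole sequence at once, the number of model calls scales with the (small) number of denoising steps rather than with the number of tokens, which is one way to think about the tokens-per-second figures quoted above.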

Google DeepMind (@googledeepmind) 's Twitter Profile Photo

We're thrilled to announce SignGemma, our most capable model for translating sign language into spoken text. 🧏 This open model is coming to the Gemma model family later this year, opening up new possibilities for inclusive tech. Share your feedback and interest in early

Ethan Mollick (@emollick) 's Twitter Profile Photo

You can make LLMs more creative by training them on human "creativity signals" (novelty, diversity, surprise, quality). Result: Even small models score higher on all 4 creativity dimensions simultaneously. Looks like we can optimize AI for creativity just like any other metric

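As a rough illustration of what "optimizing for creativity like any other metric" could look like in code (a hypothetical sketch; the actual training objective behind this result is not described in the tweet), the four signals can be collapsed into a scalar reward that weights a fine-tuning loss:

    from dataclasses import dataclass

    @dataclass
    class CreativityScores:
        # Hypothetical per-sample signals, each assumed normalized to [0, 1].
        novelty: float
        diversity: float
        surprise: float
        quality: float

    def creativity_reward(s, weights=(0.25, 0.25, 0.25, 0.25)):
        """Collapse the four signals into one scalar reward."""
        signals = (s.novelty, s.diversity, s.surprise, s.quality)
        return sum(w * x for w, x in zip(weights, signals))

    def reward_weighted_loss(nll_per_sample, scores_per_sample):
        """Reward-weighted likelihood: samples judged more creative contribute
        a larger share of the training signal."""
        total = sum(creativity_reward(s) * nll
                    for nll, s in zip(nll_per_sample, scores_per_sample))
        return total / len(nll_per_sample)

    # Example with made-up numbers:
    scores = [CreativityScores(0.8, 0.6, 0.7, 0.9), CreativityScores(0.2, 0.3, 0.1, 0.8)]
    print(reward_weighted_loss([1.3, 0.9], scores))
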
Francesco Orabona (@bremen79) 's Twitter Profile Photo

As promised, we put the proof we did with Gemini on arXiv: arxiv.org/pdf/2505.20219 It shows that the Polyak stepsize, when used without knowledge of f*, not only may fail to reach the optimum but can even cycle. Gemini failed when prompted directly ("Find an example where the

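For reference (standard background, not quoted from the paper), the Polyak stepsize uses the optimal value f^* directly:

    x_{t+1} = x_t - \gamma_t \nabla f(x_t), \qquad \gamma_t = \frac{f(x_t) - f^*}{\lVert \nabla f(x_t) \rVert^2}

The result above concerns the regime where f^* is unknown and has to be replaced by a guess, which is where the iterates can fail to converge and can cycle.
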
Lean (@leanprover) 's Twitter Profile Photo

🔥 Google DeepMind just dropped their "formal conjectures" project - formalizing statements of math's biggest unsolved mysteries in #LeanLang and #Mathlib! This Google-backed project is a HUGE step toward developing "a much richer dataset of formalized conjectures", valuable

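As an illustration of what a formalized conjecture statement looks like in Lean 4 with Mathlib (a generic example chosen here, not taken from the formal-conjectures repository), the twin prime conjecture can be stated with sorry standing in for the missing proof:

    import Mathlib

    /-- Twin prime conjecture (statement only): there are infinitely many primes p
        such that p + 2 is also prime. -/
    theorem twin_prime_conjecture :
        ∀ n : ℕ, ∃ p, n ≤ p ∧ Nat.Prime p ∧ Nat.Prime (p + 2) := by
      sorry
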
Chubby♨️ (@kimmonismus) 's Twitter Profile Photo

Where it all began: in 2016, AlphaGo defeated grandmaster Lee Sedol and, with "Move 37", showed it could outmaneuver a human so creatively that it amazed everyone. It was the first time an artificial intelligence had made such a high-profile public appearance and amazed the

Ionatan Kuperwajs (@ikuperwajs) 's Twitter Profile Photo

Very excited to be hosting a workshop at #RLDM2025 this Thursday along with Evan Russek! We have an amazing lineup of speakers, with the goal being to discuss how chess can act as a bridge between human cognition and artificial intelligence ♟️ sites.google.com/nyu.edu/chessr…

Machine Learning Street Talk (@mlstreettalk) 's Twitter Profile Photo

MLST is looking for freelance technical researchers/writers to join our team. Do you want to be part of one of the most exciting media companies in the AI space, covering exciting developments but with characteristic MLST-style substance? Email us at freelancers@mlst.ai with
