Vahab Mirrokni (@mirrokni)'s Twitter Profile
Vahab Mirrokni

@mirrokni

Google Fellow, VP | Gemini Data Area Lead | Algorithms, GraphML, ML efficiency, Economics @ Google Research. Former MSR, Amazon, MIT PhD, Sharif Univ. BSc

ID: 50554404

Joined: 25-06-2009 04:53:08

49 Tweets

1.1K Followers

72 Following

Vahab Mirrokni (@mirrokni):

I am very excited about this new paper and, more generally, this line of work. See Ali's detailed description of how adding this memory unit can enable transformers to scale to much longer context lengths.
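The idea referenced in the tweet is that a learned memory unit lets a transformer carry information beyond a single attention window. As a rough illustration only (a generic toy sketch in Python/NumPy under my own assumptions, not the architecture from the paper; process_with_memory and alpha are hypothetical names), carrying a compact memory state across segments of a long input could look like this:

import numpy as np

def process_with_memory(segments, alpha=0.9):
    """Toy illustration: process a long sequence segment by segment while
    carrying a fixed-size memory vector forward, so information can flow
    beyond one segment's attention window. Not the paper's method."""
    feature_dim = segments[0].shape[1]
    memory = np.zeros(feature_dim)          # compact "memory unit" state
    outputs = []
    for seg in segments:                    # seg: (seg_len, feature_dim) token features
        conditioned = seg + memory          # read: condition the segment on the memory
        outputs.append(conditioned)
        # write: update memory as an exponential moving average of segment features
        memory = alpha * memory + (1 - alpha) * conditioned.mean(axis=0)
    return outputs

# Example: a 512-token sequence handled as 4 segments of 128 tokens each.
segments = [np.random.randn(128, 64) for _ in range(4)]
outputs = process_with_memory(segments)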

Vahab Mirrokni (@mirrokni):

A new Gemini thinking version and a new Leaderboard top score. The progress of inference-time tricks on top of foundation models for math and reasoning has been fantastic. Looking forward to an exciting 2025.

Jeff Dean (@jeffdean):

🎉 Introducing Gemini 2.5, our most intelligent model with impressive capabilities in advanced reasoning and coding. Now integrating thinking capabilities, 2.5 Pro Experimental is our most performant Gemini model yet. It's #1 on the lmarena.ai (formerly lmsys.org) leaderboard. 🥇
Jeff Dean (@jeffdean):

Gemini 2.5 Pro is unlocking really exciting use cases at the intersection of coding and math. I remember my excitement when I first learned about the Mandelbrot set as a kid. Check out the code the Gemini 2.5 Pro model wrote given the prompt "p5js to explore a Mandelbrot set"!
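For context, the Mandelbrot set is generated by iterating z_{n+1} = z_n^2 + c from z_0 = 0 and keeping the points c whose orbit stays bounded (escape radius 2). The tweet refers to p5.js code written by the model; the snippet below is only a minimal hand-written Python sketch of that iteration for readers who want to try it, not the model's actual output:

def mandelbrot_escape(c, max_iter=100):
    """Return the iteration at which z_{n+1} = z_n**2 + c escapes |z| > 2,
    or max_iter if it never escapes (c is then treated as inside the set)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

# Coarse ASCII rendering of the set over the region [-2, 1] x [-1.2, 1.2].
width, height = 80, 30
for row in range(height):
    y = -1.2 + 2.4 * row / (height - 1)
    line = ""
    for col in range(width):
        x = -2.0 + 3.0 * col / (width - 1)
        line += "#" if mandelbrot_escape(complex(x, y)) == 100 else " "
    print(line)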

Vahab Mirrokni (@mirrokni):

I am very excited about the release of the experimental Gemini 2.5, which brings to fruition our hard work in Gemini Data over the past several months. We are back on top of LMSys, and the new model improved a lot on coding and complex tasks. Data played a central role. Check it out.

Tianhao Wang ("Jiachen") @ICLR (@jiachenwang97):

Join us at ICLR 2025 on Monday (4/28) in Hall 4 #4 for a full-day workshop on data problems for foundation models! We'll explore research frontiers in understanding and optimizing data across all aspects of foundation model development.

Schedule: datafm.github.io

Vahab Mirrokni (@mirrokni):

One of the big updates from today's Google I/O was this 2.5 Flash model. It's an efficient model and #2 on LMSYS, second only to 2.5 Pro (which is much bigger). Another milestone, the result of so much hard work and long-term collaboration across many teams. Try it out.