Amir Yazdan (@ayazdanb) 's Twitter Profile
Amir Yazdan

@ayazdanb

Research Scientist at Google DeepMind. TPU Co-design, Machine Learning and Accelerator Design. #ComputerArchitecture #ML4Code

ID: 940311804563087360

Website: https://www.ayazdan.com/ · Joined: 11-12-2017 20:06:19

1.1K Tweets

1.1K Followers

915 Following

Sparsity in LLMs Workshop at ICLR 2025 (@sparsellms) 's Twitter Profile Photo

Sparse LLM workshop will run on Sunday with two poster sessions, a mentoring session, 4 spotlight talks, 4 invited talks and a panel session. We'll host an amazing lineup of researchers: Dan Alistarh Vithu Thangarasa Yuandong Tian Amir Yazdan Gintare Karolina Dziugaite Olivia Hsu Pavlo Molchanov Yang Yu

Aran Komatsuzaki (@arankomatsuzaki) 's Twitter Profile Photo

The Leaderboard Illusion - Identifies systematic issues that have resulted in a distorted playing field of Chatbot Arena - Identifies 27 private LLM variants tested by Meta in the lead-up to the Llama-4 release

Pavan Jayasinha (@pavanjayasinha) 's Twitter Profile Photo

I implemented an LLM end-to-end in hardware, and ran it on an FPGA. Zero Python. Zero CUDA. Just pure SystemVerilog. All my progress + everything I learned from 200h of LLM chip design (demo at the end)👇

Stephanie Chan (@scychan_brains) 's Twitter Profile Photo

Some years ago, I got trapped in a Massive Trough of Imposter Syndrome. It took more than a year to dig myself out of it, but the following framework really helped me. It feels a bit vulnerable to share, but I hope it might help a few others too! A short thread πŸ§΅πŸ™‚

Sundar Pichai (@sundarpichai) 's Twitter Profile Photo

AlphaEvolve, our new Gemini-powered coding agent, can help engineers + researchers discover new algorithms and optimizations for open math + computer science problems. We’ve used it to improve the efficiency of our data centers (recovering 0.7% of our fleet-wide compute

Lilian Weng (@lilianweng) 's Twitter Profile Photo

Giving your models more time to think before prediction, like via smart decoding, chain-of-thoughts reasoning, latent thoughts, etc, turns out to be quite effective for unblocking the next level of intelligence. New post is here :) β€œWhy we think”: lilianweng.github.io/posts/2025-05-…

Sakana AI (@sakanaailabs) 's Twitter Profile Photo

Introducing The Darwin GΓΆdel Machine: AI that improves itself by rewriting its own code sakana.ai/dgm The Darwin GΓΆdel Machine (DGM) is a self-improving agent that can modify its own code. Inspired by evolution, we maintain an expanding lineage of agent variants,
