Shervine Amidi (@shervinea)'s Twitter Profile
Shervine Amidi

@shervinea

Ecole Centrale Paris, Stanford University. Twin of @afshinea. New book on Transformers & Large Language Models at superstudy.guide.

ID: 446966253

Link: https://amidi.ai/shervine · Joined: 26-12-2011 10:33:25

258 Tweets

3.3K Followers

85 Following

Afshine Amidi (@afshinea)'s Twitter Profile Photo

Announcing "Super Study Guide: Transformers & Large Language Models", a 250-page book with ~600 colored illustrations covering the concepts of the Stanford workshop that Shervine and I are teaching at this summer.

Victor Sanh (@sanhestpasmoi)'s Twitter Profile Photo

I'm enjoying reading through Afshine Amidi & Shervine Amidi's latest book on Transformers and LLMs. It's a good balance between math formalizations and application of the latest techniques in foundation models! And the graphs are beautiful 🖌️

Jeff Dean (@jeffdean)'s Twitter Profile Photo

This book is really nicely illustrated! Nearly every page has multiple figures/diagrams that help explain the underlying concepts behind transformers & LLMs, including embeddings, attention, LoRA, distillation, quantization, ... Nice work, Afshine Amidi and Shervine Amidi !

Shaw Talebi (@shawhint)'s Twitter Profile Photo

Although there are more learning resources for AI and ML than ever, finding high-quality ones can still be challenging. Here are two I'm working through now 👇

1) Super Study Guide: Transformers & Large Language Models conceptually builds LLMs from the ground up. It also has

Kirk Borne (@kirkdborne)'s Twitter Profile Photo

New book "Super Study Guide: Transformers & Large Language Models" by Afshine Amidi and Shervine Amidi: amzn.to/3SW6YYm

Beautifully presented, excellent content, timely, thorough, educational, and a result of great dedication. Shervine and Afshine tell me that they started

elvis (@omarsar0)'s Twitter Profile Photo

Really enjoying this visual study guide on Transformers and LLMs. It contains a very concise overview of the key concepts in Transformers and LLMs. Topics range from embeddings to attention mechanisms to post-training techniques. Thanks for the great book Shervine Amidi and

Louis-François Bouchard 🎥🤖 (@whats_ai)'s Twitter Profile Photo

If you are looking to have a better understanding of LLMs and how they (really) work, do yourself a favor and get this book by Afshine Amidi and Shervine Amidi. It is amazing! It covers all the important concepts behind the Transformer architecture with deep learning foundations,

Dan | Machine Learning Engineer (@dankornas)'s Twitter Profile Photo

Transformers and LLMs have revolutionized AI by dramatically improving how machines understand and generate language. They excel at capturing context and meaning in text, enabling breakthroughs in applications like chatbots, translation, coding, and creative content generation.

Afshine Amidi (@afshinea)'s Twitter Profile Photo

The Japanese edition of "Super Study Guide: Transformers & Large Language Models" is now available. Across 250 pages with roughly 600 color illustrations, it explains Transformers and LLMs in detail.

Afshine Amidi (@afshinea)'s Twitter Profile Photo

This Spring, my twin brother Shervine Amidi and I will be teaching a new class at Stanford called "Transformers & Large Language Models" (CME 295). The goal of this class is to understand where LLMs come from, how they are trained, and where they are most used. We will also explore

Afshine Amidi (@afshinea)'s Twitter Profile Photo

Announcing the VIP Cheatsheet for Stanford's CME 295 Transformers & Large Language Models class.

Topics include:
- Transformers: self-attention, architecture, variants, optimization techniques (sparse attention, low-rank attention, flash attention)
- LLMs: prompting, finetuning

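To make the first topic concrete, here is a minimal single-head scaled dot-product self-attention sketch in NumPy (no masking and no multi-head splitting; all names are illustrative, not taken from the cheatsheet):

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Single-head scaled dot-product self-attention.

    X: (seq_len, d_model) token embeddings.
    W_q, W_k, W_v: (d_model, d_k) projection matrices.
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token similarities
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # (seq_len, d_k)
```

Each output row is a weighted mix of all value vectors, with weights given by how strongly that token's query matches every key; the variants in the cheatsheet (sparse, low-rank, flash attention) change how this score matrix is computed or materialized.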
Kazunori Sato (@kazunori_279)'s Twitter Profile Photo

The Japanese edition of "Transformers & Large Language Models", the Stanford lecture materials written by Gemini development engineers, translated by Yoshiyuki Nakai 中井喜之. I want to read this cover to cover. There is a book page at superstudy.guide/transformer-da… and a free cheatsheet at github.com/afshinea/stanf…

Sumanth (@sumanth_077)'s Twitter Profile Photo

Stanford University released the best cheatsheets you'll ever find to learn LLMs & Transformers!

These cheatsheets cover:

• Self-attention, Flash Attention, LoRA, SFT
• Mixture of Experts, Distillation, Quantization
• RAG, Agents, LLM-as-a-judge

100% Free and Open Source

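As a concrete illustration of one item in the list, LoRA adapts a frozen linear layer by adding a trainable low-rank update. A minimal NumPy sketch of the forward pass, assuming the standard LoRA parameterization (B initialized to zero, update scaled by alpha/r); the function and variable names are illustrative:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Linear layer with a LoRA adapter: y = x W^T + (alpha/r) * x A^T B^T.

    W: (d_out, d_in) frozen pretrained weight.
    A: (r, d_in) and B: (d_out, r) are the trainable low-rank factors.
    B starts at zero, so the adapted layer initially matches the pretrained one.
    """
    r = A.shape[0]  # rank of the adaptation
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)
```

The appeal is that only A and B (2 * r * d parameters instead of d_out * d_in) are trained, and the update B @ A can be merged back into W after finetuning, adding no inference cost.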