Amit Sangani (@asangani7)'s Twitter Profile
Amit Sangani

@asangani7

Senior Director of Partner Eng @Meta. Prev CTO @MightyText, Ex-@Google. AI/ML, Llama, PyTorch and Gen AI developer & enthusiast.

ID: 121887943

Link: https://llama.meta.com/ · Joined: 10-03-2010 23:00:22

858 Tweets

1.1K Followers

632 Following

Amit Sangani (@asangani7):

Super excited to work with Andrew Ng and Sharon Zhou on this short course, which shows how to improve the accuracy of LLM applications! It uses the open-source Llama 3 8B model 🚀 to build an application that converts text to SQL with a custom schema. #Llama #LLMs
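The text-to-SQL pattern the course teaches can be sketched roughly as follows: embed a custom schema in the prompt so the model only references real tables and columns, then send that prompt to a Llama 3 8B chat endpoint. The schema, helper name, and commented-out client call below are illustrative assumptions, not the course's actual code.

```python
# Sketch of the text-to-SQL prompting pattern: embed a custom table
# schema in the prompt so the model can only reference real columns.
# The schema and helper are illustrative, not the course's code.

CUSTOM_SCHEMA = """
CREATE TABLE players (
    name TEXT,
    team TEXT,
    points_per_game REAL
);
"""

def build_sql_prompt(schema: str, question: str) -> str:
    """Compose a prompt asking the model to answer with SQL only."""
    return (
        "You are a text-to-SQL assistant. Given the schema below, "
        "reply with a single valid SQL query and nothing else.\n\n"
        f"Schema:\n{schema}\n"
        f"Question: {question}\n"
        "SQL:"
    )

prompt = build_sql_prompt(CUSTOM_SCHEMA, "Who scored the most points per game?")
# The prompt would then go to a hosted Llama 3 8B chat endpoint, e.g.
# via an OpenAI-compatible client (model name here is hypothetical):
# response = client.chat.completions.create(
#     model="llama-3-8b-instruct",
#     messages=[{"role": "user", "content": prompt}],
# )
print(prompt)
```

Constraining the reply to "a single valid SQL query and nothing else" makes the output easy to execute directly against the database.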

AI at Meta (@aiatmeta):

📣 Introducing Llama 3.2: Lightweight models for edge devices, vision models and more!

What’s new?
• Llama 3.2 1B & 3B models deliver state-of-the-art capabilities for their class for several on-device use cases — with support for Arm, MediaTek & Qualcomm on day one.
Boz (@boztank):

We just unveiled Orion, our full AR glasses prototype that we’ve been working on for nearly a decade. When we started on this journey, our teams predicted that we had a 10% chance (at best) of success.

This was our project to see if our dream AR glasses—wide FOV display, less
AI at Meta (@aiatmeta):

📣 New course now available on DeepLearning.AI: Introducing Multimodal Llama 3.2! The course covers both Llama 3.1 & Llama 3.2 and includes detailed rundowns on multimodal prompting, custom tool calling, Llama Stack + more. Take the 1h course for free ⬇️ bit.ly/3ZU80Ik

Amit Sangani (@asangani7):

The new Llama 3.2 course is here. In about an hour it will get you started with Llama models for image reasoning, function calling, Llama Stack orchestration, multimodal prompting and more. Excellent ROI :)
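The custom tool-calling flow the course covers can be illustrated with a toy sketch: advertise the available tools to the model as JSON in the system prompt, then parse the JSON call the model emits and dispatch it. The tool names and JSON shape below are made-up assumptions for illustration, not the course's exact format.

```python
# Toy sketch of a custom tool-calling flow: describe tools as JSON,
# then parse the model's JSON reply and run the matching function.
# Tool names and the call format here are illustrative assumptions.
import json

TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

TOOL_SPECS = [
    {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {"city": {"type": "string"}},
    }
]

def system_prompt() -> str:
    """Advertise tools; ask the model to reply with a JSON call."""
    return (
        "You have access to these tools:\n"
        + json.dumps(TOOL_SPECS, indent=2)
        + '\nTo call one, reply only with JSON: '
        '{"name": ..., "arguments": {...}}'
    )

def dispatch(model_reply: str) -> str:
    """Parse the model's JSON tool call and execute the tool."""
    call = json.loads(model_reply)
    return TOOLS[call["name"]](**call["arguments"])

# Simulated model reply (a Llama 3.2 model would produce this text):
reply = '{"name": "get_weather", "arguments": {"city": "Paris"}}'
print(dispatch(reply))  # Sunny in Paris
```

In a real application the tool result would be fed back to the model as another message so it can compose a final answer.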

AI at Meta (@aiatmeta):

Today is the start of a new era of natively multimodal AI innovation.

Today, we’re introducing the first Llama 4 models: Llama 4 Scout and Llama 4 Maverick — our most advanced models yet and the best in their class for multimodality.

Llama 4 Scout
• 17B-active-parameter model
Julian Goldie SEO (@juliangoldieseo):

10 MIND-BLOWING things about Llama 4 that CRUSH the competition! 🤯

1. Outranks ChatGPT 4.0 on LM Arena leaderboard
2. Beats Grok, DeepSeek & Gemini 2.0 in benchmarks
3. Features MASSIVE 10M token context window
4. Completely FREE to use via Open Router
5. Comes in 3

Ethan Mollick (@emollick):

Independent tests of Llama 4 suggest Maverick is a very solid model, but not enough to beat DeepSeek v3 (non-reasoning version), though size-performance trade-offs make exact comparisons hard.

Epoch AI (@epochairesearch):

We evaluated Llama 4 ourselves:

On GPQA Diamond, Maverick and Scout scored 67% and 52%, similar to Meta’s reported 69.8% and 57%.
On MATH Level 5, Maverick and Scout got 73% and 62%.

Maverick is competitive with leading open or low-cost models, and both outperform Llama 3.
Amit Sangani (@asangani7):

🚨 No data? No problem. The new Absolute Zero framework trains LLMs to reason — with zero external data. 🤖 Self-generates tasks 🧠 Solves + verifies via code 🎯 Learns through reward signals 📄 arxiv.org/abs/2505.03335 #AI #LLM #ReinforcementLearning #AutonomousAI
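The propose-solve-verify loop the paper describes can be conveyed with a toy stand-in: tasks are self-generated, attempted, and rewarded by automatic verification through code execution. This sketch swaps the LLM for random arithmetic purely to show the shape of the loop; none of it is the paper's actual implementation.

```python
# Toy illustration of the Absolute Zero loop: the system proposes its
# own tasks, attempts them, and earns reward from code-based
# verification. Random arithmetic stands in for an LLM policy here.
import random

def propose_task(rng):
    """Self-generate a task: an arithmetic expression plus its answer."""
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    return f"{a} + {b}", a + b

def solve(expr, rng):
    """Stand-in 'policy': usually right, sometimes off by one."""
    truth = eval(expr)  # safe here: expr is generated above
    return truth if rng.random() < 0.7 else truth + 1

def reward(prediction, expr):
    """Verify by executing the code form of the task."""
    return 1.0 if prediction == eval(expr) else 0.0

rng = random.Random(0)
total = 0.0
for _ in range(100):
    expr, _answer = propose_task(rng)
    total += reward(solve(expr, rng), expr)
print(f"mean reward: {total / 100:.2f}")
```

In the real framework this scalar reward would drive reinforcement-learning updates to the same model that proposed and solved the tasks.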

Amit Sangani (@asangani7):

🔥 Want to break into AI product management?

This roadmap is gold — everything from basic ML concepts to fine-tuning, RAG, agents, prototyping, and evaluation loops.

📌 Save this. Study this.
#AI #ProductManagement #LLMs #PromptEngineering
Andrew Ng (@andrewyng):

Introducing "Building with Llama 4." This short course was created in collaboration with AI at Meta and is taught by Amit Sangani, Director of Partner Engineering for Meta’s AI team. Meta’s new Llama 4 has added three new models and introduced the Mixture-of-Experts (MoE) architecture to its

Meta for Developers (@metafordevs):

Tired of manual prompt tweaking? Watch the latest Llama tutorial on how to optimize your existing GPT or other LLM prompts for Llama with `llama-prompt-ops`, the open-source Python library! In this video, Partner Engineer Justin Lee demonstrates installation, project setup,
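The kind of transformation the tutorial automates can be illustrated by hand: mapping an existing system+user prompt pair into the Llama 3 instruct chat template. The template tokens below follow the published Llama 3 prompt format; the helper function is our own sketch, not `llama-prompt-ops`' actual API.

```python
# Hand-rolled sketch of the rewrite llama-prompt-ops automates: render
# a generic system+user prompt pair in the Llama 3 instruct chat
# template. The helper is ours, not the library's API.
def to_llama3_prompt(system: str, user: str) -> str:
    """Wrap messages in Llama 3 special tokens, ending at the point
    where the model is expected to generate the assistant turn."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = to_llama3_prompt(
    "You are a concise assistant.",
    "Summarize this release note in one sentence.",
)
print(prompt)
```

Beyond template conversion, the library also measures whether the migrated prompt preserves task quality, which is the part manual tweaking tends to skip.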