Hyping myself up tonight seeing that flagship LLMs from two of the biggest companies by market cap now use Rephrasing the Web (WRAP). Best part? Almost every experiment (all but one final run) was done on small 128M LLMs with 5B tokens, and the recipe now scales beautifully to 10B LLMs on trillions of tokens🚀
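For context, the core WRAP idea is simple: paraphrase noisy web documents with an instruction-tuned LLM into cleaner styles (e.g., encyclopedic or Q&A), then pretrain on the rephrased text mixed with the raw text. Here is a minimal sketch of that pipeline; `call_llm` is a hypothetical stand-in for whatever chat-completion API you use, and the prompt wordings are illustrative, not the paper's exact prompts.

```python
# Sketch of the WRAP (Rephrasing the Web) data pipeline.
# Assumption: `call_llm` is any callable that maps a prompt string
# to the model's paraphrase of the document embedded in it.

REPHRASE_STYLES = {
    "wikipedia": "Rephrase the following text in a clear, encyclopedic style:",
    "qa": "Convert the following text into a question-and-answer format:",
}

def build_rephrase_prompt(document: str, style: str) -> str:
    """Build the paraphrasing prompt for one web document."""
    return f"{REPHRASE_STYLES[style]}\n\n{document}"

def rephrase_corpus(docs, call_llm, style="wikipedia", mix_raw=True):
    """Return a pretraining mix of rephrased (and optionally raw) documents."""
    out = []
    for doc in docs:
        out.append(call_llm(build_rephrase_prompt(doc, style)))
        if mix_raw:  # WRAP trains on real + synthetic tokens together
            out.append(doc)
    return out
```

Mixing the raw text back in matters: training on synthetic paraphrases alone loses the distributional diversity of real web data.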
It's official 🇮🇳
We're proud to announce that Sarvam has been selected by the Government of India under the IndiaAI Mission to build India's sovereign Large Language Model.
Building India's sovereign model from the ground up is a crucial step toward Atmanirbhar Bharat.
Today we introduce Sarvam-M, a 24B open-weights hybrid model built on top of Mistral Small.
Sarvam-M sets a new benchmark for models of its size across a range of Indian-language, math, and programming tasks.
Here is a detailed technical blog on how we customize