New Delhi: In a major step for India’s push to compete in advanced AI, homegrown startup Sarvam has launched a foundational 105-billion-parameter large language model (LLM), along with a range of tools and systems aimed at commercialisation.

The company said the model performs on par with other open and closed models of similar size, including in agentic and tool-calling capabilities.

Co-founder Pratyush Kumar said the model outperformed the 671-billion-parameter DeepSeek-R1 model released last year and Google’s Gemini 2.5 Flash on Indian-language technical benchmarks. Kumar said this demonstrates that India can build state-of-the-art AI models from scratch.

He added that Sarvam chose the model’s size carefully based on real-world use cases and expects performance to improve further over time.

In LLMs, “parameters” are the numerical weights that encode a model’s learned knowledge and capabilities. For comparison, leading global models from companies such as OpenAI are reportedly built with trillions of parameters.

Sarvam was among the first AI startups selected under the IndiaAI Mission last April to develop sovereign AI models. The company received government support, including access to GPUs for training.

