
Will test DeepSeek ‘on our servers’, while developing own AI model

In a significant move to bolster its artificial intelligence (AI) capabilities, India has decided to host China’s open-source AI model, DeepSeek, on local servers, even as it accelerates efforts to develop its own large language model (LLM) within the next ten months. The two-pronged approach was announced by Ashwini Vaishnaw, Minister of Electronics and Information Technology, on Thursday, just days after China’s DeepSeek unveiled its low-cost AI model that has shaken up the tech world.

“The good thing is that DeepSeek is an open-source model, and we are very soon going to host DeepSeek on Indian servers, the way we have hosted Llama (a large language model/generative AI model developed by Meta AI) on Indian servers. Everything that is open source can be hosted on our servers so that data privacy parameters can be tested,” the minister said, assuaging concerns about potential misuse of data on the new Chinese AI platform.

However, India’s long-term goal remains the development of its own LLM. This effort will be powered by the newly established IndiaAI Compute Facility, which has secured 18,600 graphics processing units (GPUs), including 15,000 high-end processors. GPUs are critical for the development of advanced AI models.

“We will have a world-class foundational model in the next few months. DeepSeek AI was trained on 2,000 GPUs, ChatGPT was trained on 25,000 GPUs, and we now have 15,000 high-end GPUs available. India now has a robust compute facility that will support our AI ambitions,” Vaishnaw said while unveiling a new framework for developing the AI model.

Affordable AI Compute Facility

The GPUs will be made available at a fraction of global cost benchmarks, Vaishnaw assured, adding that the compute facility will be “the most affordable,” coming in at significantly less than $1 per hour with a 40 percent government subsidy.


“The average rate will be ₹115.85 per GPU hour, compared to the global benchmark of $2.5-$3 per GPU hour for accessing low-end GPUs. For high-end compute, the average rate is ₹150 per GPU hour, and we will be providing a 40 percent subsidy on this compute price. This means that for students, researchers, startups, and academia, compute will be available for less than ₹100 per GPU hour,” Vaishnaw added.
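To make the pricing concrete, here is a minimal sketch of the subsidy arithmetic described in the quote above. It assumes an exchange rate of roughly ₹86 to the US dollar at the time of the announcement and that the 40 per cent subsidy applies directly to the quoted rates; neither assumption is stated in the article.

```python
# Sketch of the announced GPU-hour pricing after the 40% subsidy.
# Assumptions (not from the article): ~₹86 per US dollar; subsidy applies
# directly to the quoted per-GPU-hour rates.

SUBSIDY = 0.40
INR_PER_USD = 86.0  # assumed exchange rate

def subsidised_rate(rate_inr_per_gpu_hour: float) -> tuple[float, float]:
    """Return the post-subsidy rate in INR and its approximate USD equivalent."""
    inr = rate_inr_per_gpu_hour * (1 - SUBSIDY)
    return inr, inr / INR_PER_USD

for label, rate in [("average", 115.85), ("high-end", 150.0)]:
    inr, usd = subsidised_rate(rate)
    print(f"{label}: ₹{inr:.2f}/GPU-hour (~${usd:.2f}/GPU-hour)")
```

On these assumptions, the subsidised average rate works out to about ₹69.5 (roughly $0.81) per GPU hour and the high-end rate to ₹90 (roughly $1.05), consistent with the ‘less than ₹100 per GPU hour’ figure quoted for students, researchers, startups, and academia.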

The Centre is also open to offering direct funding through milestone-based grants for model development, as well as equity-based funding, wherein IndiaAI may take an equity stake in the company selected to build the LLM.

AI safety: A key priority

To address concerns around AI safety, India is establishing eight institutions on a hub-and-spoke model, in which multiple institutions collaborate to develop tools, frameworks, and processes for AI safety. The approved projects cover machine unlearning (IIT-Jodhpur), synthetic data generation (IIT-Roorkee), an AI bias mitigation strategy, an explainable AI framework (Defence Institute of Advanced Technology, Pune), a privacy-enhancing strategy (IIT-Delhi, IIIT-Delhi and TEC), an AI ethical certification framework, an AI algorithm auditing tool, and an AI governance testing framework (IIT-Jodhpur, IIT-Roorkee, IIT-Delhi). These initiatives aim to ensure that safety concerns are addressed in parallel as AI develops.

“India will definitely play a major role because we have very strong software capabilities and a strong innovation ecosystem,” Vaishnaw added.


