
Infosys co-founder Nandan Nilekani has an important message for AI companies in India: ‘Don’t build one more…’

Infosys chairman and co-founder Nandan Nilekani has a suggestion for Indian AI companies: prioritise building practical AI applications rather than competing in the crowded field of new large language models (LLMs). He wants India to become the global leader in AI use cases, moulding the technology to solve real-world problems and drive innovation across sectors. He also urged Indian companies to focus on building the infrastructure needed to collect the right data.

Nandan Nilekani’s suggestion to Indian AI companies

At Meta’s recent Build with AI summit in Bengaluru, Nilekani said: “Our goal should not be to build one more LLM. Let the big boys in the (Silicon) Valley do it, spending billions of dollars. We will use it to create synthetic data, build small language models quickly, and train them using appropriate data.
It’s all about data. How do we create the infrastructure for collecting the right data and make India the use case capital of AI globally where we actually deploy, add scale and speed in a frugal manner. Let other people build LLMs, we will make sure it works for people.”
Earlier, in May, he had shared a similar vision of putting AI in the hands of Indian users at an event hosted by People+AI.
He said: “The Indian path in AI is different. We are not in the arms race to build the next LLM, let people with capital, let people who want to peddle chips do all that stuff… We are here to make a difference.”

Nandan Nilekani on Meta’s Llama models

Nilekani also praised Meta for making its collection of foundational large language models (LLMs) open source, calling it a “game changer for us in India and something we need to take full advantage of.”
Last month, Meta updated the licensing terms for its Llama AI models, allowing developers to use synthetic data generated by Llama to create and train other models.
In September, Meta also released the Llama 3.2 model with multimodal capabilities, enabling it to understand text and images simultaneously. This model comes in four variants, offering developers flexibility for different use cases.


