
India is a key region for AI deployment: IBM India MD Sandip Patel

Global technology company IBM sees India as one of its key regions for AI deployment and talent availability, says Sandip Patel, Managing Director of IBM India.

“India is where you can scale AI for the right kind of business impact,” said Patel at the company’s flagship event, IBM Think, on Wednesday.

In India, IBM sees AI applications across enterprises, governments, and even the startup community. As an example, Patel highlighted work in the BFSI space, where financial institutions are leveraging AI to achieve productivity gains and cost savings.


At the event, IBM also announced a partnership with Star Union Dai-ichi Life Insurance (SUD Life) and QuantumStreet AI to design an investment product aimed at delivering superior performance in the large market capitalisation (large cap) space.

The partners plan to leverage the capabilities of the IBM watsonx platform to generate insights that will form the core of the investment product and differentiate the offering.

Patel emphasised a growing trend among Indian enterprises to organise their data as they recognise the value of analytics powered by AI.

While IBM has been focused on hybrid cloud, AI, and automation technologies, it faces challenges in bringing all three elements together. Patel pointed to cloud infrastructure spread across multiple locations alongside an explosion of data, a shortage of required skills, low levels of automation, and the question of delivering solutions securely.

On the availability of AI skills in India, Patel said, “In India, we have a wealth of talent, which has inherent skills that can be easily trained.” He added that many professionals from non-technical backgrounds can be trained for technical roles, and that a growing number of enterprises in India are training their employees in AI.

In the rapidly changing world of AI and generative AI (Gen AI), there is a growing debate over the use of large language models (LLMs) versus small language models (SLMs).

IBM Asia-Pacific General Manager Hans Dekkers said, “When you look at AI today, you look at the amount of energy alone that goes into creating a large model. It’s pretty inefficient and unsustainable.”

He opined that smaller AI models would be more useful to enterprises, as they would be more energy-efficient and purpose-driven.


