The government’s push towards building India’s domestic artificial intelligence (AI) models has begun with making graphics processing units (GPUs) available to start-ups and enterprises, but stakeholders and the industry believe that a whole research ecosystem needs to develop before India can build a full stack.
Recently, the Centre announced the empanelment of 18,693 GPUs under the common computing facility. The empanelled firms, which include CMS Computers, Jio Platforms, Tata Communications, E2E Networks, Yotta Data Services and others, have offered various AI compute units such as Intel Gaudi 2, AMD MI300X, NVIDIA H100/H200, AWS Inferentia2 and AWS Trainium.
A lot more needs to be done
However, the stakeholders that businessline spoke to said that a lot more needs to be done.
“It’s encouraging to see the Indian government backing the development of a foundational AI model and investing across the full AI stack – infrastructure, foundational models, and application development. Given AI’s transformative potential, building models attuned to the Indian context is desirable. However, if we truly aim to build the full stack, we must look beyond compute infrastructure and systematically unlock our research ecosystem,” Rohit Kumar, Founding Partner at The Quantum Hub, said.
He said India’s R&D environment remains “constrained”, with both public and private investment significantly trailing countries like the US and China, both in absolute terms and as a share of GDP.
‘We produce fewer PhDs’
“We produce fewer PhDs and academic papers than we should, given our size and ambitions, and our top institutions operate under far greater constraints than those in competing economies. In my view, this is the real bottleneck. If India wants to be at the cutting edge of AI, the starting point must be strengthening our research institutions and significantly scaling up funding for R&D,” Kumar added.
Vijay Shekhar Sharma, CEO of Paytm, said that a core reasoning model, or foundational model, requires a lot of training, and that subsidising training costs is fundamentally a good approach to accelerate its development.
Intelligent move
“It is equivalent to, say, tax discount… so this is a very sharp and wise move (by the government) that in case you want to build something led by AI where it can be a foundational/ sovereign model, here is a compute that we want to offer… This is an intelligent move by the government,” he said.
Kunal Bahl, Co-founder of Titan Capital and Snapdeal, said innovation is getting cheaper and nimbler.
“It is too early to say, but I think the hope is that the costs go down,” he said.
Tushar Vashisht, Co-founder and CEO of Bengaluru-based Healthify, said great products and technologies are not built with subsidies. “They are built with talent and with quality. So I think as long as we can have good talent quality focus, that’s what we need to do, right now,” he said.
According to Akshay Gugnani, Founder of Expertia AI, cost will continue to be a strong factor, but as large language models (LLMs) reach peak performance, they will become increasingly commoditised.
“As they get commoditised, the prices will drop sharply. In the last 12 months alone, OpenAI token costs have gone down 90 per cent. With more competitors and open-sourced models, the prices will only drop,” he said.
He added that the government’s initiative reinforces the core research funding India needs to build deep-tech. Whether as grants or GPU support, it will give researchers and developers access to the resources and capital to make bigger bets.