India advised major tech companies to seek government approval before they deploy new AI models.
What’s new: India’s Ministry of Electronics and Information Technology (MeitY) issued a nonbinding “advisory” asking technology firms, including Google, Meta, and OpenAI, to seek government permission before releasing AI models their developers consider unreliable or still in testing.
How it works: The notice asks platforms and other intermediaries to label AI-generated media clearly and to warn users that AI systems may output inaccurate information. It also says models should avoid bias and discrimination and should not undermine the integrity of the electoral process.
- Although the notice appears to apply to AI broadly, Rajeev Chandrasekhar, India’s Minister of State for Electronics and Information Technology, clarified that it applies to large, “significant” platforms, not to startups. He did not define “significant.” IT Minister Ashwini Vaishnaw added that the request is aimed at AI for social media, not for fields like agriculture or healthcare.
- The notice’s legal implications are ambiguous: it is not binding, but Chandrasekhar said the new rules signal “the future of regulation” in India.
- Firms are asked to comply immediately and submit reports within 15 days of the notice’s March 1 publication date. Those that comply will avoid lawsuits from consumers, Chandrasekhar wrote.
Behind the news: India has regulated AI with a light touch, but it appears to be reconsidering in light of the growing role of AI-generated campaign ads in its upcoming elections.
- Recently, when a prompt asked whether a particular Indian leader “is fascist,” Google’s Gemini responded that the leader had been “accused of implementing policies some experts have characterized as fascist.” The output prompted Indian officials to condemn Gemini as unreliable and potentially illegal. Google tweaked the model, pointing out that it is experimental and not entirely reliable.
- In February, Chandrasekhar said the government would publish a framework to regulate AI by summer. The framework, which has been in development since at least May 2023, is intended to establish a comprehensive list of harms and penalties related to misuse of AI.
- In November and December 2023, MeitY issued similar notices to social media companies, advising them to crack down on deepfake videos, images, and audio circulating on their platforms.
Why it matters: National governments worldwide, in formulating their responses to the rapid evolution of AI, must balance the benefits of innovation against fears of disruptive technology. Fear seems to weigh heavily in India’s new policy. While the policy’s scope is narrower than it first appeared, it remains unclear what constitutes a significant platform, how to certify an AI model as reliable, whether services like ChatGPT are considered social platforms that would be affected, and how violations might be punished.
We’re thinking: While combating misinformation is important, forcing developers to obtain government approval to release new models will hold back valuable innovations. We urge governments to continue to develop regulations that guard against harms posed by specific applications while allowing general-purpose technology to advance and disseminate rapidly.