Regulation · Saturday, December 20, 2025

India reiterates strict controls on high‑risk AI under new governance rules

Source: Lokmat Times (IANS)
Original headline: “Won’t allow unrestricted deployment of high-risk AI systems: Minister” (www.lokmattimes.com)

TL;DR


On December 20, 2025, India’s Minister of State for Electronics and IT said the country’s AI governance guidelines prohibit unrestricted deployment of high‑risk AI systems. The framework, released in November, relies on existing sectoral regulators and laws while emphasizing a risk‑based, evidence‑led approach.

About this summary

This article aggregates reporting from a single news source. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis adds editorial context on the implications for AGI development.

Race to AGI Analysis

India is quietly hardening its AI policy stance without reaching for heavy‑handed new institutions. The AI Governance Guidelines, reaffirmed here by Minister of State Jitin Prasada, bar the unrestricted deployment of high‑risk AI systems but lean on existing sectoral regulators and statutes (the IT Act, the data protection law, and sector‑specific rules) rather than creating a new AI super‑regulator.([lokmattimes.com](https://www.lokmattimes.com/technology/wont-allow-unrestricted-deployment-of-high-risk-ai-systems-minister-1/)) That signals continuity with India’s broader digital playbook: aggressive adoption, with guardrails layered onto existing legal machinery.

For the race to AGI, India’s approach matters because of its sheer scale as a user base and data source. By insisting on a risk‑based, “techno‑legal” regime and funding domestic R&D in areas like deepfake detection and privacy‑preserving tools, New Delhi is trying to stay attractive to global AI providers while nudging them toward safer deployments and local capacity building.([lokmattimes.com](https://www.lokmattimes.com/technology/wont-allow-unrestricted-deployment-of-high-risk-ai-systems-minister-1/)) This is less Brussels‑style command‑and‑control and more a calibrated middle path between the US and EU models.

If India can maintain this balance, it becomes a key proving ground for how frontier models behave in a diverse, low‑resource, multilingual environment under light but real constraints—data that will be invaluable to anyone serious about robust, globally deployed AGI.

Who Should Care

Investors · Researchers · Engineers · Policymakers