On December 19, 2025, India’s Law Ministry told the Lok Sabha that there are still no formal policies or binding guidelines for the use of artificial intelligence in courts. The government confirmed that AI tools like LegRAA and Digital Courts 2.1 are being piloted under the e‑Courts Project Phase III, but only within defined experimental scopes.
India’s judiciary is quietly experimenting with AI while deliberately holding back on formal policy. That tension captures where many large democracies now sit: they don’t want to miss the efficiency gains from AI-assisted research, transcription, and translation, but they also know that pairing opaque models with due‑process rights is a combustible mix. By keeping tools like LegRAA and Digital Courts 2.1 in a constrained pilot phase, the system is buying time to work through bias, explainability, and accountability questions before AI touches verdicts or sentencing.
For the race to AGI, this is less about raw capability and more about deployment legitimacy. AGI-scale systems will only be allowed into high‑stakes domains like courts if governments build credible guardrails now, at the narrow‑AI stage. India’s decision not to rush into a blanket AI‑in‑courts framework suggests a preference for incrementalism and experimentation over sweeping mandates or bans. That approach may slow headline‑grabbing deployments, but it could produce more robust governance patterns that other countries copy. If India, with its vast and linguistically diverse caseload, can design AI support systems that improve access to justice without undermining fairness, it will set a powerful precedent for how AGI-era tools might be trusted in public institutions.

