On December 26, 2025, the Bombay High Court ordered social media platforms and websites to immediately remove AI‑generated and morphed images and videos of actor Shilpa Shetty. The court called the deepfake content “extremely disturbing and shocking” and said it violated her fundamental right to privacy and dignity.
This ruling is one of the clearest judicial signals yet that courts will treat AI‑generated deepfakes as a direct violation of personality and privacy rights, not merely as an online nuisance. By forcing platforms to promptly remove AI‑morphed images of a major celebrity and explicitly calling the content “extremely disturbing,” the Bombay High Court is setting practical expectations for platforms, intermediaries and AI tool providers on how they must respond when generative models are used to inflict sexualised or reputational harm.
For the AI ecosystem, the case is less about model capability and more about liability and enforcement. As image and video generation quality keeps improving, convincing, abusive deepfakes move from a fringe risk to a mainstream one. India is a huge market for both social platforms and AI tools, so clear court orders there are likely to accelerate adoption of detection pipelines, watermarking, provenance metadata and stricter content‑moderation SLAs tied specifically to AI‑generated media. The ruling also strengthens the argument that individuals have enforceable control over how AI systems use their likeness, which will matter as AGI‑class systems get better at mimicking specific people’s appearance, voice and mannerisms at scale.
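To make the compliance angle concrete, here is a minimal Python sketch of how a moderation pipeline might tie removal deadlines to AI‑provenance signals. Everything in it is illustrative rather than drawn from the reporting or any real platform: the SLA table, the `triage_upload` function and the `scan_for_c2pa_markers` heuristic are all hypothetical, and the byte scan is a naive stand‑in for properly parsing and cryptographically verifying a C2PA manifest (which is embedded in JUMBF boxes in real credentialed media).

```python
# Hypothetical sketch: routing flagged uploads into takedown queues with
# removal deadlines keyed to AI-provenance signals. Not a real platform API.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative SLA tiers: AI-flagged impersonation of a real person gets the
# tightest removal window, mirroring the court's "immediate removal" framing.
SLA_HOURS = {
    "ai_impersonation": 2,   # AI-generated content depicting an identified person
    "ai_unlabeled": 24,      # likely AI-generated, no identified victim
    "default": 72,           # everything else in the normal review queue
}

@dataclass
class ProvenanceSignal:
    has_c2pa_manifest: bool   # credential metadata embedded at creation time
    classifier_score: float   # output of a separate deepfake detector, 0..1

def scan_for_c2pa_markers(image_bytes: bytes) -> bool:
    """Naive heuristic: look for JUMBF/C2PA byte markers in the file.

    A real pipeline would parse and cryptographically verify the manifest;
    this scan only shows where that check slots into the moderation flow.
    """
    return b"jumb" in image_bytes or b"c2pa" in image_bytes.lower()

def triage_upload(image_bytes: bytes, classifier_score: float,
                  depicts_identified_person: bool) -> tuple[str, datetime]:
    """Return (queue_name, removal_deadline) for a flagged upload."""
    signal = ProvenanceSignal(
        has_c2pa_manifest=scan_for_c2pa_markers(image_bytes),
        classifier_score=classifier_score,
    )
    likely_ai = signal.has_c2pa_manifest or signal.classifier_score >= 0.9
    if likely_ai and depicts_identified_person:
        queue = "ai_impersonation"
    elif likely_ai:
        queue = "ai_unlabeled"
    else:
        queue = "default"
    deadline = datetime.now(timezone.utc) + timedelta(hours=SLA_HOURS[queue])
    return queue, deadline

if __name__ == "__main__":
    fake_upload = b"\xff\xd8...jumb...c2pa.manifest..."  # stand-in image bytes
    queue, deadline = triage_upload(fake_upload, classifier_score=0.95,
                                    depicts_identified_person=True)
    print(f"routed to {queue!r}, remove by {deadline.isoformat()}")
```

The sketch deliberately combines two independent signals, because provenance metadata only helps when it is present: a stripped or never‑embedded manifest proves nothing, so a detector score has to backstop the metadata check before anything is routed to the fast takedown queue.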


