On December 27, 2025, the Calcutta High Court said IndiaMART’s exclusion from ChatGPT search results appeared to be a prima facie case of “selective discrimination.” The court declined interim relief but scheduled a full hearing for January 13, 2026, after IndiaMART alleged OpenAI had deliberately blocked its listings while surfacing rival marketplaces.
This article aggregates reporting from one news source. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
This case captures a tension that will only grow as foundation models become default interfaces to the web: who gets to be visible inside the AI’s world model, and on what terms? IndiaMART is arguing that being delisted inside ChatGPT is not a neutral ranking decision but an economically harmful act of discrimination against an Indian platform, potentially rooted in U.S. trade‑complaint politics. If courts begin treating AI assistants as gatekeepers analogous to search engines or app stores, systematic exclusion of certain domains could be framed as an unfair trade practice rather than just model behavior. ([timesofindia.indiatimes.com](https://timesofindia.indiatimes.com/city/kolkata/selective-discrimination-against-indiamart-hc-tells-openai/articleshow/126195293.cms))
For OpenAI and its peers, this raises the bar on transparency and due process around both training data and retrieval filters. As AGI‑class systems come closer to being general discovery tools for everything—products, jobs, news, services—power over indexing will be politically contested in every major market. India’s courts weighing in at this level of granularity is an early sign that countries won’t simply accept U.S. AI companies’ default curation choices, especially where local champions feel disadvantaged. Expect more litigation over ranking, blocking, and “shadow banning” inside AI assistants, with direct implications for how these systems are architected and governed.