On March 8, 2026, the Guardian reported that UK support organizations have seen a sustained rise in reports of organized ritual abuse, with many victims saying ChatGPT helped them recognize and disclose their experiences. Child protection experts and police are rolling out new training as AI tools increasingly act as informal therapeutic aids.
This article aggregates reporting from a single news source. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis provides editorial context on the implications for AGI development.
This story shows that powerful language models are already functioning as de facto mental‑health tools, regardless of whether they were designed or regulated as such. Survivors of extremely stigmatized abuse are turning to ChatGPT for language, validation and signposting before they ever contact human services. That's a testament to the accessibility and non‑judgmental nature of these systems, but it's also a stark reminder that we're deploying them into psychologically fraught contexts without clinical oversight.
For the race to AGI, the implication is that even sub‑AGI models are starting to reshape how people narrate their own experiences and how they interface with institutions like police and social services. As capabilities grow, pressure will mount to embed guardrails and referral pathways that align with public‑health goals, not just engagement metrics. Labs that can credibly demonstrate that their models reduce barriers to care without amplifying conspiratorial thinking or false memories will have a strategic edge with regulators and large healthcare buyers.