On 6 May 2026, the UK Information Commissioner’s Office (ICO) published new guidance to help public authorities handle freedom-of-information (FOI) requests generated with AI tools. The guidance addresses the rising volume and complexity of AI-authored FOI requests, including those that misquote or misinterpret FOI law.
The ICO’s FOI guidance is a small but telling example of how generative AI is starting to stress-test administrative systems. Public bodies are reporting higher volumes of AI‑drafted FOI requests, many of which misquote legislation or require substantial clarification before they can be processed. ([ico.org.uk](https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2026/05/new-guidance-to-support-public-authorities-dealing-with-ai-generated-foi-requests/)) Rather than proposing new law, the ICO is giving practitioners practical language and process advice so they can filter, clarify, and respond without breaching their statutory duties.
In the AGI context, this is a glimpse of a broader challenge: as models become more capable, they will amplify citizen and activist capacity to probe, litigate, and contest decisions at scale. That can be healthy for accountability, but only if institutions have the tooling and staffing to cope. Otherwise, you risk a kind of bureaucratic denial‑of‑service in which AI‑generated queries swamp human caseworkers. The ICO’s stance—that AI‑generated requests should still be treated fairly, but managed pragmatically—could evolve into a template for how courts, regulators, and agencies handle machine‑authored inputs more generally, from complaints to planning submissions.
For AI developers, this is a reminder that alignment isn’t just about not saying harmful things; it’s also about how systems shape downstream workloads in public infrastructure. A future with AGI‑grade assistants mass‑filing complex legal or FOI‑style actions will require not just technical safeguards, but active collaboration between labs and regulators on rate‑limiting, provenance, and machine‑readable formatting.
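To make that last point concrete, here is a minimal sketch of what a machine-readable FOI submission envelope with provenance metadata and per-requester rate limiting might look like. Everything in it is hypothetical: the field names, the JSON layout, and the five-requests-per-day limit are illustrative assumptions, not anything the ICO guidance prescribes.

```python
import json
import time
from collections import defaultdict
from dataclasses import dataclass, field

# Hypothetical envelope: an AI-drafted FOI request carrying provenance
# metadata so the receiving authority can triage it before a caseworker
# reads it. None of these field names come from the ICO guidance.
@dataclass
class FOIRequestEnvelope:
    requester_id: str          # stable identifier for the human requester
    body: str                  # the request text itself
    ai_generated: bool         # provenance flag: drafted with an AI tool?
    model_name: str = ""       # which tool, if disclosed
    submitted_at: float = field(default_factory=time.time)

    def to_json(self) -> str:
        return json.dumps(self.__dict__)

# Hypothetical rate limiter: at most `max_per_window` requests per
# requester per window, to blunt a bureaucratic denial-of-service.
class RequestRateLimiter:
    def __init__(self, max_per_window: int = 5, window_seconds: float = 86400):
        self.max_per_window = max_per_window
        self.window_seconds = window_seconds
        self._history: dict[str, list[float]] = defaultdict(list)

    def allow(self, envelope: FOIRequestEnvelope) -> bool:
        now = time.time()
        # Drop timestamps that have aged out of the window.
        recent = [t for t in self._history[envelope.requester_id]
                  if now - t < self.window_seconds]
        if len(recent) >= self.max_per_window:
            self._history[envelope.requester_id] = recent
            return False
        recent.append(now)
        self._history[envelope.requester_id] = recent
        return True

limiter = RequestRateLimiter()
req = FOIRequestEnvelope(
    requester_id="requester-123",
    body="Please disclose all correspondence about X since January 2025.",
    ai_generated=True,
    model_name="example-assistant",
)
if limiter.allow(req):
    print("queued for caseworker:", req.to_json())
else:
    print("rate limit reached; ask requester to consolidate")
```

The design choice worth noting is that the provenance flag does not gate the request itself, which matches the ICO’s position that AI-generated requests should still be treated fairly; the throttle applies per requester, so legitimate one-off AI-drafted requests pass through while mass-filing is slowed.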


