On April 5, 2026, FutureGenNews reported on unverified leaks about an internal OpenAI model codenamed “Spud,” potentially to be released as GPT‑5.5 or an early GPT‑6. The leaks claim Spud is a natively omnimodal, mixture‑of‑experts model with trillions of parameters designed for autonomous, agentic workflows over long time horizons.
This article aggregates reporting from one news source. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis provides editorial context on the implications for AGI development.
If even half of the Spud rumors are accurate, OpenAI is preparing to move from chatbots toward something closer to a persistent, omnimodal operating system for knowledge work. A natively trained text‑image‑audio‑video model with agentic behaviour and long time horizons would mark a qualitative shift from GPT‑4‑class models that still behave like smart autocomplete with bolted‑on tools. It would also be a direct answer to Anthropic’s push into long‑context reasoning and Google’s Gemini 2.5 Pro trajectory, signalling that OpenAI is not ceding the “frontier” label.([futuregennews.com](https://futuregennews.com/gpt-5-5-spud-leaks-reveal-openais-leap-into-omnimodal-agentic-ai))
Strategically, a true omnimodal agent would accelerate automation in areas that today require messy glue code between separate models — think continuous research agents that read documents, watch videos, attend meetings, write code and trigger workflows with minimal human mediation. That is precisely the class of capability that many safety researchers worry blurs the line between powerful tool and proto‑autonomous actor. The reported reallocation of GPU resources away from Sora toward Spud, if confirmed, also reinforces that OpenAI is prioritising economically impactful cognition over consumer‑facing entertainment.
Because these are unverified leaks, they should be treated cautiously. But for competitors, the rational response is to assume OpenAI is aiming for a large jump in reasoning, autonomy and multimodal competence within the next product cycle. That effectively shortens perceived AGI timelines inside rival labs and investors' models, raising the pressure either to match the capability or to differentiate along safety, openness or domain specialisation.