Media reports say Meta is building two new frontier AI models—Mango for image and video generation and Avocado as a next‑gen text and coding model—inside its Meta Superintelligence Labs unit. The systems are reportedly slated for launch in the first half of 2026 under chief AI officer Alexandr Wang.
If these reports are accurate, Meta is re‑entering the frontier‑model arms race with a much clearer product thesis. Mango squarely targets high‑fidelity image and video generation, an area where OpenAI’s Sora and Google’s Gemini 3 have been setting the pace, while Avocado aims to shore up Meta’s perceived weaknesses in reasoning and code compared with closed competitors. Bundled under “Meta Superintelligence Labs” and run by Alexandr Wang, this looks like a deliberate bet on tightly integrated, closed‑source systems rather than the mostly open Llama playbook.
Strategically, this matters because Meta is one of the few players with both the balance sheet and distribution to contest OpenAI, Google and potentially xAI at the very top end of the stack. A successful Mango would give Meta a native foundation for Reels‑style video creation, ads and creator tools, instead of relying on partnerships. Avocado, if it closes the gap in coding and agentic behaviour, would make it easier for Meta to power its own assistants across WhatsApp, Instagram and hardware. For Race to AGI readers, this is another sign that the frontier contest is consolidating into a small club of firms building multi‑modal “world models” with tightly coupled video, language and control.