Nvidia has launched a new family of open‑source large language models, Nemotron 3, that it says are faster and cheaper to run than its previous offerings while handling longer, multi‑step tasks. The smallest model, Nemotron 3 Nano, is being released immediately, with larger versions due in the first half of 2026, signaling Nvidia’s intent to deepen its role not just in AI hardware but in the model ecosystem itself.

While Nvidia is best known for supplying chips to closed‑source players like OpenAI, it has been quietly building a catalog of open models that partners such as Palantir are weaving into their own products. The move comes as open‑source models from Chinese labs proliferate, raising competitive pressure on US incumbents and making it harder for Nvidia to rely on its hardware moat alone. Strategically, Nemotron 3 is Nvidia’s bid to stay central to the AI stack even if developers gravitate toward open ecosystems rather than proprietary frontier models.
This article aggregates reporting from one news source. The TL;DR is AI-generated from the original reporting; Race to AGI's analysis adds editorial context on the implications for AGI development.