Groq announced on December 24, 2025, that it has entered a non-exclusive licensing agreement with Nvidia for its AI inference technology, and that founder Jonathan Ross and other leaders will join Nvidia. On December 25, 2025, Reuters and other outlets reported the deal as a major AI "acquihire," while reports citing CNBC suggest Nvidia is paying around $20 billion for Groq's assets, a figure neither company has confirmed.
This article aggregates reporting from 8 news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
This deal is a textbook example of how power in the AI stack is consolidating around a few incumbents without always triggering a formal M&A event. By licensing Groq's low-latency inference architecture and hiring its founding technical leadership, Nvidia is effectively absorbing one of the few startups with a differentiated hardware story for LLM inference, while allowing GroqCloud to continue as a nominally independent service. For anyone betting on alternative AI accelerators, the signal is clear: show real traction, and the default outcome is to be pulled into a major platform's orbit, one way or another.
Strategically, Nvidia is shoring up the part of its portfolio that was most vulnerable: high-throughput, low-cost inference at scale. Groq's SRAM-heavy LPU design is optimized for exactly the workloads that define today's agentic and reasoning-heavy AI applications, where latency and cost per token matter as much as raw FLOPs. At the same time, regulators will likely see this as another "acquihire by license" maneuver that transfers IP and talent without a formal acquisition. In the race to AGI, it reinforces Nvidia's gravitational pull over both compute and the people who know how to push it to the edge.